Executive Order on AI: White House Should Follow Board’s Guidance
By Larry Clinton, Contributing Author / October 21, 2023
Larry Clinton is President of the Internet Security Alliance (ISA). The ISA is a multi-sector trade association that focuses on thought leadership, policy advocacy, and developing best practices for cyber security. Mr. Clinton holds a certification in Cyber Risk Management for Corporate Boards from Carnegie Mellon University. He is on the faculty of the Wharton School, where he teaches a graduate Executive Education course in cyber security.
The National Association of Corporate Directors has twice named Mr. Clinton one of the 100 most influential people in the field of corporate governance. He is a two-term Chair of the IT Sector Coordinating Council and serves on the Cybersecurity Advisory Board for the Center for Audit Quality and the Cyber Advisory Board for the Better Business Bureau. He is widely published and has been a featured spokesman in virtually all major media outlets, from the WSJ, USA Today, Fox News, NBC, CBS, the NYT, PBS Morning Edition, and CNN to MTV in India. He testifies often before Congress and has briefed industry and governments worldwide, including NATO and the OAS. ISA was also the only trade association to be part of the official cyber security briefing for the Republican National Convention in Cleveland.
ISA recently published the Cyber Social Contract (Vol. 3), which outlines 106 recommendations for the President and Congress. The previous editions of the ISA Social Contract were endorsed by the House GOP Task Force on Cyber Security and were the basis for President Obama's Executive Order 13636 on Cyber Security. Mr. Clinton is the industry co-chair (DHS is the government co-chair) of the Policy Leadership Working Group on Cyber Security Collective Defense featured at the National Cyber Security Summit in New York in July.
He literally "wrote the book": the Cyber Risk Handbook for corporate boards, the only private sector publication endorsed by both DHS and DOJ. PwC has independently evaluated the Cyber Risk Handbook and found it substantially changed how corporate directors address cyber risk management, leading to higher budgets, better risk management, closer alignment of cyber security with business goals, and a stronger culture of security. In 2017 ISA adapted the Handbook for the UK and Germany. As in the US, the German edition has been endorsed by the German government. ISA is now working with the OAS on a Latin American version of the handbook, as well as editions for India and Japan, in partnership with industry groups.
Anticipation for the Executive Order on AI
There is tremendous anticipation regarding the imminent release of a sweeping Executive Order (EO) on the use of Artificial Intelligence from the Biden White House. Although the EO holds potentially game-changing reach, it needs to be understood in context: the government is largely playing catch-up in developing policy with respect to AI.
The Role of Government vs. Private Sector
This is not a criticism of the government. Actually, it’s a good thing. In a dynamic market economy such as we have in the United States, it is the appropriate role of the private sector – which is far larger and better resourced than the government – to take the lead with respect to technical innovation and management. Moreover, as we have argued extensively in our recent book Fixing American Cybersecurity, the entrepreneurial, incentive-based, and innovative nature of the US economy is one of our primary advantages over authoritarian states and their government-controlled economies as we face the challenges of the digital age.
Government’s Role in AI Oversight
As the White House, and government writ large, weigh in on AI (and eventually quantum) issues, it is critical for the government to understand that its role is not to "manage" these technologies. The government's role in the public sector is equivalent to the role of a corporate board in the private sector. That is, the role of the board/government is not management; it is the equally important role of oversight.
Learning from the Private Sector
The reality is that many, perhaps most (maybe all), leading private sector organizations have been grappling with both the practicalities and the ethical implications of AI for several years. Given the stunning dynamism of AI's growth, it would be especially wise in this case for the government to abandon its traditional "not invented here" bias and seek to build on top of (not from the ground up) the considered work on AI oversight that already exists in the private sector.
Starting Point: Principles and Toolkits
An obvious place for the government to start is with the Principles and Toolkits published earlier this year by the National Association of Corporate Directors, in partnership with the ISA, in the fourth edition of their Cyber Risk Oversight Handbook. The government itself contributed to the handbook through DHS (CISA), the FBI, and the US Secret Service. There are now multiple adapted versions of the handbook, in five languages on four continents, developed through coalitions including the European Conference of Director Associations, the German Federal Office for Information Security (BSI), the Japanese Federation of Businesses, the Organization of American States, and others. Moreover, these handbooks have been independently assessed by organizations including PwC, MIT, and the World Economic Forum and found to generate positive cybersecurity impacts.
The latest edition of the handbook, published this past March, contains two sections specifically devoted to the issues that organizations need to focus on as they consider their use of AI. This material ought to be the starting point for the government’s much-needed efforts to develop a coherent policy for the oversight of artificial intelligence.
Questions for Organizations Evaluating How to Use AI
What are the specific goals the organization is seeking to achieve in deploying the AI system?
What is the plan to build and deploy the AI or ML application responsibly?
What type of system is the organization using: process automation, cognitive insight, cognitive engagement, or other? Does management understand how this system works?
What are the economic benefits of the chosen system?
What are the estimated costs of not implementing the system?
Are there potential alternatives to the AI or ML system in question?
How easy will it be for an adversary to attack the system based on its technical characteristics?
What is the organization’s strategy to validate data and collection practices?
How will the organization prevent inaccuracies that may exist in the data set?
What will be the damage from an attack on the system, including the likelihood and ramifications of the attack?
How frequently will the organization update its data policies?
What is the organization’s response plan for cyber-attacks on these systems?
What is the organization’s plan to audit the AI system?
Should the organization create a team to audit the AI or ML system?
Should the organization build an educational program for the staff to learn about the use and risks of AI and ML in general?