It has been a few weeks since President Biden issued an Executive Order on Artificial Intelligence (the Executive Order). For those who missed it, here is a short summary with a few insights below.
The Executive Order, issued on 30 October 2023, is intended to mitigate the risks posed by the technology whilst promoting “responsible innovation”.
The Executive Order is the most significant step taken by the US towards the regulation of AI. It follows voluntary commitments to manage the risks of AI made by 15 companies (including Amazon, Google, Meta, Microsoft and OpenAI) earlier this year. Alongside the announcement of the Executive Order, President Biden indicated that legislation is incoming, stating “we are going to need bipartisan legislation to do more on artificial intelligence”.
Overview
The Executive Order addresses eight areas:

- AI Safety and Security
- Protecting Americans’ Privacy
- Advancing Equity and Civil Rights
- Standing Up for Consumers, Patients and Students
- Supporting Workers
- Promoting Innovation and Competition
- Advancing American Leadership Abroad
- Ensuring Responsible and Effective Government Use of AI
Key points and analysis
- Legislative Impact – There is a risk that the ambitions of the Executive Order do not materialise into tangible progress. The White House has the power to bring certain enforcement actions and direct various departments to develop guidelines, but it can only encourage other independent agencies to comply with its recommendations. Further, the scope of the Executive Order is generally limited to regulating how the federal government uses AI; President Biden is more constrained when it comes to the private sector and requires Congressional action to advance regulation in that regard. Finally, certain goals, such as the introduction of privacy legislation, have faced significant obstacles to realisation in the past and risk continuing to stall.
- Safety Test Results – One of the more significant aspects of the Executive Order is that companies developing AI models which pose a threat to national security, economic security or health and safety must share their safety test results with the government before those models are released to the public. President Biden has relied on the Cold War-era Defense Production Act to introduce this mandate.
- AI-generated Content – There is also a significant focus on watermarking AI-generated content so that the public can confirm the authenticity of government communications. President Biden appears concerned about the risks associated with deepfakes, which can be used to commit fraud and spread misinformation.
- Deadlines – There are a number of action points contained within the Executive Order, with deadlines ranging from 90 to 365 days. The earliest deadlines relate to safety and security items.
Further Reading
- The White House, ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ (30 October 2023)
- The White House, ‘FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’ (30 October 2023)
- Valentin Baltadzhiev, ‘Biden’s AI Executive Order – All the Deadlines’ (1 November 2023)
- Stefania Palma and George Hammond, ‘Joe Biden moves to compel tech groups to share AI safety test results’ (The Financial Times, 30 October 2023)
- Cecilia Kang and David E Sanger, ‘Biden Issues Executive Order to Create AI Safeguards’ (The New York Times, 30 October 2023)