
Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

It has been a few weeks since President Biden issued an Executive Order on Artificial Intelligence (the Executive Order). For those who missed it, here is a short summary, followed by a few insights.

The Executive Order, issued on 30 October 2023, is intended to mitigate the risks posed by the technology whilst promoting “responsible innovation”.

The Executive Order is the most significant step taken by the US towards the regulation of AI. It follows voluntary commitments made by 15 companies (including Amazon, Google, Meta, Microsoft and OpenAI) earlier this year to manage the risks of AI. Alongside the announcement of the Executive Order, President Biden indicated that legislation is incoming, stating “we are going to need bipartisan legislation to do more on artificial intelligence”.

Overview

AI Safety and Security
  • Require developers of the most powerful AI systems to share their safety test results and other critical information with the US government. 
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. 
  • Develop standards for biological synthesis screening to protect against the risks of using AI to engineer dangerous biological materials.
  • Establish standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
Protecting Americans’ Privacy
  • A call to pass bipartisan data privacy legislation.
  • Prioritise federal support for accelerating the development and use of privacy-preserving techniques.
Advancing Equity and Civil Rights
  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
Standing Up for Consumers, Patients and Students
  • Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs.
  • Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools.
Supporting Workers
  • Develop principles and best practices to mitigate the harms and maximise the benefits of AI for workers by addressing job displacement; labour standards; workplace equity, health, and safety; and data collection.
  • Produce a report on AI’s potential labour-market impacts.
Promoting Innovation and Competition
  • Catalyse AI research through a pilot of the National AI Research Resource.
  • Work with small businesses to commercialise AI breakthroughs.
  • Draw on the expertise of highly skilled immigrants and non-immigrants by streamlining visa criteria, interviews and reviews.
Advancing American Leadership Abroad
  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges.
Ensuring Responsible and Effective Government Use of AI
  • Issue guidance for agencies’ use of AI.
  • Assist agencies in procuring AI products and services more effectively and cheaply.
  • Hire more AI professionals in the public sector.

Key points and analysis

  • Legislative Impact – There is a risk that the ambitions of the Executive Order do not translate into tangible progress. The White House can bring certain enforcement actions and direct various departments to develop guidelines, but it can only encourage independent agencies to comply with its recommendations. Further, the scope of the Executive Order is largely limited to regulating how the federal government uses AI, as President Biden is more constrained when it comes to the private sector and requires Congressional action to advance regulation in that regard. Finally, certain goals, such as the introduction of privacy legislation, have faced significant obstacles to realisation in the past and are at risk of continuing to stall going forwards.
  • Safety Test Results – One of the more significant aspects of the Executive Order is that companies developing AI models which pose a threat to national security, economic security or public health and safety must share their safety test results with the government before those models are released to the public. President Biden has relied on the Cold War-era Defense Production Act to introduce this mandate.
  • AI-generated Content – There is also a significant focus on watermarking AI-generated content so that the public can confirm the authenticity of government communications. President Biden appears particularly concerned by the risks associated with deepfakes, which can be used to commit fraud and spread misinformation.
  • Deadlines – The Executive Order contains a number of action points, with deadlines ranging from 90 to 365 days. The earliest deadlines relate to safety and security items.

Tags

artificial intelligence, digital transformation, digital disruption, it and digital, robotics, technology