
Top AI firms sign up to White House voluntary commitments

Last Friday (21 July 2023), the White House announced that it had secured eight voluntary commitments on AI from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The eight commitments are:

  1. Using watermarking on audio and visual content to help identify content generated by AI.
  2. Allowing independent experts to try to push models into bad behaviour (i.e. “red-teaming”).
  3. Sharing trust and safety information with the government and other companies.
  4. Investing in cybersecurity measures.
  5. Encouraging third parties to uncover security vulnerabilities.
  6. Reporting societal risks such as inappropriate uses and bias.
  7. Prioritising research on AI’s societal risks.
  8. Using cutting-edge AI systems, known as frontier models, to solve society's greatest problems.

Much of the commentary has focused on their voluntary nature: because they are not enforceable, some say they are "meaningless". But how often does any group of industry leaders come together to agree voluntary regulatory steps?

They also signal the forthcoming regulatory architecture for AI in the US. An executive order on AI is reportedly to be signed soon, while Democrats put forward legislative proposals. These commitments are therefore likely to set the standard for future, harder law and regulation.

As they're focused on safety and transparency, they don't address every significant issue raised by AI, including the IP, commercial risk and liability questions.

It's also interesting that they apply to these leading companies but not to the emerging wave of smaller, domain-specific AI players, never mind the open-source AI community (can that even be regulated directly?).

Find out more about our capabilities in the AI space here.

As the White House announcement puts it: "These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI."

Tags

artificial intelligence, it and digital, technology, robotics, digital disruption