Last Friday (21 July 2023), the White House announced that it had secured eight voluntary commitments on AI from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The eight commitments are:
- Using watermarking on audio and visual content to help identify content generated by AI.
- Allowing independent experts to try to push models into bad behaviour (known as “red-teaming”).
- Sharing trust and safety information with the government and other companies.
- Investing in cybersecurity measures.
- Encouraging third parties to uncover security vulnerabilities.
- Reporting societal risks such as inappropriate uses and bias.
- Prioritising research on AI’s societal risks.
- Using cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
Much of the commentary has focused on the commitments' voluntary nature: because they are not enforceable, critics say, they are "meaningless". But how often does any group of industry leaders come together to agree voluntary regulatory steps?
They also signal the forthcoming regulatory architecture for AI in the US. We hear an executive order on AI may be signed soon, while the Democrats are submitting legislative proposals, so these commitments are likely to set the standard for future, harder law and regulation.
As they focus on safety and transparency, the commitments don't address every significant issue raised by AI, including IP, commercial risk and liability.
It's also interesting that they apply to these leading companies but not to the emerging wave of smaller, domain-specific AI players, never mind the Open Source AI community (can that even be regulated directly?).
Find out more about our capabilities in the AI space here.