
EU’s AI Act crosses the finishing line

Very late on Friday (8 December) evening, the EU Commission, Parliament and Council concluded their epic trilogue negotiation on the EU’s AI Act. The meeting started on Wednesday afternoon, with everyone breaking over Thursday night before resuming at 9am on Friday. It is reportedly the longest-ever final trilogue negotiation. As Bristows wrote last week, the stakes were high and there was immense pressure to get the negotiations finished before the weekend.

Although this is really only the end of the beginning, and there is a great deal left to do over the next two years to create the technical standards and the regulatory infrastructure that will underpin the Act, Friday’s announcement deserves recognition as an important moment in AI policy. 

You can read the EU Parliament’s press release from Friday evening here, whilst the Council’s is here. For the truly committed reader, the video of Friday’s midnight press conference is here.

On Friday, a political deal was reached on the major items that were still contentious or outstanding, but there is much still to do to finalise the text of the Act. As a result, the detail of what has been agreed, both the big-ticket items from this final session and the many smaller but still impactful provisions agreed over the preceding months, is not yet precisely known. This post covers the key points that were announced on Friday night, and we’ll provide further updates as details emerge over the coming weeks.

Prohibited AI Systems 

The final list of banned AI Systems is as follows: 

  • biometric categorisation systems that analyse and categorise people based on sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions (with a caveat for systems used for safety purposes, e.g. to monitor if a truck driver falls asleep);
  • social scoring based on social behaviour or personal characteristics;
  • predictive policing software used to evaluate an individual’s risk of committing future crimes;
  • AI systems that manipulate human behaviour in order to circumvent people’s free will; and 
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation). 

Military use of AI systems is out of scope of the Act. 

Remote Biometric Identification (RBI) – law enforcement exceptions

The longest-running disagreement between the Parliament and the Council has been whether there should be a complete ban on the use of AI systems for RBI of people in public spaces, or whether it should be permitted for law enforcement use cases in tightly defined circumstances. In the final phase of the negotiations, Parliament conceded that live use of RBI by law enforcement would be permitted, subject to prior judicial authorisation, for the following purposes: 

  • targeted searches for victims (abduction, trafficking, sexual exploitation);
  • prevention of a specific and present terrorist threat; or
  • finding or identifying a person suspected of having committed one of a list of specified crimes (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

There will also be a ban on the retrospective use of RBI (i.e. via analysis of CCTV and similar footage), but with slightly less strict exemptions for law enforcement use. 

Foundation models

The intense disagreements over foundation models in recent weeks settled on a compromise largely based on the Spanish presidency’s revised position tabled a couple of weeks ago. This involves distinguishing between general purpose AI (GPAI) models and GPAI systems. GPAI models pre-trained using more than 10²⁵ floating point operations are designated as having “systemic risk”. A standalone set of rules will apply to all GPAI systems and models, requiring summaries of information about the model and its pre-training data (including in relation to copyright) and the preparation of technical documentation. 
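For readers wanting a feel for the scale of that threshold, the sketch below applies the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens. Both the heuristic and the illustrative model sizes are assumptions for orientation only; they do not come from the Act itself.

```python
# Back-of-envelope estimate of training compute, using the common
# heuristic: total FLOPs ~= 6 * parameters * training tokens.
# The 10**25 threshold comes from the political agreement; the model
# sizes below are purely illustrative, not descriptions of real models.

SYSTEMIC_RISK_THRESHOLD = 10**25  # floating point operations

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D rule of thumb."""
    return 6 * parameters * tokens

illustrative_models = [
    ("7B parameters, 2T tokens", 7e9, 2e12),        # ~8.4e22 FLOPs
    ("70B parameters, 2T tokens", 70e9, 2e12),      # ~8.4e23 FLOPs
    ("1.8T parameters, 13T tokens", 1.8e12, 13e12), # ~1.4e26 FLOPs
]

for name, params, tokens in illustrative_models:
    flops = training_flops(params, tokens)
    status = "systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

On these (assumed) figures, only the largest frontier-scale training runs would cross the systemic-risk line, while most of today’s openly released models would fall below it.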

Models designated as having systemic risk will be subject to a more stringent regime, including having to conduct model evaluations, assess and mitigate systemic risks, carry out adversarial testing, report serious incidents to the Commission, ensure cybersecurity and report on their energy efficiency. Hopefully more detail on the agreed text will be disclosed in the coming days. 

High-risk use cases

The list of high-risk sectoral use cases has stayed relatively stable throughout the negotiations. Some of those added by Parliament have been accepted by the Council: those we know of are AI systems used to influence voting behaviour, and the public authority use cases of forecasting migration trends and of border control management and surveillance. It isn’t yet clear how many more made it through the negotiations, for example whether digital infrastructure has been added to the “management and operation of critical infrastructure” use case. The Parliament’s attempt to add the recommender systems of social media platforms designated as Very Large Online Platforms (VLOPs) under the Digital Services Act did not survive the negotiation process. 

High-risk filters 

There are now four exceptions (or “filters”) to the default rules for determining whether an AI system use case or product is high-risk. These are designed to screen out innocuous uses of AI systems in a “high-risk” context, as follows: 

  • the performance of a narrow procedural task, e.g. an AI model that transforms unstructured data into structured data or classifies incoming documents into categories;
  • the review or improvement of the output of a human activity, for example improving the quality of wording in a document;
  • use that is purely intended to analyse human decision-making patterns and flag potential inconsistencies or anomalies, e.g. reviewing grading by teachers; and
  • preparatory tasks that have a low impact in terms of risk, e.g. file-handling software.

Regulatory guidance will be provided to assist in interpreting when these filters can be relied on, which is welcome as this feels like a part of the Act that will need careful navigation in practice. 

Fundamental Rights Impact Assessment (FRIA)

One of the Parliament’s most impactful proposals was that deployers of high-risk AI Systems (other than in critical infrastructure use cases) should have to undertake a fundamental rights impact assessment before deployment, including a six-week consultation process. This has been narrowed down, and now only applies to public sector bodies like hospitals and schools, and also to banks and insurance companies. The addition of banking and insurance is surprising as these sectors barely feature in the list of high-risk sectors and use cases in Annex III of the Act. We’ll have to wait for more detail to emerge on what exactly a FRIA will entail in practice, including on whether deployers will have to include a consultation process within it. 

Research and innovation

It appears that the exemption for research and innovation has been extended to include testing in real world conditions, which is very helpful. Other provisions support SME developers, though we don’t know the precise scope of these yet. 

Coming into force

Some key sections of the Act have been brought forward to apply earlier than the two years originally proposed in the Commission’s draft. It has been reported that the requirements for high-risk AI systems, the rules relating to GPAI models, and some of the governance and enforcement provisions may apply after one year. This could be a very significant change, though again we’ll need to wait for the final text of the Act to fully assess the practical impact.

Next steps 

Finalisation of the text of the Act is likely to stretch into January at least. It will then need to be confirmed by the Council and the Parliament, before being formally prepared for adoption by the institutions and final publication. Final publication is likely to occur by early April 2024 at the latest. The Act will apply as law in stages between one and two years after publication. 

Related insights

Chris Holder, Vik Khurana and Charlie Hawes will be speaking at our upcoming masterclass series, which will provide insight into the latest market positions, how to reduce time-to-contract, and how to achieve robust agreements geared for success, with truly practical drafting hints and tips from our team of top-ranked experts sharing real-world examples. Topics will include, among others, leveraging AI and performance management. Find out more here.

For more information and insights - including briefing notes - from our team of leading AI experts, please visit Bristows' dedicated AI webpage.

Tags

ai regulation, artificial intelligence, robotics, technology