
Navigating the EU AI Act: what you need to know

The EU AI Act is a game-changing AI regulatory framework that will be significant for organisations inside and outside the AI industry. By now, many organisations will be implementing AI governance frameworks to help them comply with the AI Act, and many lawyers will have received training on the Act, with some now leading their organisation’s compliance efforts.

At Bristows we have been advising clients on a range of issues relating to the AI Act: how the Act classifies their AI products and services, how to design governance frameworks that map the roles and responsibilities needed, and how they can work and contract with third-party AI providers, all with effective risk management in mind.

A common question we get is, “what do I need to know about the AI Act right now?” Essentially, what are the core components of the Act, how might it affect my organisation, and what are some useful ways to better understand my role in advising on AI?

So, in this article series, to be released over the next couple of weeks, our experts will unpack what we think organisations across all sectors should understand about this significant regulatory change to inform their compliance strategy.

  • Is my AI “high-risk” under the AI Act? Much of the Act is focused on defining and regulating “high-risk” AI systems and use cases, so this is crucial to understand given the requirements that flow from the answer. 
  • What is my role under the AI Act? The Act does not only regulate ‘Big Tech’: all manner of organisations developing, building upon and using AI are potentially caught.
  • What is a good starting point for an AI governance framework? Lessons from a recent significant regulatory change: GDPR and data protection. 
  • Product safety rules under the AI Act. This significant aspect of the AI Act has close parallels with existing product safety laws, so we look at lessons from medical device regulation. 
  • Intellectual property and the AI Act. IP is one of the main issues for everyone involved in the burgeoning AI ecosystem, so we look at how the Act deals with copyright issues. 
  • Employment and HR under the AI Act. As many internal AI use cases will affect employees, we look at what employers need to know about the Act.

Is my AI “high-risk” under the AI Act?

The primary purpose of the AI Act is to prevent risks caused by “high-risk” AI systems. Yes, the Act does other things. It bans some AI use cases. It imposes transparency measures on deceptively realistic AI. It has a standalone section for foundation models, with additional safety rules designed for future generations of increasingly powerful models. But most of the Act is a legislative framework for the regulation of AI systems that it classifies as high-risk. 

So if your AI system is within scope of the Act, how do you figure out whether it is high-risk or not?

The starting point is to understand that the Act conceptualises “high-risk” in two ways: harm to the health and safety of individuals, and harm to the fundamental rights and freedoms of individuals as enshrined in the EU’s Charter of Fundamental Rights.

These are two very different types of risk. The potential risks of physical harm to people from AI-powered products such as medical devices, toys and machinery going awry are obvious. The risks to rights and freedoms are perhaps less obvious, but here the focus is on AI systems that influence decisions that may impinge on these rights, particularly in the public sector: for example, the right to asylum, the right not to be discriminated against and the right to education. 

Keeping the distinction in mind between physical harm and harms to fundamental human rights will help you navigate the Act’s rules on high-risk AI.

AI risk classifications

  • Prohibited AI practices (e.g. social scoring): A short list of specific AI practices is banned outright
  • High-risk (e.g. recruitment, medical devices): AI systems for uses classified as high-risk are permitted, subject to mandatory technical and transparency requirements and a conformity assessment regime
  • Simulacra & synthetic content: AI systems that simulate people or that create deceptively simulated content are subject to separate transparency requirements
  • GPAIs: General Purpose AI systems - transparency and information provisions, with additional rules for GPAIs with systemic risk
  • All other AI (out of scope): All other AI systems are permitted without any restrictions under the Regulation

How does the high-risk categorisation work? 

The Act creates two broad categories of high-risk AI: high-risk products and high-risk use cases. As a rule of thumb, the risks relating to products are predominantly health and safety risks, and the risks relating to use cases are predominantly risks relating to fundamental rights. The Act formulates each of these in a different way.

High-risk products

The Act deems AI systems to be high-risk products by reference to a list of EU product safety laws set out in Annex I. The list is split into two sections: 

  • in the first section are product safety laws of a range of products, notably medical devices, machinery, toys and radio equipment; 
  • the second section lists product safety laws relating to forms of transport, mostly planes, trains and automobiles. 

The difference between these two sections is crucial. Almost none of the Act applies to the second list covering planes, trains and automobiles, other than some minor provisions intended to ensure consistency with the technical requirements of the Act. 

The laws in the first section of Annex I are what you should focus on. AI systems in products covered by these laws are classified as high-risk if they meet both of the following criteria: 

  • that the AI system is intended to be a safety component of a product, or is itself a product; and 
  • that the product is required under the relevant law to undergo a third-party conformity assessment prior to it being placed on the market or put into service. 

The fact that both criteria must apply is important. Under the laws in question, only a subset of products have to undergo a conformity assessment by a third party. In many cases, the manufacturer is permitted to perform the conformity assessment themselves. If you are developing or supplying these products already, you will know which products have to undergo a conformity assessment by a third party, and which can be self-assessed. We explore the more difficult question of how well the Act integrates with these Annex I laws, such as the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation, in this article and in our next article, ‘What is my role under the AI Act?’, given the Act’s close parallels with, and the lessons that can be learned from, the world of medical devices.
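For readers who find it helpful to see the test expressed as logic, here is a minimal illustrative sketch in Python. The function and parameter names are our own hypothetical labels rather than terms from the Act, and the sketch is a simplification for orientation, not legal advice: an Annex I product-related AI system is treated as high-risk only where both limbs are satisfied.

```python
# Hypothetical sketch of the two-pronged Annex I test described above.
# Names are illustrative; they are not taken from the text of the Act.

def is_high_risk_annex_i(
    is_safety_component_or_product: bool,
    requires_third_party_conformity_assessment: bool,
) -> bool:
    """Both criteria must be met for an Annex I product-related AI system
    to be classified as high-risk (a simplification of the test)."""
    return (
        is_safety_component_or_product
        and requires_third_party_conformity_assessment
    )

# Example: an AI safety component in a product that the manufacturer may
# self-assess would fail the second limb and so fall outside "high-risk".
print(is_high_risk_annex_i(True, False))  # False
print(is_high_risk_annex_i(True, True))   # True
```

In practice, the second limb turns on the conformity assessment route under the relevant Annex I law, which is why existing product compliance teams are usually well placed to answer it.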

High-risk use cases

The Act takes a different approach to classifying high-risk use cases. Rather than cross-referring to a list of laws, it refers to Annex III, which describes AI systems in specified use cases within specified sectors as high-risk. For example, “critical infrastructure” is a sector, and AI systems used as safety components in the supply of water, gas, heating or electricity are automatically deemed high-risk use cases. To take another example, in the “employment” sector, AI systems used to analyse and filter job applications are classified as high-risk.

High-risk use cases under Annex III

  • Biometrics:  
    • Remote biometric identification
    • Biometric categorisation based on protected characteristics
    • Emotion recognition
  • Education:
    • Determining access to educational institutions 
    • Assessments and/or admission tests
    • Determining level of education provided
    • Cheat detection
  • Employment:
    • Recruitment or selection
    • Promotion, task allocation and termination
    • Evaluating performance and behaviour
  • Essential services:
    • Evaluating eligibility for state benefits and services
    • Credit scoring
    • Risk assessment for life and health insurance
  • Critical Infrastructure:
    • Safety component of system used in critical digital infrastructure, road traffic or supply of water, gas, heating and electricity
  • Law Enforcement:
    • Predicting likelihood of person being a victim, assessing evidence, polygraphs and similar
    • Assessing re-offending risk
    • Profiling for crime-related analytics
  • Administration of Justice & Democracy:
    • Assisting a judicial authority in research and application of law
    • Influencing outcome of election or voting behaviour
  • Migration & Border control:
    • Verification of travel documents; examination of applications for asylum, visa and residence permits
    • Polygraphs and similar for risk-assessment, including a security or health risk

Most of these are in the public sector, but not all are. 

The basis on which use cases have been included or excluded is not necessarily intuitive. Emotion recognition systems are included in the biometrics sector. Credit scoring is deemed an essential private service but, along with risk assessment for life and health insurance, is one of only two financial services use cases included. The list of employment and HR-related use cases is surprisingly long. The bottom line: if you think your sector and/or use case might be in scope, you’ll need to read the relevant wording of Annex III carefully to discern whether your AI system will be caught. 

There is also a set of so-called “filters” that were added to the Act at a late stage, designed to ensure that innocuous deployments of AI systems in the use cases in Annex III are not categorised as high-risk. This means that AI systems intended for the following tasks will not be considered high-risk under Annex III: (a) narrow procedural tasks; (b) improving the result of a previously completed human activity; (c) analysing human decision-making patterns; and (d) performing preparatory tasks to an assessment relevant for the purposes of a use case. You may notice that the wording of these filters is itself not immediately clear. What is a “narrow” procedural task, as opposed to a broad one? What does (d) actually mean? It is easy to see the filters themselves becoming contested.
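As a rough mental model only, the Annex III test plus the filters might be sketched as follows. This is an illustrative simplification in Python using our own hypothetical labels for the sectors and filters, not the Act’s wording, and any real assessment turns on the precise text of Annex III.

```python
# Illustrative sketch only: a simplified model of the Annex III test and the
# four "filters", using hypothetical labels rather than the Act's own text.

ANNEX_III_USE_CASES = {
    "biometrics", "education", "employment", "essential_services",
    "critical_infrastructure", "law_enforcement",
    "justice_and_democracy", "migration_and_border_control",
}

FILTERS = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "analyses_decision_making_patterns",
    "preparatory_task_for_assessment",
}

def is_high_risk_annex_iii(use_case: str, applicable_filters: set[str]) -> bool:
    """High-risk if the use case is listed in Annex III and none of the
    filters take it outside the category (a deliberate simplification)."""
    if use_case not in ANNEX_III_USE_CASES:
        return False
    return not (applicable_filters & FILTERS)

# Example: a recruitment screening tool with no applicable filter.
print(is_high_risk_annex_iii("employment", set()))                       # True
# Example: the same sector, but performing only a narrow procedural task.
print(is_high_risk_annex_iii("employment", {"narrow_procedural_task"}))  # False
```

In practice each Annex III entry is far more granular than a sector label, so a real assessment would map the system’s intended purpose against the specific wording rather than a broad category.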

The good news is that the EU Commission has an obligation under the Act to publish guidelines to assist in interpreting high-risk for the purposes of Annex III. The Act states that these guidelines must include a comprehensive list of practical examples of use cases that are high-risk and not high-risk. The less good news is that the deadline for publishing the guidelines will be in March 2026 (assuming the Act comes into force in August 2024). This will be just six months before most of the Act will apply as law. Whilst this may be too late for developers of AI systems that do not map neatly onto Annex III, it is possible that the Commission’s new AI Office will publish the guidelines earlier, or at least provide informal guidance in webinars and other forums in the meantime, perhaps in the context of the AI Pact initiative.

Conclusion

The rules around high-risk AI products and use cases are complex and rarely intuitive, but understanding the principles behind the Act’s approach to high-risk AI should help in applying them to your AI system.

To hear more from our experts on AI, visit our dedicated page here and register now for our Tech Summit 2024!

