
What is my role under the AI Act?

If your AI system might be “high-risk” under the AI Act, the next question to ask is: what role (or roles), and what associated responsibilities, does my organisation have under the Act? This is critical, because the Act’s obligations relating to high-risk AI systems are determined by the position an organisation occupies in the value chain for high-risk AI systems that the Act creates.

The concept of a value or supply chain of products, conceived as a series of economic operators occupying different tiers in the sequence by which a product proceeds from manufacturer to end customer, comes from the EU’s “New Legislative Framework” (NLF) product safety legislation. But the AI Act takes the idea further by placing obligations not just on entities that “provide” (and, to a far lesser extent, “distribute” or “import”) high-risk AI systems, but also on those that “deploy” them. These two roles, the Provider and the Deployer, are the two most important in the Act. 

Once you have determined your role, the Act sets out your obligations in relation to high-risk AI systems in an apparently tidy way. The obligations on Providers are listed in Article 16, and those for Importers, Distributors and Deployers are set out in Articles 23, 24 and 26 respectively. Much of the rest of the Act is focussed on the obligations of Providers of high-risk AI systems and on establishing a framework for enforcement of the Act.

In fact, the potential roles and responsibilities that can accrue to an organisation under the Act go beyond this orderly set of roles in the AI value chain. There is also another important actor hidden away in Article 25: the third-party supplier of components of a high-risk AI system, whose responsibilities are less clearly defined but seem likely to have a wide impact in practice.

Roles in the AI value chain

We summarise the main roles as follows:

  • Provider: An entity that develops (or commissions) an AI system or a general purpose AI model and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
  • Deployer: An entity using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
  • Distributor: An entity in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.
  • Importer: An entity located or established in the EU that places on the market an AI system that bears the name or trademark of an entity established outside the EU.
  • Authorised representative: An entity located or established in the EU that has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to perform the obligations and carry out the procedures established by the Act on the provider’s behalf.

Many organisations making use of AI will be Deployers

Organisations that use high-risk AI systems will be Deployers. This will be the most common role created by the Act, and we expect it to capture many businesses making use of AI. The obligations on Deployers in Article 26 are not trivial. What kind of technical and organisational measures will Deployers have to take to ensure that AI systems are used in accordance with the Provider’s instructions for use? What roles will be given over to the human oversight function required by the Act? How will a Deployer ensure that input data is relevant and sufficiently representative? The optimal outcome would be for a combination of guidance from the EU AI Office and harmonised technical standards to provide practical insights that help answer these questions.

Complex AI supply chain means other roles are harder to determine

The definitions of Provider, Distributor and Importer are all built on the fundamental NLF product safety law concept of the placing on the market of a product, along with the related terms of making available on the market and putting into service. These terms are all defined in the Act, and in many circumstances will be relatively straightforward to apply. 

However, given Bristows’ long experience in advising on these terms in the context of the EU Medical Device Regulation, especially as that legislation applies to Software as a Medical Device, we anticipate that applying them to AI systems will, in many scenarios, present considerable practical challenges. This is partly because the terms were originally defined with physical products in mind and the read-across to software is not necessarily intuitive, and partly because the access and distribution channels in the AI product ecosystem (e.g. API access, SaaS, closed versus open source) are already far more complex than when the Act was conceived and drafted, both technically and in terms of the underlying legal and contractual relationships.

Guidance on, and examples of, these roles will be needed

Surprisingly, there does not appear to be any specific guidance scheduled for publication on this topic by the AI Office. Article 96(1)(e) obliges the Commission to publish detailed guidance on the relationship between the Act and other EU legislation, including product safety legislation. At the very least, it would be helpful for that guidance to include a section on how to interpret placing on the market in the specific context of the AI Act, accompanied by a long list of detailed examples. Similar guidance exists in the form of the EU Commission’s Blue Guide, which is the authoritative reference manual used by practitioners to interpret placing on the market and ancillary terms in the context of the EU’s NLF product safety legislation.

However, the Blue Guide does not consider the specific difficulties associated with interpreting these terms in the context of software and AI, because it is primarily intended for use in relation to tangible products which are manufactured, warehoused, shipped and physically supplied/installed. The Blue Guide also does not address the Deployer role, because that role does not exist in other NLF product safety legislation. AI-specific guidance, which takes into account the intangibility of software and all of the different distribution models which are available for AI systems, would be of great value. 

Role of Provider is critical to understand

What the Act does usefully do is explain the circumstances in which a Deployer, Importer, Distributor or other third party can become a Provider of a high-risk AI system that is already circulating in the market. 

This is explained in Article 25, which we expect will become one of the most heavily cited (and perhaps contested) provisions of the Act. Given the compliance overhead that will fall on Providers of high-risk AI systems, no organisation will want to become one inadvertently. The fact that putting your name or trade mark on a high-risk AI system that is already on the market is enough for your organisation to be deemed the Provider of that AI system is a key takeaway here. 

Article 25 also introduces another role in the value chain: the supplier of an AI system, tools, services, components or processes that are used or integrated in a high-risk AI system. Because it is not neatly defined, and appears only in Article 25(4), this component supplier role has not attracted much attention or commentary, but our view is that it could catch a broad swathe of vendors who otherwise consider themselves out of scope of the Act.

The obligation is for a written contract to be used between Providers of high-risk AI systems and these suppliers to facilitate the Provider’s compliance with the Act. The AI Office is to produce a set of voluntary model contractual terms for this purpose. 

It is interesting to consider this in light of the so-called LLM orchestration stack that has emerged over the last 18 months in relation to foundation models, with a variety of highly specialised service providers (e.g. vector databases, open-source LLM platforms) now available to assist in developing an LLM-based product. Given how widely scoped the component supplier role is under Article 25(4), it is easy to see the AI Office’s voluntary model terms being very widely adopted if (as seems inevitable) a similar technology/vendor stack develops around high-risk AI systems, or at least around some of the more popular use cases.

Overlaps and closing thought

Article 25 raises the possibility of AI systems developed by a third party being integrated into a Provider’s high-risk AI system. The same is true of general purpose AI models under Article 54, which could be a component of an AI system (whether high-risk or not), or become a standalone high-risk AI system.

Similarly, the transparency provisions in Article 50 (relating to lifelike interactive AI, deepfake content, etc.) will apply to any AI system that has the relevant functionality, which could be a high-risk AI system and/or a general purpose AI model.

So, whilst the value chain of Provider, Importer, Distributor and Deployer of high-risk AI systems is still the core set of roles around which the Act is organised, these roles are not the whole story, and we are only beginning the journey of understanding how to interpret the obligations they carry under the AI Act.

To hear more from our experts on AI, visit our dedicated page here and register now for our Tech Summit 2024!

