Where an AI system is classified as “high-risk”, the AI Act imposes significant requirements on the provider, both before it can place the system on the market and on an ongoing basis thereafter. These requirements mean that, in many ways, the AI Act amounts to a piece of product safety legislation. As such, organisations in sectors well-versed in placing safety-critical products on the market are better prepared than others to comply with the product safety aspects of the Act. One such sector is life sciences, where there are very close parallels between the product safety features of medical device legislation and those of the AI Act. Our specialist life sciences regulatory team regularly advises clients on these matters: risk classification, product liability, conformity assessments, CE-marking and post-market surveillance, all of which are mirrored in the AI Act.
Below, we explore a few of those parallels and offer some potential lessons from medical device legislation for those seeking to comply with the product safety features of the AI Act, in the life sciences sector and beyond.
Medical device law – quick background
In 2017, the EU adopted twin new regulations: the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR). These represented significant changes to the relatively well-established regulatory frameworks for medical devices and in-vitro diagnostic medical devices.
These new regulatory frameworks represented the biggest change in thirty years and imposed a number of more stringent requirements on “Manufacturers” (the entities with primary regulatory responsibility for a device). However, the new requirements were not limited to Manufacturers: obligations were also imposed on other economic operators involved in the design, development and supply of components of medical devices.
This caused several issues for organisations involved in the supply chain, many of which seem set to be repeated under the EU AI Act. In fact, the AI Act goes even further than the MDR's economic operator requirements, in that it also seeks to regulate entities that deploy AI systems and, in certain instances, suppliers of components to high-risk AI systems. This will result in significant additional compliance obligations throughout the supply chain.
Tight deadlines, extensions and now proposed amendments
While all of these changes were well-intentioned and improved the regulatory framework, their implementation caused enormous difficulties and led to the withdrawal of products from the market. These difficulties resulted in delays to the implementation of the new frameworks, extended transitional periods and, now, a proposal for a slew of amendments.
In our view, the main causes of the disruption associated with MDR and IVDR were:
- Unrealistic implementation timetables.
- A shortage of Notified Body capacity – especially as regards software and in vitro diagnostics.
- A lack of guidance and delays in issuing it. When some of the guidance was finally published, it was so ambiguous that it created difficulties of its own.
- Inconsistent interpretations and approaches adopted by different member states and different notified bodies.
- Overly optimistic timetables for building and deploying the central EUDAMED database.
Each of these looks likely to be repeated
Unfortunately, each of these missteps appears likely to be repeated with the adoption of the AI Act, particularly for “high-risk” AI systems, which are the primary focus of the Act. We can already see:
- A disastrous shortage of competence and capacity at Notified Bodies.
- Virtually none of the guidance necessary to implement the AI Act has been published.
- No new central regulator yet in place and, in many cases, no designated national competent authority.
This is particularly disconcerting given that the AI Act requires all stakeholders to get to grips with entirely new concepts such as bias management and governance.
Further, and frustratingly, there are a number of significant challenges in reconciling the AI Act with the requirements of the MDR and the IVDR. This matters because the two regulatory frameworks are intended to operate in an interconnected manner. By way of example, the language of the AI Act is inconsistent with the accepted conformity assessment terminology for existing products such as medical devices. Worse, there are now instances of serious, apparently unintentional, conflicts between the requirements of the AI Act and those of the MDR and IVDR, whereby conducting an authorised clinical study in accordance with the MDR or IVDR might constitute an offence under the AI Act.
To hear more from our experts on AI, visit our dedicated page and register now for our Tech Summit 2024!