AI presents many opportunities in the women’s health space, but it can also pose considerable risks. Anyone seeking to commercialise AI-enabled femtech products in the EU needs to be aware of Regulation (EU) 2024/1689 (the AI Act), which can impose onerous regulatory obligations. This article considers the demands imposed by the EU’s medical device regulations and the AI Act, and how these could impact femtech products.
The role of AI in women’s health
Femtech - the fast-growing space at the intersection of technology and women’s health - is undergoing an AI revolution. From drug development and digital therapeutics to clinical decision-making and personalised treatment, AI is a powerful tool that can help to close the women’s healthcare gap.
A particularly exciting area of growth is in AI-enabled medical devices. Examples include AI systems that perform mammography analysis to screen for breast cancer, and systems that can diagnose endometriosis from data such as transvaginal ultrasounds and MRI scans. As research into women’s health has historically been (and continues to be) underfunded and overlooked, this wave of innovation is not just welcome, but long overdue.
That said, the risks of AI are significant and can be particularly detrimental in the context of women’s health. There is scope, for example, for the exacerbation of health inequalities through biased training data, privacy risks stemming from invasive data mining and sharing, and the perpetuation of health misinformation. It is therefore essential that AI in healthcare is developed and used responsibly, with proper governance, oversight, accountability, and public education.
The impact of the EU AI Act
To mitigate some of the risks of AI, the EU enacted the AI Act, which aims to support the development and deployment of trustworthy AI in the EU. The AI Act is the world’s first comprehensive AI legislation. It has extraterritorial scope, applying to organisations whose “AI models and systems” are commercialised in the EU, or whose AI model’s or system’s output is used in the EU, even if the organisation is established outside the EU.
The AI Act imposes a range of obligations on various stakeholders, including “providers,”1 “deployers”2 and other “operators” (including authorised representatives, importers and distributors). The particular obligations depend on the role the person plays and the category of risk posed by the AI model or system. Notably, unlike the medical devices regulations, the AI Act also regulates those who deploy AI systems, not only those who place them on the market. Overall, the obligations are intended to protect fundamental rights, promote safety and security, boost transparency and protect intellectual property.
These are worthy objectives, but the measures the AI Act invokes can be onerous. As a result, the AI Act may present a significant regulatory hurdle for stakeholders. For companies operating in the women’s health space - which already face a number of challenges and may have limited compliance resources - these regulatory demands can be especially burdensome. There is an added layer of complexity for those with medical devices, as the two regulatory frameworks interact.
Interaction between medical devices and AI regulatory frameworks
In the EU, medical devices are regulated by the Medical Device Regulation (EU) 2017/745 (MDR).3 Under the MDR, devices are assigned risk classifications. For the lowest risk devices (Class I medical devices), the manufacturer4 can self-certify compliance with the MDR before the product is placed on the market or put into service in the EU. Higher risk devices (Class IIa and above) must instead undergo a third party conformity assessment carried out by a notified body. Notified body conformity assessments require a detailed review of the manufacturer’s quality management system, technical documentation, systems and procedures, and the process will often take more than a year to complete. Manufacturers must also grapple with ongoing burdens such as vigilance and post-market surveillance.
Like the MDR, the AI Act distinguishes between AI systems that pose different levels of risk. It imposes onerous obligations on “high risk” AI systems, including in relation to accuracy, transparency, risk management, data quality and governance, and human oversight (amongst others). Although there is some overlap between the MDR and AI Act requirements, many are new AI-specific obligations, which pose a significant additional regulatory burden and increase the complexity and cost of compliance for stakeholders.
Notably, the risk classification of an AI system that is itself a medical device, or is included in one, is linked to the device’s classification. Under the AI Act, AI systems are classified as “high risk” systems if:
(a) the AI system is a safety component of a medical device or the AI system itself is a medical device; and
(b) the medical device is required to undergo a conformity assessment under the MDR.
Therefore, low risk medical devices (i.e., Class I medical devices) that are self-certified will not be “high risk” AI systems, whereas any device that requires a notified body to perform its conformity assessment will be a “high risk” AI system, and so will be subject to the additional AI Act requirements.
Unfortunately for those wishing to avoid the “high risk” AI system requirements, there are relatively few Class I devices under the MDR. The majority of medical devices that are, or that incorporate as a safety component, an AI system will therefore qualify as “high risk” AI systems.
One notable example of a Class I device is software intended to support conception by calculating the user’s fertility status based on a validated statistical algorithm.5 If this kind of software medical device is also an AI system, it would not be classed as a “high risk” AI system. However, the manufacturers of these devices would need to consider carefully any product developments that add functionality. For example, if the device could also be used as a means of contraception, it would instead be a Class IIb medical device, which would in turn mean the AI system becomes a “high risk” system.
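For illustration, the classification rule described above can be expressed as a short decision function. The Python sketch below is a simplified, hypothetical model written for this article: the names (MDRClass, is_high_risk_ai_system) are invented, it deliberately ignores the special Class I sub-categories (sterile, measuring and reusable surgical devices, which do involve limited notified body review), and it is no substitute for a proper legal classification exercise.

```python
from enum import Enum

class MDRClass(Enum):
    """MDR risk classes (simplified)."""
    CLASS_I = "Class I"
    CLASS_IIA = "Class IIa"
    CLASS_IIB = "Class IIb"
    CLASS_III = "Class III"

def requires_notified_body(mdr_class: MDRClass) -> bool:
    # Simplification: a plain Class I device self-certifies; Class IIa
    # and above need a notified body conformity assessment. (Sterile,
    # measuring and reusable surgical Class I devices are ignored here.)
    return mdr_class is not MDRClass.CLASS_I

def is_high_risk_ai_system(device_or_safety_component: bool,
                           mdr_class: MDRClass) -> bool:
    """High risk if (a) the AI system is a medical device or a safety
    component of one, and (b) that device requires a notified body
    conformity assessment under the MDR."""
    return device_or_safety_component and requires_notified_body(mdr_class)

# Hypothetical fertility-status software intended only to support
# conception: Class I, so not a "high risk" AI system.
assert is_high_risk_ai_system(True, MDRClass.CLASS_I) is False

# The same software marketed as a means of contraception: Class IIb,
# so the AI system becomes "high risk".
assert is_high_risk_ai_system(True, MDRClass.CLASS_IIB) is True
```

The point the sketch makes is that the AI Act does not run a separate risk assessment for these products: the MDR classification, and in particular whether a notified body must be involved, does the work.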
Therefore, whilst AI has the potential to provide tremendous benefits for femtech, it also triggers additional regulatory complexity that can be time-consuming and costly to navigate. Getting compliance right, however, is essential to maintain consumer trust, avoid regulatory penalties, and pave the way for long-term success and viability.
Other regulatory considerations
Of course, all of this also takes place against a backdrop of applicable horizontal legislation, such as the GDPR and EU Data Act, which present a significant compliance burden in their own right.