
No Algorithmic Impact Assessment? No NHS data!

According to a press release from the Department of Health and Social Care, NHS England will pilot a scheme requiring AI developers to complete a so-called "Algorithmic Impact Assessment" (AIA) before they can access NHS patient data. Specifically, the need for an AIA will be incorporated into the data access process for the National Covid-19 Chest Imaging Database and the proposed National Medical Imaging Platform.

The purpose of an AIA is to encourage AI developers to consider the risk of their AI systems making biased decisions, and to take steps to eliminate or mitigate those risks early in development. Bias in the design and development of medical devices can have deadly consequences: it emerged last year that people of colour infected with COVID-19 suffered worse health outcomes because pulse oximeters had not been validated on a sufficiently diverse range of skin tones.

AI systems which deploy machine learning techniques are perhaps particularly susceptible to having bias "baked in" early in their development, because it can be difficult to go back and adjust how a system works once it has been trained. A process such as an AIA, which addresses bias from the outset, is therefore valuable.
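To make this concrete (an illustration of ours, not part of the pilot scheme), one early-stage check a developer might run is a simple audit of a trained model's performance across demographic subgroups before release. The sketch below assumes scikit-learn, a pandas DataFrame of held-out test results, and hypothetical column names such as "skin_tone".

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def subgroup_accuracy(df: pd.DataFrame, y_true: str, y_pred: str, group: str) -> pd.Series:
    """Compute accuracy within each subgroup; large gaps between groups
    are a signal that bias may have been baked in during training."""
    return df.groupby(group).apply(
        lambda g: accuracy_score(g[y_true], g[y_pred])
    )

# Hypothetical usage: test_df holds the ground-truth label, the model's
# prediction and a demographic attribute for each patient in a test set.
# gaps = subgroup_accuracy(test_df, y_true="diagnosis",
#                          y_pred="prediction", group="skin_tone")
# print(gaps)  # markedly lower accuracy for any subgroup warrants investigation
```

An audit like this is far easier to act on before a system is trained and validated than after, which is the point the AIA process is designed to institutionalise.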

Based on guidance published by the Ada Lovelace Institute to accompany the pilot scheme, completing an AIA will be no simple form-filling exercise. AI developers will complete the initial impact assessment on their own, but will then join a participatory workshop run by the NHS AI Lab, which will provide support and prompt them to refine the assessment. Developers will then continue to update it as their AI system progresses through development. As a result, the AIA will be a "living document", requiring AI developers to continually consider the risk of bias being introduced into their systems.

It's impressive that the NHS is willing to take such a hands-on and practical approach to supporting AI developers with these kinds of difficult ethical issues.  

An important point to note is that completing an AIA will be additional to, not a substitute for, every other part of the AI development process. The Ada Lovelace Institute makes clear that AI developers will still need to complete a separate Data Protection Impact Assessment, and will still need to ensure that their systems comply with any applicable medical device safety requirements.

The AIA will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.

