
AI: cure for or cause of discriminatory outcomes in healthcare?

It's heartening to read this Nature Medicine report outlining how AI systems can reduce healthcare inequalities by avoiding the unconscious bias trap that healthcare professionals (HCPs) can sometimes fall into. These evolving systems have the potential to revolutionise the way whole communities experience interactions with the healthcare sector.

On the other hand, it must not be forgotten that AI systems are built by humans, using training datasets compiled by humans, and can therefore encode the same biases and lead to discriminatory outcomes (for more detail, see my recent article in Bristows' Biotech Review on the use of AI in healthcare research). The ICO's AI Auditing Framework Guidance goes into detail about how to mitigate the risk of discrimination in AI models, in order to comply with the GDPR principle of fairness, particularly where imbalanced training data has been used or the dataset reflects past discrimination.
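By way of illustration only (this sketch is not drawn from the ICO guidance itself): one common mitigation for imbalanced training data is inverse-frequency reweighting, so that an under-represented group is not drowned out when a model is trained. A minimal Python sketch, using entirely made-up toy labels:

```python
import numpy as np

# Hypothetical toy labels: 1 = positive diagnosis, 0 = negative.
# The minority class is deliberately under-represented, mirroring
# the kind of imbalanced training data the guidance warns about.
y = np.array([0] * 90 + [1] * 10)

# Inverse-frequency weights: the rarer a class, the larger its
# weight, so the majority class does not dominate training.
classes, counts = np.unique(y, return_counts=True)
weights = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
sample_weights = np.array([weights[label] for label in y])

print(weights)  # {0: ~0.56, 1: 5.0}
```

Reweighting is only one of several techniques (alongside resampling and fairness-aware training), and none of them removes the need for the governance and documentation steps the guidance describes.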

Conclusion? AI can be a huge force for good. With the right regulation in place and a data protection by design approach embedded into the development stage, human bias can be designed out and the power of data harnessed, leading to better outcomes for everyone.

The study is interesting because AI itself has often been accused of being discriminatory.

Tags

artificial intelligence, health tech, data protection and privacy, life sciences