We Recall Drugs with Adverse Effects, So Why Not Recall AI Algorithms with Racial Bias?
Ed Ikeguchi, MD, Chief Executive Officer, AiCure

Over the last decade, AI has transformed from a futuristic, hype-driven tool into a daily part of our lives. We use AI for everything from unlocking our smartphones to evaluating our credit scores to helping clinicians make patient care decisions. In particular, AI's potential to transform drug development and patient care is unparalleled: its ability to gather objective insights can elevate a clinical trial's data and bring potentially life-saving drugs to people in need faster. But the same data-mining capabilities that make AI so exciting are the ones that worry industry leaders. To ensure AI doesn't unintentionally perpetuate human biases and put minority populations at a disadvantage, the industry must work together to help AI reach its greatest potential. AI is only as strong as the data it's fed, so the quality of the data we start with must be at the core of everything; the credibility of AI's data backbones must be failsafe.

When this technology is sufficiently governed, it holds significant potential for automating processes across industries and taking innovation to new heights. COVID-19 brought to light longstanding healthcare disparities, and now more than ever, the life science industry is challenged to re-evaluate the foundation of the AI that our drug development and patient care decisions increasingly rely on. There is now a responsibility, both ethically and for the sake of good science, to thoroughly test algorithms. Companies are responsible for ensuring that their algorithms perform as expected outside of a controlled research environment. They can do this by first establishing processes that determine whether data sets are representative of the broader population, and second, normalizing going back to the drawing board when algorithms don't work as planned and rebuilding them from the ground up.
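To make the first of those processes concrete, a simple starting point is to compare the demographic makeup of a training set against reference figures for the population the algorithm is meant to serve. The sketch below is a minimal illustration only; the category labels, reference shares, and the tolerance value are assumptions for the example, not a description of any particular company's workflow or a regulatory standard.

```python
# Minimal sketch: check whether a training set's demographic mix roughly
# matches the population the algorithm is meant to serve.
# Category labels, reference shares, and the tolerance are illustrative assumptions.
from collections import Counter

def representativeness_gaps(training_labels, reference_shares, tolerance=0.05):
    """Return categories whose share of the training set differs from the
    reference population by more than `tolerance` (expressed as a fraction)."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    gaps = {}
    for category, expected in reference_shares.items():
        observed = counts.get(category, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[category] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: a skewed training set versus assumed population shares
training_labels = ["group_A"] * 80 + ["group_B"] * 15 + ["group_C"] * 5
reference_shares = {"group_A": 0.60, "group_B": 0.25, "group_C": 0.15}
print(representativeness_gaps(training_labels, reference_shares))
# -> group_A over-represented; group_B and group_C under-represented
```

A check like this only flags imbalance; deciding how to re-collect or re-weight the data still requires the kind of domain judgment described above.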

Applying “checks and balances” to spot bias

Often in today’s environment, once an AI solution receives a relatively arbitrary stamp of approval, there are limited protocols in place to assess how it performs in the real world. We need to be wary of this, as AI developers still consistently lack access to large, diverse data sets and often train algorithms on small, single-origin samples with limited diversity. Usually this is because many of the open-source data sets developers rely on were built from computer programmer volunteers, a predominantly white population. When these algorithms are applied in real-world scenarios to a broader population of different races, genders, ages and more, technology that appears highly accurate in research falls short of delivering on its promise and can lead to faulty conclusions about a person’s health.

Just as new drugs go through years of clinical trial testing with thousands of patients to identify adverse events, a vetting process for AI can help companies understand whether their technology will fall short in real-world scenarios. Moving from a controlled research environment to real-world populations usually produces unforeseen results. For example, even after a new drug is approved, once it is given to hundreds of patients outside of a clinical trial, new side effects or discoveries that never arose during the trial are often uncovered. Just as there is a process to reassess that drug, there should be a similar checks-and-balances protocol for AI that detects inaccuracies in real-world scenarios, revealing when an algorithm fails for certain skin tones or reflects other biases. An element of governance and peer review for all algorithms should be mandated, as even the most solid and well-tested algorithm is bound to produce unexpected results. An algorithm is never done learning; it must be continually developed and fed more data over time to improve.
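As one concrete illustration of what such a checks-and-balances protocol could look like once an algorithm is deployed, the minimal sketch below compares accuracy across demographic subgroups and flags any group whose performance trails the overall rate. The function names, subgroup labels, and the disparity threshold are assumptions made for the example; they are not drawn from any specific vendor's monitoring process.

```python
# Minimal sketch of a post-deployment audit: compare model accuracy across
# demographic subgroups and flag large gaps. The threshold and subgroup
# labels are illustrative assumptions, not a regulatory standard.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (subgroup, prediction, ground_truth) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, prediction, truth in records:
        totals[group] += 1
        hits[group] += int(prediction == truth)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(records, max_gap=0.10):
    """Return subgroups whose accuracy trails the overall rate by more than max_gap."""
    per_group = subgroup_accuracy(records)
    overall = sum(int(p == t) for _, p, t in records) / len(records)
    return {g: acc for g, acc in per_group.items() if overall - acc > max_gap}

# Example: monitoring data grouped by a self-reported demographic category
records = [
    ("group_A", 1, 1), ("group_A", 0, 0), ("group_A", 1, 1),
    ("group_B", 0, 1), ("group_B", 1, 0), ("group_B", 1, 1),
]
print(flag_disparities(records))  # e.g. {'group_B': 0.33...} -> back to the drawing board
```

A flagged subgroup would then trigger exactly the response described above: retraining on more diverse data rather than quietly accepting the gap.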

Identify & refine

When companies notice that their algorithms aren’t working properly across the entire population, they should be incentivized to rebuild them and incorporate more diverse patients into their testing. Whether that means including patients with different skin tones or people wearing hats, sunglasses, or patterned clothes, training the AI to recognize the individual person regardless of appearance, dress, or environment will produce stronger algorithms and, in turn, improved patient outcomes.

As an industry, we need to become more skeptical of AI’s conclusions and encourage transparency. Companies should be able to readily answer basic questions: How was the algorithm trained? On what basis did it draw this conclusion? Only once we interrogate and continually evaluate an algorithm under both common and rare scenarios with varied populations will it be ready for introduction into real-world situations.

Recognizing there is work to be done

The first step toward fixing the problem is recognizing there is one. Many still haven’t grasped that different complexions and appearances need to be factored into algorithms for the technology to work effectively. As the AI industry continues to grow and these tools become a pivotal part of how we research drugs and deliver new treatments, the future of healthcare and patient care holds great promise. We must prioritize equality in the technology our patients and pharmaceutical companies use to help it reach its potential and make healthcare a more inclusive industry.


About Ed Ikeguchi, M.D., CEO, AiCure

Edward F. Ikeguchi, M.D. is the Chief Executive Officer of AiCure. Prior to joining AiCure, he was a co-founder and Chief Medical Officer at Medidata for nearly a decade, where he also served on the board of directors. Dr. Ikeguchi served as an assistant professor of clinical urology at Columbia University and has experience using healthcare technology solutions as a clinical investigator in numerous trials sponsored by both commercial industry and the National Institutes of Health.