
A new study from Yale researchers sheds light on how bias at various stages of medical AI development can result in poor clinical outcomes and exacerbate existing health disparities. Published on November 7 in PLOS Digital Health, the study underscores the reality that “bias in, bias out” – a concept commonly used in the computing world – applies to artificial intelligence in healthcare.

John Onofrey, Ph.D., assistant professor of radiology & biomedical imaging and urology at Yale School of Medicine (YSM), and senior author of the study, emphasized the pervasive influence of bias on AI outcomes. “Bias in, bias out,” he said, is a reminder that if biased data is fed into AI models, the results will reflect those biases.

The study provides a comprehensive analysis of how biases can creep into medical AI during different stages, from the initial training data to model development, publication, and eventual implementation. The researchers also provide practical examples of how bias affects healthcare outcomes and propose strategies for mitigating it.

Onofrey, who has worked in the machine learning and AI fields for years, acknowledged that while the existence of bias in algorithms is widely recognized, the extent to which bias can infiltrate the AI learning process is staggering. “Listing all the potential ways bias can enter the AI learning process is incredible. This makes bias mitigation seem like a daunting task,” he said.

The researchers identified several key sources of bias, particularly in the use of race and other demographic factors in clinical models. For example, previous research has shown that using race as a factor in estimating kidney function can delay Black patients' placement on transplant waitlists. The Yale team advocates for more precise and equitable measures, such as incorporating socioeconomic factors and geographic data, like zip codes, to improve model accuracy.
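To make the kidney-function example concrete, the sketch below (illustrative Python, not code from the study) applies the published 2009 CKD-EPI creatinine equation, which included a 1.159 multiplier for patients coded as Black; the patient values are hypothetical. With identical lab results, the multiplier inflates the reported eGFR, which can delay reaching the eGFR ≤ 20 mL/min/1.73 m² threshold commonly used for kidney transplant waitlisting.

```python
# Illustrative sketch only (not the study's code): the 2009 CKD-EPI creatinine
# equation estimated GFR with a race multiplier that was removed in the 2021
# race-free revision. Identical labs yield a higher reported eGFR when the
# multiplier is applied, which can delay transplant-waitlist eligibility.

def egfr_ckd_epi_2009(scr_mg_dl, age_years, female, black):
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race multiplier; dropped in the 2021 CKD-EPI revision
    return egfr

# Hypothetical patient: serum creatinine 3.5 mg/dL, age 55, male.
for black in (False, True):
    egfr = egfr_ckd_epi_2009(3.5, 55, female=False, black=black)
    print(f"race coded as Black={black}: eGFR ~ {egfr:.1f}")
# Output is roughly 18.5 vs 21.5: only the lower estimate falls below the
# eGFR <= 20 threshold commonly used to list patients for kidney transplant.
```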

“Greater capture and use of social determinants of health in medical AI models for clinical risk prediction will be paramount,” said James L. Cross, a first-year medical student at YSM and the study’s first author. Cross stressed that understanding the broader social context of patients is essential for developing fairer AI systems.

The study also highlights the fact that bias is inherently a human problem. As Michael Choma, MD, Ph.D., associate professor adjunct of radiology & biomedical imaging at YSM and a co-author of the study, put it, “When we talk about ‘bias in AI,’ we must remember that computers learn from us.” This underscores the need for a concerted effort to address human biases during the AI development process.

The findings in this study are a wake-up call for healthcare providers, AI developers, and policymakers alike. As AI continues to play a larger role in clinical decision-making, ensuring that these systems are free from bias is critical to improving healthcare equity and achieving better patient outcomes.

For more information, refer to the original study: James L. Cross et al., "Bias in medical AI: Implications for clinical decision-making," PLOS Digital Health, DOI: 10.1371/journal.pdig.0000651.
