New York, NY – The increasing use of artificial intelligence (AI) in medicine is raising concerns about over-reliance and the potential for errors, according to recent research and anecdotal evidence. While AI tools offer valuable assistance, experts warn that doctors may be developing an “automation bias,” unconsciously deferring to AI predictions even when they are incorrect.

Dr. Stephen Belmustakov recently experienced this firsthand. After working in a New York City hospital using Aidoc, an AI tool that predicts abnormalities on radiology scans, he moved to a private practice without such technology. He found himself feeling “on edge” without the AI’s “second opinion,” realizing he had become accustomed to its presence.

This reliance can affect learning, as AI tools often identify potential issues before physicians have a chance to analyze the images themselves. “If a tool already told you that it’s positive, it’s going to kind of change the way you look at things,” Dr. Belmustakov explained.

This phenomenon, known as automation bias, has been observed in other fields, such as drivers blindly following GPS directions into dangerous situations. In medicine, this could lead to serious consequences.

Dr. Tarun Kapoor, Chief Digital Transformation Officer at Virtua Health, acknowledges that while AI adoption in hospitals is still relatively low, “automation bias will become widespread with the tools that are continuing to develop at light speed.” He emphasizes the need for immediate discussion on this issue.

Experts suggest that the way AI tools present information to clinicians plays a crucial role in fostering trust. Research indicates that different “explainability methods,” such as simple boxes highlighting potential problems versus more detailed explanations with comparisons to similar scans, can significantly impact clinician responses. Dr. Paul Yi, Director of Intelligent Imaging Informatics at St. Jude Children’s Research Hospital, notes a “pretty big gap” in understanding how clinicians react to these different methods, despite the availability of hundreds of FDA-cleared AI products for radiology.

A study conducted by Dr. Yi and computer scientists at Johns Hopkins University revealed that when AI predictions were incorrect, non-radiologists were more likely to still rate the tool as useful, while radiologists were more critical. The study also found that simpler explanations led to faster agreement with the AI, which could be a “double-edged sword” for overworked radiologists, potentially leading to errors if the AI is wrong.

The human tendency to trust machines is a complex issue. Research suggests that trust is contextual, influenced by factors such as the user’s experience and the setting. In healthcare, where clinicians often face high-stakes decisions under pressure, the appeal of AI for decision reinforcement is strong.

Virtua Health is addressing the issue directly: it is considering slowing down its AI tool, GI Genius, which identifies polyps during colonoscopies, so that endoscopists stay engaged rather than deferring to the software.

Furthermore, there are concerns about “reverse automation bias,” where clinicians might become overly cautious and second-guess the AI even when it is correct, leading to increased burnout.

Dr. Belmustakov’s experience also underscores AI’s fallibility: Aidoc sometimes missed findings or flagged false positives, wasting time and requiring further verification. Legal cases have even emerged involving physicians who relied on incorrect AI predictions.

Looking ahead, researchers are exploring ways to mitigate automation bias, including tweaking how AI models explain their results and implementing training requirements for using AI tools. As AI continues to evolve and outperform humans in certain tasks, the focus must shift to optimizing human-machine interaction to ensure patient safety and effective healthcare delivery.
