Artificial intelligence (AI) is revolutionizing healthcare by enhancing diagnostic accuracy and improving patient outcomes. However, research emerging in 2025 highlights a worrying trend: routine use of AI may erode essential cognitive and diagnostic skills among healthcare professionals, including experienced physicians. This double-edged phenomenon raises critical questions about the long-term implications of AI in medicine, underscoring the need for a balanced approach to technology integration.
Key Findings from Recent Research
A landmark study published in The Lancet Gastroenterology & Hepatology in August 2025 examined the effects of AI assistance on doctors performing diagnostic colonoscopies. The researchers analyzed the performance of 19 highly experienced doctors, each of whom had performed more than 2,000 colonoscopies, comparing detection rates of pre-cancerous growths before and after AI implementation.
While AI significantly improved adenoma detection rates during assisted procedures, the study revealed a concerning decline of roughly 20% in the doctors' ability to detect these pre-cancerous growths on their own once AI support was removed. This suggests that over-reliance on AI diminishes clinicians' observational skills and decision-making autonomy, posing potential risks if the technology becomes unavailable due to system failures or cyber-attacks.
Supporting these findings, a 2025 study from Poland found similar skill degradation among doctors performing colonoscopies with AI assistance. The research highlighted that clinicians might unconsciously rely on AI cues, leading to lower vigilance and reduced independent diagnostic skills.
Beyond medicine, a 2025 MIT study demonstrated that AI tools like ChatGPT reduced cognitive engagement among students, illustrating a broader concern that excessive AI dependence may dampen critical thinking and active brain involvement.
Expert Perspectives
Dr. Catherine Menon, Principal Lecturer in the University of Hertfordshire’s Department of Computer Science, cautioned that while AI offers considerable diagnostic benefits, it also creates a “risk that healthcare professionals who become accustomed to AI support may perform more poorly without it.” She emphasized the importance of retaining fundamental skills to safeguard patient outcomes, particularly in scenarios where AI may face interruptions.
Gastroenterologist Dr. Omer Ahmad of University College Hospital London noted, “Although AI continues to enhance clinical outcomes, we must safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy.” He expressed concern that this erosion might be even more pronounced among trainees who may develop overly dependent behaviors before mastering essential diagnostic abilities.
AI expert Mustafa Suleyman stressed the ethical challenge, stating, “The promise of technology is that it improves lives, but unchecked reliance risks inverting that promise by degrading critical human expertise.”
Context and Background
AI’s rise in healthcare reflects its ability to analyze vast datasets, spot subtle patterns, and offer real-time decision support. Clinical decision support systems powered by AI can enhance diagnosis, optimize treatment plans, and predict patient outcomes with impressive accuracy. In 2025, health systems globally are investing heavily in AI, with countries such as the UK committing millions to AI-assisted cancer detection trials.
However, the rapid adoption outpaces understanding of how AI shapes human cognitive processes. Automation bias, where clinicians trust AI outputs over their judgment, may gradually erode clinical acumen and vigilance, a phenomenon documented in multiple diagnostic fields including neurology and gastroenterology.
Implications for Public Health
The erosion of doctors’ observational and cognitive skills in the era of AI poses challenges to patient safety and clinical quality, especially in emergencies when AI might be inaccessible. It also risks creating a workforce less prepared to manage complex cases independently.
To strike an effective balance, healthcare institutions must adopt strategies that ensure AI serves as an augmentation tool rather than a crutch. These include:
- Maintaining rigorous training programs that emphasize manual diagnostic skills alongside AI usage
- Rotating clinicians between AI-assisted and non-AI clinical environments to preserve proficiency
- Continuously monitoring diagnostic performance with and without AI to identify gaps
- Cultivating awareness of automation bias and promoting critical engagement with AI recommendations
Potential Limitations and Counterarguments
Some experts argue that concerns about skill erosion are outweighed by AI’s benefits in improving diagnostic accuracy and efficiency. AI has been shown to reduce diagnostic errors and increase detection rates of conditions like early-stage cancers. Moreover, for novice clinicians, AI can serve as an invaluable teaching aid and safety net.
However, the key lies in judicious, balanced integration. Over-dependence risks amplifying the technology’s limitations and vulnerabilities, making human oversight indispensable.
Practical Takeaways for Readers
For health-conscious consumers, AI represents a hopeful advance toward earlier and more accurate diagnosis. However, patients should also recognize the continuing importance of human expertise in healthcare: decisions made solely by AI, without skilled human interpretation, can carry risks.
Maintaining confidence in healthcare providers’ clinical skills and critical judgment is vital, particularly when consulting in settings where AI is newly implemented or limited. Awareness of this balance may empower patients to engage thoughtfully with emerging AI-based healthcare tools.
Medical Disclaimer
This article is for informational purposes only and should not be considered medical advice. Always consult with qualified healthcare professionals before making any health-related decisions or changes to your treatment plan. The information presented here is based on current research and expert opinions, which may evolve as new evidence emerges.