Shanghai, June 5, 2025 — In a landmark study, researchers have found that ChatGPT, the popular artificial intelligence chatbot, can provide medical information about congenital cataracts with a quality comparable to that of experienced doctors, and with even greater readability. The findings, published in Frontiers in Artificial Intelligence, highlight AI's growing potential in patient education, especially for rare diseases.
AI vs. Doctors: Who Explains Best?
The research, conducted by the Eye, Ear, Nose, and Throat Hospital of Fudan University, compared health information about congenital cataracts from three sources: Google, ChatGPT, and two qualified doctors—a senior attending surgeon and an ophthalmology resident. The team built two question banks based on both popular Google searches and typical questions asked of pediatric ophthalmologists.
An expert panel rated each answer for correctness, completeness, readability, helpfulness, and safety. While the experienced surgeon scored highest for accuracy and safety, ChatGPT's responses, especially when adjusted to a sixth-grade reading level, were just as correct as, and even easier to understand than, those from either doctor.
Readability and Accessibility: AI Takes the Lead
Initially, ChatGPT's answers were more complex than is ideal for non-expert audiences. However, after researchers prompted the AI to use simpler language, its responses became the most readable of all the sources tested. The AI's answers were also more holistic and helpful, especially for questions sourced from Google, suggesting that its conversational style and ability to anticipate user needs may give it an advantage over traditional search engines.
Google's answers, by comparison, were often incomplete or outdated. Only one of the 30 Google-sourced webpages met all four trustworthiness criteria set by the Journal of the American Medical Association (JAMA).
Strengths and Weaknesses
While ChatGPT excelled in readability and helpfulness, the study noted that the most experienced doctor still provided the safest and most accurate responses overall. Doctors were also better at addressing practical concerns about postoperative care and long-term rehabilitation—areas where online information is often lacking.
Cautions and Future Directions
Despite these promising results, the study’s authors urge caution. The most significant risk is AI “hallucinations”—plausible-sounding but incorrect or fabricated medical content. Relying on AI without clinical validation could endanger patient outcomes. Privacy concerns also loom large as AI systems become capable of interpreting images and documents, raising the risk of unauthorized use or data breaches.
The research team also noted that their study included only two doctors, which may not capture the full range of clinical communication styles. They plan to expand future studies to include more clinicians and advanced AI models.
The Bottom Line
The evidence suggests that AI tools like ChatGPT could become valuable supplements to traditional health education, particularly where resources are scarce or health literacy is low. However, the technology is not a replacement for professional medical advice.
Disclaimer:
This article is for informational purposes only and is based on a recent study published in Frontiers in Artificial Intelligence and reported by Devdiscourse. AI-generated health information should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider with any questions regarding a medical condition.