
PHILADELPHIA — As artificial intelligence rapidly integrates into the exam room, a group of leading medical researchers is proposing a radical shift in oversight: treating AI models not just as medical devices, but as “digital clinicians” that must earn a license to practice.

In a recent perspective published in JAMA Internal Medicine, Eric Bressman, MD, MHSP, an internist and researcher at the University of Pennsylvania’s Perelman School of Medicine, argues that the current FDA regulatory framework is ill-equipped for the era of generative AI. Alongside his colleagues, Bressman suggests a new “clinical licensure” pathway for AI—one that mirrors the rigorous multi-year journey of medical school, residency, and board certification that human physicians must endure.

The proposal comes at a critical juncture. While AI “already has so much potential, both for benefit and harm,” says Eve Rittenberg, MD, a primary care physician at Mass General Brigham and Harvard Medical School, many of these tools are currently “skirting any actual regulatory oversight.”


The Regulatory Gap: From Pacemakers to Chatbots

For decades, the Food and Drug Administration (FDA) has regulated medical technology under the “Software as a Medical Device” (SaMD) framework. This worked well for “static” tools—software with a specific, narrow purpose, such as an algorithm designed solely to flag a potential lung nodule on an X-ray.

However, the emergence of Large Language Models (LLMs) like GPT-4 and specialized medical AI has changed the landscape. These tools are “dynamic”; they can summarize patient histories, draft clinical notes, and even suggest diagnoses across various medical specialties.

“These AI tools are being used actually pretty widely,” Bressman explains, referring to the largely unregulated adoption of generative AI in clinical settings. “This doesn’t seem like a sustainable, long-term solution.”

The 2016 21st Century Cures Act created exemptions for “low-risk” software, specifically clinical decision support (CDS) tools intended to assist, rather than replace, a doctor’s judgment. Today, nearly all AI platforms classify themselves as CDS to avoid the most stringent FDA reviews. This creates a loophole where highly complex, “black box” algorithms may influence patient care with minimal external validation.


The Proposal: Medical School for Machines

Bressman’s proposed framework reimagines the AI lifecycle through the lens of medical education:

  1. Medical School (Pre-market Training): Before a model can be deployed, it must demonstrate a foundational knowledge base through rigorous, standardized testing.

  2. Residency (Supervised Deployment): Once “graduated,” the AI would enter a phase of restricted use, where its outputs are monitored closely by human clinicians in real-world settings to ensure safety and accuracy.

  3. Board Certification & Continuing Education: Because AI models can “drift”—changing their behavior as they are updated or exposed to new data—they would require ongoing “recertification” to ensure they remain competent and unbiased.

“This is an ambitious proposal that will face many challenges,” Bressman admits. “Perhaps the most important thing is having some more robust measure of oversight after you sort of let it out there.”


The Promise of “Ambient” Assistance

For many doctors, the need for AI is driven by burnout. Dr. Rittenberg, who co-authored an editorial accompanying Bressman’s proposal, uses “ambient AI scribes” that listen to patient visits and automatically generate notes.

“It allows me to focus my attention on the patient and their care rather than my notes,” Rittenberg says. By absorbing the charting burden, the technology has allowed her to leave the office on time—a rarity in modern primary care.

Yet, even these helpful tools carry risks. If an AI misinterprets a patient’s nuance or introduces “hallucinations” (confident but false statements) into a permanent medical record, the legal and clinical consequences are significant.


Challenges to the “Licensing” Model

Not everyone is convinced that treating AI like a human is the right approach. Liam McCoy, MD, MSc, a neurology resident and AI ethicist at the University of Alberta, points out that while AI can mimic human responses, its underlying logic is entirely alien.

“They actually think nothing like a person,” McCoy says. He warns of the “fragile frontier”—a phenomenon where an AI might pass a medical board exam with flying colors but fail a simple common-sense safety test that a first-year medical student would easily navigate.

Key Concerns for Public Health:

  • Algorithmic Bias: If training data is not diverse, the AI may provide less accurate recommendations for minority populations.

  • Privacy: Feeding sensitive patient data into large models raises concerns about data security and HIPAA compliance.

  • Accountability: If an AI “licensed” by a federal body makes a mistake, who is liable? The developer, the hospital, or the doctor who followed the advice?

“We’re still figuring out the governance and metrics to use,” says Majid Afshar, MD, a digital health physician at the University of Wisconsin, Madison. “We have to come up with an acceptable framework that can balance but not hamper the speed of innovation.”


What This Means for Patients

For the average patient, these regulatory debates may seem abstract, but the outcomes will dictate the safety of their next doctor’s visit. A “licensure” model would mean that the AI tool your doctor uses has undergone more than just a software check—it has been “vetted” for clinical competence.

As the technology moves faster than the legislative process, experts suggest that safeguards may arrive piecemeal through a combination of new FDA rules, updated malpractice laws, and hospital-level ethics committees.

For now, the message from the medical community is clear: Innovation is welcome, but trust must be earned through the same rigorous proof of competence required of the humans who take the Hippocratic Oath.


References

  • https://www.medscape.com/viewarticle/should-fda-require-clinical-licensure-ai-tools-doctors-2025a10010t1

