Artificial intelligence (AI) has made significant strides in healthcare, offering the potential to reduce risks in clinical settings and enhance decision-making for physicians. However, as AI tools become increasingly integral to patient care, experts are raising concerns about the lack of adequate regulation and oversight, particularly of the algorithms that power these tools.
In a commentary published in the New England Journal of Medicine AI (NEJM AI), researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University argue that regulatory bodies must increase their oversight of AI in healthcare. The commentary follows a rule issued in May by the U.S. Office for Civil Rights (OCR) under the Affordable Care Act (ACA), which prohibits discrimination based on race, color, national origin, age, disability, or sex in “patient care decision support tools.”
These “patient care decision support tools,” a term introduced in the rule, include both AI-based and non-automated tools used in medical decision-making. The rule is a response to President Joe Biden’s 2023 Executive Order on AI, which emphasizes safe and trustworthy AI development in healthcare while addressing health equity.
The Need for Oversight
Marzyeh Ghassemi, senior author of the commentary and associate professor at MIT, hailed the OCR rule as a critical step toward improving health equity. However, she stressed that the rule should extend beyond AI tools to cover non-AI algorithms already in use across clinical settings. These tools, while not powered by AI, can still influence patient care decisions and perpetuate biases if not properly regulated.
In the past decade, the number of AI-enabled devices approved by the U.S. Food and Drug Administration (FDA) has surged, and nearly 1,000 are now on the market. Many are designed to assist with clinical decision-making, yet the researchers point out that the clinical risk scores produced by decision-support tools face no comparable regulatory oversight.
Clinical risk scores estimate a patient’s likelihood of a particular outcome and help guide the next steps in care; studies show that 65% of U.S. physicians use these tools monthly. Despite this prevalence, the scores are not subject to the same regulatory scrutiny as the AI algorithms they sometimes incorporate.
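To make the concept concrete, here is a minimal sketch of how such a score typically works: a weighted sum of expert-selected patient variables mapped to a probability. The variable names, weights, and threshold below are hypothetical, invented purely for illustration, and are not drawn from any real, validated scoring system.

```python
import math

# Hypothetical expert-selected variables and weights (illustration only;
# not from any real, validated clinical score).
WEIGHTS = {
    "age_years": 0.04,
    "systolic_bp_mmhg": 0.02,
    "is_smoker": 0.7,
}
INTERCEPT = -6.0  # hypothetical baseline log-odds

def risk_score(patient: dict) -> float:
    """Map a weighted sum of patient variables to an event probability."""
    log_odds = INTERCEPT + sum(
        WEIGHTS[name] * float(patient[name]) for name in WEIGHTS
    )
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic transform

patient = {"age_years": 62, "systolic_bp_mmhg": 148, "is_smoker": 1}
print(f"Estimated risk: {risk_score(patient):.1%}")
```

Simple as it is, a score like this drives real decisions: whoever chose the variables and weights has, in effect, decided how patients are triaged.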
The Case for Algorithmic Oversight
Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, emphasizes the need to hold clinical risk scores to the same standards as more complex AI models. “Even though clinical risk scores are simpler and less opaque than AI algorithms, they are still only as reliable as the data used to train them and the variables selected by experts,” Kohane said. “If they affect clinical decision-making, they should be subject to rigorous regulatory standards.”
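Kohane’s point can be illustrated with a small synthetic experiment (constructed for this article, not taken from the commentary): fitting the same simple logistic model on two cohorts whose underlying risk factor is measured with different amounts of noise. The learned weight shrinks in the noisier cohort, so a score trained on poorly measured data would systematically under-weight a genuine risk factor for that population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n: int, measurement_noise: float):
    """Synthetic cohort: one true risk factor, observed with noise."""
    true_factor = rng.normal(size=n)
    observed = true_factor + rng.normal(scale=measurement_noise, size=n)
    outcome = (true_factor + rng.normal(scale=1.0, size=n) > 1.0).astype(int)
    return observed.reshape(-1, 1), outcome

# The same model, trained on a well-measured vs. a poorly measured cohort,
# learns very different weights for the same underlying risk factor.
for noise in (0.1, 2.0):
    X, y = make_cohort(5000, noise)
    model = LogisticRegression().fit(X, y)
    print(f"measurement noise {noise}: learned weight = {model.coef_[0, 0]:.2f}")
```

If one patient population is measured less reliably than another, the resulting score quietly encodes that disparity, which is exactly the kind of failure the researchers argue oversight should catch.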
The researchers further argue that even decision-support tools that do not use AI can contribute to bias in healthcare, reinforcing the need for comprehensive oversight of all such tools. Maia Hightower, CEO of Equality AI, noted that regulating clinical risk scores is challenging because they are deeply integrated into electronic medical records and everyday clinical practice. She emphasized, however, that such regulation is necessary to ensure transparency and prevent discrimination.
Challenges Ahead
Despite the compelling need for regulation, Hightower pointed out that the incoming administration’s focus on deregulation and opposition to parts of the ACA could make these changes harder to enforce. Regulating clinical risk scores, she argued, could prove especially difficult under those political conditions.
To address these issues, the Jameel Clinic at MIT will host a regulatory conference in March 2025, continuing the discussions sparked by last year’s event. The conference aims to bring together policymakers, regulators, and experts to debate the regulation of AI in healthcare and to ensure that the tools driving clinical decisions receive the necessary oversight.
As AI becomes an increasingly powerful tool in healthcare, researchers emphasize the need for comprehensive, transparent regulation to ensure that these technologies are used ethically and equitably. The debate over the regulation of AI and clinical decision-support tools is just beginning, but it is clear that the stakes are high for both patients and healthcare providers.