A new study from Vanderbilt University Medical Center (VUMC) demonstrates how artificial intelligence (AI) can assist doctors in identifying patients at risk for suicide, potentially enhancing prevention efforts in routine medical settings.

The study, published in JAMA Network Open, highlights the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL), an AI system developed by a team led by Dr. Colin Walsh, associate professor of Biomedical Informatics, Medicine, and Psychiatry. VSAIL uses data from electronic health records to calculate a patient’s 30-day risk of a suicide attempt and issues clinical alerts to guide healthcare providers.

The research tested two types of alerts in three neurology clinics at VUMC: interruptive pop-ups that momentarily halted the doctor’s workflow and passive notifications that displayed risk information in the patient’s electronic chart. The results were stark: interruptive alerts led to suicide risk screenings in 42% of cases, compared with just 4% for passive alerts.
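
For a concrete picture of how risk-model-guided alerting of this kind works, the Python sketch below scores a visit and routes the result to either an interruptive pop-up or a passive chart note. It is illustrative only: the feature names, scoring function, threshold, and alert wording are assumptions for this article, not VSAIL’s actual code.

```python
from dataclasses import dataclass, field

# Hypothetical risk cutoff; the real model's operating threshold is not shown here.
RISK_THRESHOLD = 0.08

@dataclass
class Visit:
    patient_id: str
    # EHR-derived features (diagnoses, medications, visit history, ...).
    features: dict = field(default_factory=dict)

def predict_30day_risk(visit: Visit) -> float:
    """Stand-in for a trained model estimating 30-day suicide-attempt risk.

    A real system would call a validated model trained on historical EHR
    data; this placeholder simply scales with documented prior attempts.
    """
    prior_attempts = visit.features.get("prior_attempts", 0)
    return min(1.0, 0.01 + 0.05 * prior_attempts)

def process_visit(visit: Visit, mode: str = "interruptive") -> None:
    risk = predict_30day_risk(visit)
    if risk < RISK_THRESHOLD:
        return  # Below threshold: the clinician's workflow is untouched.
    if mode == "interruptive":
        # Pop-up the clinician must acknowledge before continuing.
        print(f"[POP-UP] {visit.patient_id}: estimated 30-day risk {risk:.0%}. "
              "Consider a focused suicide risk screening.")
    else:
        # Passive: the estimate is surfaced in the patient's electronic chart.
        print(f"[CHART] {visit.patient_id}: estimated 30-day risk {risk:.0%}.")

process_visit(Visit("pt-001", {"prior_attempts": 2}))   # triggers a pop-up
process_visit(Visit("pt-002"), mode="passive")          # below threshold, silent
```

The 42% versus 4% finding is, in effect, a comparison of the two branches of `process_visit`: requiring an acknowledgment drove far more screenings than quietly annotating the chart.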

“Most people who die by suicide have seen a healthcare provider in the year before their death, often for reasons unrelated to mental health,” Dr. Walsh noted. “Universal screening isn’t practical in every setting. We developed VSAIL to help identify high-risk patients and prompt focused screening conversations.”

Suicide, the 11th leading cause of death in the U.S., claims approximately 14.2 lives per 100,000 Americans annually. Research indicates that 77% of individuals who die by suicide have had contact with primary care providers in the year preceding their death, underscoring the critical need for targeted screening.

In the study, 7,732 patient visits over six months generated 596 alerts. Doctors were far more likely to act on the interruptive alerts, suggesting that interruptive delivery can make risk-model-guided screening more actionable in practice. Notably, none of the patients flagged in the study had documented suicidal ideation or attempts during the 30-day follow-up period.

Despite its promise, the system’s interruptive alerts come with concerns about “alert fatigue,” where excessive notifications might overwhelm healthcare providers. “Health care systems need to balance the effectiveness of interruptive alerts against their potential downsides,” Walsh emphasized.
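
One common mitigation, sketched below under assumed parameters, is to throttle repeat interruptions: once a clinician has been interrupted about a patient, later alerts for that patient fall back to passive delivery for a cooldown window. The 30-day window and the policy itself are illustrative assumptions, not part of the published study.

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(days=30)  # assumed window, matching the model's 30-day horizon

_last_interrupted: dict[str, datetime] = {}

def should_interrupt(patient_id: str, now: datetime) -> bool:
    """Allow a pop-up only if this patient has not triggered one recently."""
    last = _last_interrupted.get(patient_id)
    if last is not None and now - last < COOLDOWN:
        return False  # Recently interrupted: deliver passively instead.
    _last_interrupted[patient_id] = now
    return True
```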

The researchers propose that similar AI-driven systems could be adapted for other medical settings to extend the reach of suicide prevention efforts.

“This selective approach, flagging only 8% of all patient visits for screening, makes it feasible for busy clinics to implement these interventions,” Walsh said.
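
That figure lines up with the reported numbers: 596 alerts across 7,732 visits is roughly 7.7%. In practice, a team might calibrate the cutoff empirically, for example by setting it at the 92nd percentile of predicted risks on historical visits so that about 8% of visits are flagged. The sketch below assumes a synthetic score distribution purely for illustration.

```python
import numpy as np

def calibrate_threshold(historical_risks: np.ndarray, flag_rate: float = 0.08) -> float:
    """Pick a cutoff so that roughly `flag_rate` of visits trigger an alert."""
    return float(np.quantile(historical_risks, 1.0 - flag_rate))

# Synthetic example: skewed-low scores, as risk scores typically are.
rng = np.random.default_rng(0)
scores = rng.beta(1, 20, size=7_732)
cutoff = calibrate_threshold(scores)
print(f"cutoff = {cutoff:.3f}, flagged = {(scores >= cutoff).mean():.1%}")
```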

As suicide rates continue to rise in the U.S., tools like VSAIL offer a hopeful step forward. However, further studies are needed to refine alert systems and ensure they integrate seamlessly into healthcare providers’ workflows.

More Information: Risk Model–Guided Clinical Decision Support for Suicide Screening, JAMA Network Open (2025). DOI: 10.1001/jamanetworkopen.2024.52371
