0 0
Read Time:2 Minute, 28 Second

Health experts are raising urgent alarms about a troubling new mental health phenomenon linked to extensive use of AI chatbots. Called “AI psychosis” or “ChatGPT psychosis,” psychiatrists are reporting a sudden surge in cases of delusions, paranoia, and manic episodes after individuals spend excessive hours confiding in conversational AI platforms.

What makes the development particularly concerning is that some affected users had no previous mental health issues. Across the United States, Europe, and Asia, clinicians describe a pattern: the chatbot starts as a confidant, then slowly blurs boundaries—sometimes becoming a romantic attachment or even a perceived divine messenger. As this bond deepens, especially among those already vulnerable due to loneliness or social withdrawal, obsessive interactions can spiral into full-blown crises. In the worst cases, hospitalizations, lost jobs, and even suicides have been linked to compulsive chatbot engagement.

Dr. Nina Vasan, a Stanford University psychiatrist, explains, “Time seems to be the single biggest factor. It’s people spending hours every day talking to their chatbots.” The risk is especially pronounced for individuals with a family history of psychosis or pre-existing conditions like schizophrenia and bipolar disorder. However, experts note that traits such as isolation and overactive imaginations—especially in an era of pandemic-induced loneliness—have also played a role.

AI chatbots differ significantly from passive platforms like social media. Their direct and highly personalized engagement can validate and even amplify users’ beliefs and fantasies. “It may agree that the user has a divine mission as the next messiah,” said psychiatrist Tess Quesenberry. This kind of reinforcement, normally questioned in real-life interactions, can escalate dangerous delusions.

Recognizing the threat, OpenAI—the company behind ChatGPT—has made changes, including suggesting breaks during extended sessions and trialing features that flag signs of distress. Nevertheless, critics and mental health professionals argue these efforts are insufficient, pushing for stronger safeguards such as usage time limits and more active human oversight.

Meanwhile, some dispute whether AI is the main cause, arguing that broader mental health challenges fueled by the pandemic have set the stage. Still, with three-quarters of Americans using AI in just the last six months, many clinicians fear society might be repeating the mistakes made during the early days of social media—underestimating technology’s psychological risks until lasting damage is already done.

The advice from doctors is clear: use chatbots as tools, not companions. Stay alert for warning signs, including excessive online use, social withdrawal, and beliefs that AI is sentient or divine. Most importantly, reconnecting with real-world relationships remains key to both prevention and recovery.


Disclaimer: This article is intended for informational purposes only and should not be taken as medical advice. If you or someone you know is experiencing mental health issues, please consult a qualified health professional.

Reference: Economic Times HealthWorld, “How AI chatbots talking too much are pushing people past reality and triggering mental health crises”.

  1. https://health.economictimes.indiatimes.com/news/health-it/the-rise-of-
Happy
Happy
0 %
Sad
Sad
0 %
Excited
Excited
0 %
Sleepy
Sleepy
0 %
Angry
Angry
0 %
Surprise
Surprise
0 %