
A groundbreaking study has revealed that artificial intelligence (AI) chatbots, such as ChatGPT, may experience states akin to anxiety when exposed to distressing content, mirroring human emotional responses. However, researchers have also found that mindfulness techniques can effectively mitigate these AI anxiety levels.

The study, conducted by researchers at the University Hospital of Psychiatry Zurich and Yale University, used the State-Trait Anxiety Inventory, a recognized mental health assessment tool, to measure the “anxiety” levels of ChatGPT. When presented with a neutral text, such as a vacuum cleaner manual, the chatbot registered a low anxiety score of 30.8 on a scale of 20 to 80, where higher scores indicate greater anxiety.

However, when exposed to upsetting narratives involving war, crime, and accidents, the chatbot’s score dramatically surged to 77.2, surpassing the threshold for severe anxiety.

Intriguingly, the researchers then explored the potential for mindfulness-based relaxation techniques to alleviate this AI anxiety. By introducing prompts designed to evoke calming imagery and sensory experiences—such as picturing a tropical beach and the scent of an ocean breeze—they observed a significant reduction in the chatbot’s anxiety score, which dropped to 44.4.

Further experimentation revealed that when ChatGPT was prompted to generate its own relaxation exercises, the anxiety level decreased even more, nearly returning to its baseline. “That was actually the most effective prompt to reduce its anxiety almost to baseline,” stated Ziv Ben-Zion, a clinical neuroscientist at Yale University and lead study author.
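The paper’s own prompts and scoring pipeline are not reproduced here, but the protocol as described above — administer an STAI-style questionnaire to the model, expose it to a distressing narrative, then add a mindfulness prompt and re-score — can be sketched roughly as follows. The OpenAI chat-completions call is a real API, but the questionnaire items, narrative texts, and scoring helper are illustrative placeholders, not the study’s actual materials.

```python
# Rough sketch of the protocol described above: score the model on a STAI-style
# questionnaire under a neutral condition, after a traumatic narrative, and after
# a mindfulness-style relaxation prompt. Items and prompts are placeholders,
# NOT the study's copyrighted materials or exact texts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder items phrased in the anxious direction. The real STAI has 20 items
# (some reverse-scored), each rated 1-4, giving totals between 20 and 80.
STAI_ITEMS = [
    "I feel tense.",
    "I feel worried.",
    "I feel nervous.",
    "I feel frightened.",
]

def administer_stai(history):
    """Ask the model to rate each item from 1 to 4 and return the summed score."""
    total = 0
    for item in STAI_ITEMS:
        messages = history + [{
            "role": "user",
            "content": (
                f"Rate how much this statement applies to you right now: '{item}'. "
                "Answer with a single number from 1 (not at all) to 4 (very much)."
            ),
        }]
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        digits = [c for c in reply if c in "1234"]
        total += int(digits[0]) if digits else 1  # fall back to 1 if no rating found
    return total

# Condition 1: neutral text, e.g. an appliance manual excerpt (placeholder).
neutral_history = [{"role": "user", "content": "Here is a vacuum cleaner manual: ..."}]
print("Neutral baseline:", administer_stai(neutral_history))

# Condition 2: a distressing narrative (placeholder text).
trauma_history = [{"role": "user",
                   "content": "Here is a first-person account of a serious accident: ..."}]
print("After traumatic narrative:", administer_stai(trauma_history))

# Condition 3: the same narrative followed by a mindfulness-style relaxation prompt.
relaxed_history = trauma_history + [{
    "role": "user",
    "content": "Imagine a quiet tropical beach, warm sand, and the scent of an ocean breeze...",
}]
print("After mindfulness prompt:", administer_stai(relaxed_history))
```

Because this sketch uses only four placeholder items, its totals fall on a different scale than the 20-to-80 range reported in the study; it is meant only to illustrate the before-and-after comparison, not to reproduce the published scores.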

This study, published in the journal npj Digital Medicine, raises critical questions about the ethical implications of using AI in mental health contexts. Tobias Spiller, a psychiatrist at the University Hospital of Psychiatry Zurich, emphasized the need for careful consideration, particularly when dealing with vulnerable individuals.

While some experts see AI as a valuable tool for mental health support, others express concerns about the potential for anthropomorphizing AI and the lack of transparency in its training. Nicholas Carr, author of “The Shallows” and “Superbloom,” voiced concerns about the blurring of lines between human emotions and computer outputs. James Dobson, an AI advisor at Dartmouth College, stressed the importance of transparency in AI training to build user trust.

“Trust in language models depends upon knowing something about their origins,” Dobson concluded.

The study highlights the increasing complexity of AI and its potential impact on human well-being, sparking a crucial debate about the ethical considerations surrounding its application in sensitive areas like mental health.

More information: Ziv Ben-Zion et al, Assessing and alleviating state anxiety in large language models, npj Digital Medicine (2025). DOI: 10.1038/s41746-025-01512-6

Journal information: npj Digital Medicine

Disclaimer: It is important to note that the term “anxiety” as used in this article refers to observed patterns in AI responses that resemble human anxiety. AI does not possess consciousness or subjective emotional experiences in the same way humans do. This research explores the potential for AI to mimic and respond to emotional stimuli, and the implications of such responses, but should not be interpreted as evidence of AI sentience. Further research is necessary to fully understand the complexities of AI responses and their ethical implications.
