
PITTSBURGH, PA – From nerve-wracking presentations to navigating crowded social gatherings or difficult conversations, everyday situations can trigger significant stress. While talking therapies help, researchers at Carnegie Mellon University (CMU) are exploring a high-tech approach: using virtual and augmented reality (VR/AR) to help people practice stress management in realistic, simulated environments.

This innovative project, described as a modern take on exposure therapy, allows users to don VR/AR glasses and immerse themselves in scenarios that typically cause anxiety. They can then practice coping mechanisms and communication strategies with digital characters in settings that mimic real life.

Anna Fang, a graduate student in the School of Computer Science’s Human-Computer Interaction Institute and the project’s lead researcher, noted the prevalence of VR/AR in the mental health space, particularly meditation apps. However, she observed a gap: “These apps usually place users in a sanitized, isolated environment—like a virtual forest or beach—while they offer tips and breathing exercises… which makes it hard to transfer those skills into the real world,” Fang explained.

“The project comes from me wanting a practical way for people to learn these skills and apply them to their real lives,” she added. “Can we use virtual and augmented reality to simulate an office environment, or a conflict with someone? Then you can actually practice some of those self-care skills in an environment similar to real life.”

Fang and her team focused on three common stress-inducing scenarios identified through research: public speaking, crowded social events, and interpersonal conflict. They developed 24 distinct prototypes encompassing virtual reality, augmented/mixed reality, and even text-based environments, varying the level of interaction. Some prototypes featured responsive virtual audiences powered by large language models (LLMs), while others had silent observers. Users could access breathing and meditation exercises via controller buttons when needed.

An initial study involving 19 participants yielded overwhelmingly positive feedback. “The participants generally said that it was pretty realistic,” Fang reported. Users appreciated the technology’s ability to foster self-awareness and build self-sufficiency skills. Key feedback indicated a preference for choosing when to receive guidance rather than having it offered automatically, along with a desire to use the technology in specific real-world locations, such as wearing AR glasses at home to practice a difficult conversation or in a classroom ahead of a presentation.

Building on these findings, the team is now developing a more advanced, “full-fidelity deployable model” intended for public release via app stores. This next iteration aims for greater realism, incorporating enhanced text-to-speech features for more natural avatar voices and tones. “If you think about being stressed in a situation, someone’s tone matters a lot,” Fang noted. Avatars will also feature more realistic facial expressions and movements, such as furrowed brows to convey anger.

The range of available self-care strategies will also expand beyond deep breathing to include relaxation techniques, body scanning, and grounding practices designed to manage anxiety or panic attacks.

“We want to use the system not only to help people learn these skills, but also to experiment with different self-care strategies,” Fang stated. “They can experiment in a virtual environment that works best and feels best for them… and then make an informed choice on what to implement in the real world.”

The research team plans to present their work at the upcoming Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI 2025).


Disclaimer: This news article is based on information provided by Carnegie Mellon University regarding research conducted by its faculty and students. It summarizes the findings and future plans as presented in their press release.
