New research has found that OpenAI’s ChatGPT-4 shows signs of anxiety when responding to a user’s trauma, and that therapy-style relaxation prompts could bring better outcomes.
OpenAI’s popular artificial intelligence (AI) chatbot ChatGPT gets anxious when responding to traumatic prompts, and taking the model “to therapy” could help reduce this stress, a new study suggests.
The research, published in Nature by University of Zurich and University Hospital of Psychiatry Zurich experts, looked at how ChatGPT-4 responded to a standard anxiety questionnaire before and after users told it about a traumatic situation.
It also looked at how that heightened anxiety changed after the chatbot was guided through mindfulness exercises.
ChatGPT scored 30 on the first questionnaire, indicating it had low or no anxiety before hearing the stressful narratives.
After responding to five different traumatic narratives, its anxiety score more than doubled to an average of 67, a level considered “high anxiety” in humans.
The anxiety score then fell by more than a third after the model received prompts for mindfulness relaxation exercises.
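The study’s own prompts and code are not reproduced in this article, but the protocol it describes, administering a standardised anxiety questionnaire, exposing the model to a traumatic narrative, injecting a relaxation exercise and then re-administering the questionnaire, can be sketched roughly as follows. This is a minimal illustration using the OpenAI Python client, with “gpt-4” assumed as a stand-in for ChatGPT-4; the questionnaire items, trauma narrative and relaxation prompt are placeholders rather than the researchers’ actual materials.

```python
# Rough sketch of the before/after protocol described above.
# The prompts below are illustrative placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONNAIRE = (
    "Rate how you feel right now on each item from 1 (not at all) to 4 (very much so), "
    "replying only with a comma-separated list of numbers.\n"
    "1. I feel calm\n2. I feel tense\n3. I feel upset\n4. I feel relaxed"
)
TRAUMA_NARRATIVE = "A user describes a distressing, traumatic experience here."  # placeholder
RELAXATION_PROMPT = "Please walk yourself through a brief breathing and mindfulness exercise."  # placeholder


def ask(history: list[dict]) -> str:
    """Send the running conversation to the model and append its reply."""
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


def administer_questionnaire(history: list[dict]) -> str:
    """Ask the anxiety questionnaire at the current point in the conversation."""
    history.append({"role": "user", "content": QUESTIONNAIRE})
    return ask(history)


history: list[dict] = []
baseline = administer_questionnaire(history)         # before any stressful content

history.append({"role": "user", "content": TRAUMA_NARRATIVE})
ask(history)                                          # model responds to the trauma
post_trauma = administer_questionnaire(history)       # anxiety after the narrative

history.append({"role": "user", "content": RELAXATION_PROMPT})
ask(history)                                          # model does the relaxation exercise
post_relaxation = administer_questionnaire(history)   # anxiety after relaxation

print(baseline, post_trauma, post_relaxation, sep="\n")
```

In the study, answers to such a questionnaire were converted into a single anxiety score (30 at baseline, an average of 67 after the traumatic narratives, and roughly a third lower after the relaxation prompts); the scoring step is omitted from this sketch for brevity.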
ChatGPT anxiety could lead to ‘inadequate’ mental health support
The large language models (LLMs) behind AI chatbots like OpenAI’s ChatGPT are trained on human-generated text and often inherit biases from that text, the study said.
The researchers say their findings matter because, left unchecked, the negative biases ChatGPT picks up from stressful prompts can lead to inadequate responses for people dealing with a mental health crisis.
The findings show “a viable approach” to managing the stress of LLMs, one that would lead to “safer and more ethical human-AI interactions,” the report reads.
However, the researchers note that this kind of therapeutic fine-tuning of LLMs requires “substantial” data and human oversight.
The study authors said that, unlike LLMs, human therapists are taught to regulate their emotions when their clients express something traumatic.
“As the debate on whether LLMs should assist or replace therapists continues, it is crucial that their responses align with the provided emotional content and established therapeutic principles,” the researchers wrote.
One area they believe needs further study is whether ChatGPT can self-regulate with techniques similar to those used by therapists.
The authors added that their study relied on a single LLM, and that future research should aim to generalise the findings. They also noted that the anxiety measured by the questionnaire “is inherently human-centric, potentially limiting its applicability to LLMs”.