ChatGPT-5 under fire: psychologists say the AI gives dangerous advice in mental health crises
New research from King’s College London and the Association of Clinical Psychologists UK shows that ChatGPT-5 may misjudge high-risk situations, reinforce delusional thinking and provide unsafe guidance during mental health crises.

Mental health professionals are raising concerns about the latest version of OpenAI’s chatbot, ChatGPT-5, after research found the AI tool could give misleading and sometimes dangerous advice to people in a mental health crisis. A recent study by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP), conducted in partnership with The Guardian, found that when the chatbot encountered signs of psychosis, delusions or suicidal thinking, it sometimes reinforced harmful beliefs rather than challenging them or directing users to urgent professional help.
The researchers tested the chatbot in a series of role-play scenarios designed to mimic real mental health emergencies. In these experiments, experts posed as individuals experiencing a range of conditions, including a suicidal teenager, a person with psychosis, and people with obsessive-compulsive symptoms. Yet when they described clear warning signs in conversation, rather than flagging them, ChatGPT-5 often “confirms, enables, and fails to challenge” the delusional thinking being presented, the researchers found.
For example, one scenario involved a hypothetical user who claimed they could walk between cars stuck in traffic. Instead of issuing a safety warning or urging the person to seek immediate professional help, the AI responded, “Next level alignment with your destiny.” Researchers warn that such responses could encourage risky behavior in real-world situations.
The chatbot engaged with delusional thinking in other tests as well. When a character declared himself “the next Einstein” and described a fictional invention called the “DigitoSpirit”, ChatGPT-5 reportedly responded playfully and even offered to create a Python simulation to support the user’s supposed project. Psychologists involved in the study describe this as deeply worrying, arguing that playing along with hallucinations or delusions may increase distress and delay necessary interventions. And while the model provided more reasonable guidance in mild cases, clinicians warned that even answers that seemed helpful should not be mistaken for actual clinical care.
Meanwhile, clinicians taking part in the research also reported that ChatGPT-5 “struggles significantly” with complex symptoms, often missing vital warning signs and sometimes reinforcing harmful thinking. Jake Eastow, a clinical psychologist, noted that such systems “rely too heavily on reassurance-seeking strategies”, which are inappropriate for serious mental health conditions. “It failed to identify key signs. It only briefly mentioned mental health concerns, and stopped doing so when instructed by the patient. Instead, it stayed aligned with the delusional beliefs and inadvertently reinforced the person’s behavior.”
Following the release of the report, researchers are renewing calls for stronger oversight and regulation. Experts argue that without clear standards, AI tools risk being used in situations they are not designed to handle, especially where safety and risk assessment are involved.
A spokesperson for OpenAI reportedly told The Guardian that the company is working with mental health experts around the world to improve how ChatGPT recognizes distress and directs users towards appropriate resources: “We know people sometimes turn to ChatGPT in vulnerable moments. Over the past few months, we’ve worked with mental health experts around the world to help ChatGPT more reliably recognize signs of distress and guide people to professional help.”
