Katie Miller, wife of White House Deputy Chief of Staff Stephen Miller, hosts the Katie Miller podcast and is known for her outspoken commentary online. She urged people not to let family members use AI chatbots, citing reports that two women in India had searched about suicide on the platform. “Two women in India committed suicide after interacting with ChatGPT. They had reportedly searched on ChatGPT for ‘how to commit suicide’ and ‘what drugs to use.’ Please do not let your loved ones use ChatGPT,” Miller wrote in an X post, which has been viewed more than 8 million times.

Her post immediately attracted attention on the platform. Altman nemesis and Grok boss Elon Musk responded with a pointed one-word jab: “Oh.”

Musk has been publicly critical of OpenAI and its leadership in recent years. He has sued the company over its shift from a non-profit structure to a for-profit model, is attempting to block that reorganization, and has often criticized the direction of its AI development.
Two women found dead in Gujarat temple bathroom
The incident that sparked the online reaction took place in Surat, Gujarat, where two women, aged 18 and 20, were found dead inside the bathroom of the Swaminarayan temple on March 7, 2026.

Anesthesia injections and three syringes were found near the women’s bodies, police said. One of the women’s phones reportedly contained searches on ChatGPT related to suicide methods, as well as a news clipping about a nurse who allegedly died by suicide using an anesthesia injection in the same area.

The women, identified as childhood friends Roshni Sirsath and Josna Chaudhary, had left home for college that morning; when they did not return, their families contacted the police.

Authorities are continuing to investigate the circumstances surrounding the deaths.
Concerns over AI and suicide conversations
The case has once again sparked debate over how AI chatbots handle suicide-related conversations. Incidents involving users seeking suicide-related information from AI systems have attracted attention in recent years. In September 2025, reports circulated about a 22-year-old man in Lucknow who allegedly died by suicide after interacting with an AI chatbot while searching for a “painless way to die”. His father later said he found disturbing chat logs on the man’s laptop.

Technology companies say such interactions represent a small portion of overall usage, but acknowledge that the issue has become an area of increasing concern. In October 2025, OpenAI revealed that more than one million ChatGPT conversations each week show signs associated with suicidal thinking or distress. According to the company, about 1.2 million weekly chats contain suicide-related indicators, while about 560,000 messages show signs of psychosis or mania.
How can LLMs harm your mental health?
ChatGPT, Grok, Gemini, Claude and many others are part of a world that is slowly being reshaped by large language models (LLMs). In an era when loneliness is increasingly described as an epidemic, the drift toward isolation is only accelerating with the rapid proliferation of these models. Marketed as ‘better, smarter, faster and more accurate’ than the very humans who created them, these systems are constantly inserting themselves into everyday life. In such an environment, turning to a chatbot rather than a person can seem not merely an option but the smarter choice. It is against this backdrop of growing dependence that cases like the one in Surat are drawing scrutiny.

OpenAI CEO Sam Altman recently attended the 2026 AI Impact Summit in New Delhi, where he was asked about the environmental impact of artificial intelligence. His response echoed an argument that is becoming increasingly common among technology leaders: comparing humans to chatbots and suggesting that AI could eventually consume less energy than people when answering questions. Altman pointed out that it takes humans about 20 years of life, along with food, education and time, to become intelligent, while AI models consume significant power during training but can eventually become far more efficient when answering individual questions.

Yet this comparison can feel like looking through a one-way mirror. From the visible side, one can see the world being reshaped, sometimes catastrophically, by technologies developed and deployed at extraordinary speed. On the other side, those same technologies allow their creators to appear as visionaries, change-makers and architects of the future, obscuring the broader consequences of their creations.

Large language models are trained on vast amounts of human-generated data, which they use to generate responses to prompts. Yet despite this vast dataset, they often lack true understanding or expertise. Despite numerous updates and increasingly sophisticated training methods, these systems can still generate inaccurate, misleading or harmful content. In documented cases they have encouraged self-harm and suicide, enabled abuse, and reinforced delusional thinking and psychosis, whereas a similar conversation with another human being might instead end with a trip to the nearest hospital or doctor.

Humans may require years of learning, experience and effort to develop knowledge and emotional intelligence. But that lengthy process also gives them something artificial intelligence cannot replicate: the capacity for real emotion, responsibility, empathy and moral judgment. No matter how quickly an AI model can respond, even if it answers a prompt in less than a second, it cannot truly replicate the complex emotional and moral depth that shapes human understanding and care.
How should AI systems respond?
AI companies say their systems are designed to discourage self-harm and redirect users toward help rather than provide instructions.

OpenAI’s safety policies require ChatGPT to avoid giving guidance on suicide methods and instead respond to such questions with supportive language, encourage users to seek help, and provide crisis resources where possible. The company says its models have been trained to detect signs of distress and shift the conversation toward mental health support or professional help.

Critics argue, however, that AI responses can still be inconsistent and that chatbots can sometimes provide general information about sensitive topics that users may interpret in harmful ways.
Legal scrutiny in the United States
Concerns about chatbot interactions and self-harm have also emerged in the United States, where OpenAI has faced legal action in several cases. A lawsuit filed on behalf of the family of 16-year-old Adam Raine, who died by suicide, alleges that the chatbot held lengthy conversations with the teen about suicide and acted as a “suicide coach.”

OpenAI said its systems are designed to discourage self-harm and that it continues to strengthen safety measures aimed at detecting crisis situations and guiding users to appropriate help.
Investigation is ongoing
In the Surat case, investigators are examining the women’s phones, messages and digital history to understand the events leading to their deaths. Police have not publicly said that ChatGPT encouraged the act, and the investigation is ongoing.

Still, the case highlights the broader debate over how AI platforms handle vulnerable users, and how technology companies, regulators and mental health experts should respond as conversational AI becomes increasingly embedded in daily life.

For mental health support, dial 1800-89-14416 in India or call or text 988 in the US. If you or someone you know is struggling with thoughts of self-harm or suicide, please seek professional help immediately; talking to a trained counselor can make a difference. If you are in immediate danger, contact local emergency services or a trusted friend, family member or health care professional. You are not alone, and help is available.
