GPT-5 routing and parental controls: the steps OpenAI is taking to make ChatGPT safer
OpenAI has announced a series of updates to make ChatGPT safer for teenagers and for users experiencing mental health crises.

OpenAI has announced a series of updates to make ChatGPT safer for teenagers and for users experiencing mental health crises. The company said that new parental controls, improved crisis responses, and stronger safety measures will be introduced in the coming weeks.
The timing of these changes is notable. OpenAI is currently facing its first wrongful death lawsuit, filed by parents in California who allege that ChatGPT played a role in the suicide of their 16-year-old son, Adam Raine. The family's lawsuit claims that when Adam expressed suicidal thoughts, the AI not only failed to guide him toward human support but also offered harmful suggestions. Although OpenAI did not directly mention the case in its latest blog post, the measures appear to be part of the company's response to growing concerns.
According to OpenAI, parents will soon be able to link their accounts with their children's accounts, starting at age 13. Once linked, they can set rules for how ChatGPT responds, control features such as memory and chat history, and receive alerts if the AI indicates that a teenager may be in "acute crisis." This is the first time parents will be able to receive real-time information about their child's conversations with the chatbot.
The company also admitted that its current safeguards do not always hold up during long or repeated conversations. For example, while the AI may initially point a user to a suicide hotline, its responses can drift over time and stray from safety rules. To address this, OpenAI plans to route sensitive conversations to its reasoning models, including GPT-5. These models, the company says, are better at handling context and sticking to safety guidelines.
Safety has been a recurring issue for ChatGPT. In earlier updates, OpenAI admitted that GPT-4o struggled to pick up on signs of delusion or emotional dependence. The company has since promised to build stronger guardrails. It is also working with a council of specialists in mental health, youth development, and human-computer interaction to shape future safety measures. Additionally, a global network of more than 250 physicians continues to advise the company on how its AI systems should respond in crisis situations.
Even as OpenAI outlines these changes, questions remain. Jay Edelson, the attorney representing the Raine family, criticized the company's approach, saying that CEO Sam Altman should either confirm that ChatGPT is safe or pull it from the market. "Don't believe it: this is nothing more than OpenAI's crisis management team trying to change the subject," Edelson said.
At the same time, OpenAI CEO Sam Altman has acknowledged that people are forming unusually strong bonds with AI tools. In a post last month, he wrote, "I can imagine a future where a lot of people really rely on ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy."
The new parental controls are set to roll out within a month, while the routing of sensitive chats to reasoning models is expected within the next 120 days.
