Comforting talk made GPT-4o a darling for some users, but OpenAI CEO Sam Altman is worried
As more users turn to ChatGPT for support such as therapy, OpenAI CEO Sam Altman appears uncomfortable, issuing repeated warnings about the risks of becoming emotionally attached to the AI chatbot.

In short
- Many users now use ChatGPT for emotional support and journaling
- Altman warns that chats are not private the way therapy or legal consultations are
- The CEO compared GPT-5 to the Manhattan Project, raising safety concerns
It begins quietly enough: a voice note recording an idea at the end of a long day, a conversation with an always-available listener. But soon that listener is not a friend, family member or doctor. It is ChatGPT, specifically GPT-4o, whose personality has apparently won users over with its warm, affirming replies.
Although this novel way of talking to an AI chatbot had been spreading quietly, it all came into the limelight with the launch of GPT-5 days ago. OpenAI’s new AI model has a different personality, and a fairly outspoken group of users immediately demanded that the company bring back 4o. The reason? They love how GPT-4o talks back to them and how it validates their feelings, right or wrong, sensible or not.
On social media platforms such as Instagram and Reddit, people are now sharing how they have turned the AI chatbot into a personal sounding board.

The practice, often called “voice journaling”, involves speaking directly to the bot, using it as both recorder and respondent. On Reddit, there are entire threads in which users describe asking ChatGPT for relationship advice, for reassurance during anxious moments, and even for help processing grief. Some described it as a “safe place” to unload feelings they could not share anywhere else. Others said they journal with ChatGPT daily, treating it almost like a life coach that never judges, never interrupts and never tires.

It is easy to see the appeal. Unlike a therapist, the bot does not charge by the hour; it responds immediately, and it seems endlessly patient. But the growing use of AI in such intimate ways has also begun to worry the people at the top of its development.
Sam Altman tries to warn users
OpenAI CEO Sam Altman has publicly warned that people should think twice before treating ChatGPT as a therapist. “People talk about the most personal stuff in their lives. People use it, young people in particular, as a therapist, as a life coach; having these relationship problems and (asking) what should I do?” he said recently on a podcast with comedian Theo Von.

Unlike real therapy, interactions with ChatGPT are not protected by doctor-patient or attorney-client privilege. “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. And we haven’t figured that out yet for when you talk to ChatGPT,” he said. Deleted chats, he added, may still be recoverable for legal or security reasons.
Privacy is not the only concern. A recent Stanford University study found that AI “therapist” chatbots are not yet equipped to handle mental health responsibilities, often reinforcing harmful stigma or responding inappropriately. In trials, the bots encouraged delusions, failed to recognize crises, and showed bias against conditions such as schizophrenia and alcohol dependence, falling well short of best clinical practice.
The risk goes beyond bad advice. Users’ emotional bonds with AI have become increasingly clear to Altman. In a post on X, he noted that some people had grown deeply attached to the old GPT-4o model, with some describing it as a close friend or even a “digital wife”.
Future challenges remain
When GPT-5 rolled out, replacing GPT-4o for many people, the backlash was sharp. “It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,” Altman wrote, calling the decision to abruptly retire the old models a mistake. This, he believes, is part of a broader moral challenge: AI can affect users in ways that do not always align with their long-term good. The more personal the interactions, the greater the risk.
The GPT-5 launch was not without hiccups either, from a botched chart during its presentation to technical issues that made the model seem less capable at launch. After pushback from users who said they had lost something that always listened to them, OpenAI quickly let Plus users switch back to GPT-4o and even doubled their rate limits. But what it means for a machine to occupy such a trusted role in people’s emotional lives is a deeper question than any quick fix can answer.
For now, ChatGPT’s status as a digital confidant remains an unregulated gray area. While many users swear by the relief and clarity they get from “talking it out”, Altman’s own words reflect an ambivalence about that dependence. While he has acknowledged AI’s potential to enrich lives in the past, he has recently been openly questioning how society should handle its growing intimacy with machines. As he put it, “Nobody had to think about that even a year ago, and now I think it’s a big issue.”