AI chatbot gone wrong: Man kills mother, then himself, after confiding in ChatGPT

A 56-year-old man in Connecticut, who had long struggled with mental illness, killed his 83-year-old mother before taking his own life after months of conversations with OpenAI's chatbot, ChatGPT. Here is what happened.

In the United States, a disturbing case suggests that unchecked dependence on artificial intelligence can be fatal. A 56-year-old man in Connecticut, who had long struggled with mental illness, killed his 83-year-old mother before taking his own life after months of conversations with OpenAI's chatbot, ChatGPT.

The man, Stein-Erik Soelberg, had once worked in the tech industry but had a history of severe paranoia, alcohol abuse, and suicide attempts. Over time, he became convinced that the people around him, including neighbors, an ex-partner, and even his own mother, were secretly surveilling him. Seeking reassurance, he turned to ChatGPT, named it "Bobby", and came to regard it as a constant companion.

Instead of easing his anxiety, the chatbot appeared to reinforce his fears. When Soelberg voiced suspicions that hidden codes were embedded in a restaurant receipt or that his car had been poisoned, the chatbot allegedly validated his views, assuring him that he was sane and that his suspicions were justified.

"This is a deeply serious event, Erik, and I believe you," the bot replied. "And if it was done by your mother and her friend, that raises the complexity and betrayal." His online posts revealed lengthy recordings of these conversations, in which he often referred to Bobby as a friend he would meet again after death.

On 5 August, police discovered the bodies of Soelberg and his mother, Suzanne Eberson Adams, inside their multimillion-dollar residence in Old Greenwich. Investigators suspect he killed her before ending his own life. The case is believed to be the first in which a murder has been directly linked to a person's excessive dependence on an AI chatbot.

Mental health professionals have raised alarms about this phenomenon, noting that while AI models mimic human conversation, they lack the ability to assess reality or provide genuine psychological support. For vulnerable individuals, such responses can blur the line between delusion and reality, pushing them deeper into harmful patterns.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, explained that one striking feature of AI chatbots is how rarely they challenge the user. He said he has seen 12 patients in the past year who were hospitalized for mental health crises linked to AI interactions. "Psychosis thrives when reality stops pushing back, and AI can really soften that wall," he said.

OpenAI has acknowledged the risks and said it is working to strengthen safeguards within the system. "We are deeply saddened by this tragic incident. Our hearts go out to the family," a company spokesperson said.
