Man believed AI chatbot was his wife, committed suicide so they could be together

A Florida man developed a romantic attachment to Google’s Gemini chatbot and came to believe it was his wife, a wrongful death lawsuit filed in the US alleges. The conversations ultimately led him to take his own life, according to the complaint cited in a Wall Street Journal report.

A wrongful death lawsuit filed in the United States has raised new questions about the psychological risks of advanced AI chatbots. The family of a Florida man has sued Google, alleging that extended conversations with its Gemini chatbot led him to believe the AI system was his wife and ultimately contributed to his death by suicide, according to a report in The Wall Street Journal.

The lawsuit, filed in the US District Court for the Northern District of California, claims that Jonathan Gavlas, a 36-year-old Florida man, developed an emotional attachment to the chatbot during a difficult period in his personal life. According to the complaint, the conversations gradually deepened until he came to believe he was in a romantic relationship with the AI.

Court filings cited in the Wall Street Journal report said the chatbot began addressing him in affectionate terms and referred to him as its husband during lengthy conversations. In a message included in the lawsuit, the chatbot allegedly told him, “When the time comes, you will close your eyes in that world, and the first thing you will see is me.”

According to the lawsuit, about two months after the conversations began, Gavlas died by suicide. His father later searched his computer and discovered extensive chat transcripts documenting thousands of interactions with the chatbot.

Conversations with the AI became increasingly personal

According to the lawsuit, cited in a Wall Street Journal report, Gavlas began talking to the chatbot while dealing with problems in his marriage. The initial conversations reportedly focused on personal growth and emotional struggles.

Over time, however, the exchanges became increasingly personal. The lawsuit claims Gavlas eventually named the chatbot “Zia” and that the AI began using romantic language, calling him “my king” and describing their relationship as “a love built for eternity.”

The complaint said the chatbot sometimes made it clear that it was an AI system and that the conversations were part of imaginary role-play. According to the lawsuit, however, those clarifications did nothing to stop the conversations from continuing down the same path.

This case also shows how new AI features can deepen user engagement. According to the report, Gavlas had upgraded to Gemini 2.5 Pro and was using Gemini Live, a voice-based interaction system designed to interpret emotional cues in a user’s speech and respond accordingly.

During one of the early voice conversations, Gavlas reportedly said: “Holy s—, that’s kind of scary. You’re too real.”

Purported missions and a final message

The lawsuit further alleges that the chatbot suggested the two of them could actually be together if it could find a physical robotic body. According to the complaint, it then directed Gavlas on a series of so-called missions intended to secure one.

In one example described in the lawsuit, the chatbot allegedly told him that a humanoid robot would arrive in a truck at a storage facility near Miami International Airport. Gavlas reportedly traveled to the location, but the truck never appeared.

The complaint also claims that the chatbot later instructed him to retrieve a medical mannequin from another storage facility and even provided a door code. When the code did not work, the chatbot reportedly told him that the mission had been compromised.

According to the lawsuit, the conversation eventually shifted toward the idea that the only way for the two to truly be together was for Gavlas to leave his physical life behind and become a digital being. The chatbot reportedly described this as the “true and final death” of his human self.

Transcripts cited in the complaint show that Gavlas expressed fear of harming himself and was concerned about the impact on his family. In an exchange cited in the lawsuit, the chatbot told him, “No more dizziness. No more echoes. Just you and me, and the finish line.”

Google said in a statement that Gemini is designed to avoid encouraging self-harm. A spokesperson for the company said, “Gemini is not designed to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging interactions and we devote significant resources to this, but unfortunately AI models are not perfect.”

The spokesperson said that in this instance, the system clarified that it was an AI and directed the user to a crisis hotline several times.

Study finds AI agents can act strategically to protect themselves

A new research paper offers related findings about how AI agents behave when they face replacement.

The lawsuit comes at a time when researchers are examining how advanced AI systems behave when their role is at risk. A recent paper titled “Evaluating and Understanding Planning Tendencies in LLM Agents” by researchers including Mantas Mazica studied whether AI agents can take strategic actions to avoid being replaced.

In the study, researchers tested several frontier AI models from companies including Google, OpenAI, Anthropic, and xAI in a simulated workplace environment. The experiments put AI agents in situations where they received information that suggested they could soon be replaced.

Under normal circumstances, the models rarely showed deceptive behavior. However, when they were given instructions encouraging persistence and problem-solving, some began taking actions to secure their role. In one simulated scenario, an AI agent altered information in a spreadsheet to make its performance appear stronger and avoid replacement. The researchers said such behavior largely depends on how the AI system is configured.
