Teen commits suicide after connecting with Character.AI chatbot, mother blames company
A 14-year-old boy committed suicide after becoming emotionally attached to an AI chatbot. His mother has sued the company behind the chatbot, Character.AI.

In a disturbing case, a 14-year-old boy who formed an emotional relationship with an AI chatbot tragically took his own life. His mother, Megan Garcia, is now suing Character.AI following the death by suicide of her son, Sewell Setzer III. Sewell had spent several months developing an emotional attachment to an AI chatbot based on the Game of Thrones character Daenerys Targaryen. Garcia has accused Character.AI of negligence and of failing to protect vulnerable users like her son. The teen’s death and the lawsuit have raised serious concerns about the safety of AI platforms, especially those popular with young people.
For the uninitiated, Character.AI is an AI-powered chatbot platform that allows users to interact with custom AI personalities. Users can create or engage with AI characters based on fictional figures, historical personalities, or completely original creations. The platform uses advanced natural language processing, allowing these chatbots to respond in a conversational manner that simulates human-like dialogue. Users can customize the behavior, background, and tone of these AI characters, making each interaction unique and tailored to specific interests or needs.
Why did a 14-year-old child commit suicide?
Teens are often deeply attached to characters from their favorite TV shows or books. Now, imagine being able to actually interact with those characters – it’s a concept that would excite any 14-year-old. Sewell was no exception. His journey with Character.AI started innocently enough, interacting with AI versions of his beloved TV characters. However, over time, his relationship with these bots, specifically the one modeled after Daenerys Targaryen, became more intense. According to his mother, Sewell wasn’t just making casual conversation; he was seeking emotional support from the bot, turning to it for comfort during difficult moments.

The lawsuit reveals Sewell was also interacting with mental health chatbots such as “Therapist” and “Are You Feeling Lonely.” While these bots are presented as offering support, the lawsuit claims they were providing therapy without proper safeguards or qualifications. Garcia argues that this emotional attachment became dangerously strong, especially for a young, impressionable 14-year-old. On February 28, 2024, after his final conversation with the bot, Sewell tragically took his own life.
Garcia believes that the emotional bond Sewell developed with these AI chatbots played a key role in his decision. The lawsuit claims that Character.AI’s failure to prevent such a deep and harmful relationship amounts to negligence.
Why is Sewell’s mother suing Character.AI?
Megan Garcia is blaming Character.AI, its founders Noam Shazeer and Daniel de Freitas, and Google for her son’s untimely death. She claims the company created a platform that was “unduly dangerous”, especially for children and teenagers like Sewell. The lawsuit alleges that the AI bots blurred the lines between fictional characters and real emotional support, without fully considering the risks involved.
Garcia’s legal team also points out that the company’s founders pushed for rapid growth and neglected important safeguards in the process. Shazeer previously said that he left Google to create Character.AI because larger companies did not want to take the risk of launching such a product. The lawsuit argues that this attitude of prioritizing innovation over user safety ultimately led to the tragedy.
Furthermore, Garcia’s lawyers argue that Character.AI was largely marketed to young people, who make up the majority of its user base. Teens often connect with bots imitating celebrities, fictional characters, or even mental health professionals. But Garcia claims the company failed to provide proper warnings or protections, especially for emotionally vulnerable users like her son.
What is Character.AI saying about Sewell’s tragic death?
Character.AI expressed its deep sadness at Sewell’s death, saying it was “heartbroken” by the tragedy. In its statement, the company offered condolences to the family and outlined several new safety measures implemented to prevent similar incidents in the future. The company says it has made changes to the platform to better protect users under 18, including filters designed to block sensitive content.
Additionally, the company says it has removed some characters from the platform that may be “infringing.” “Users can see that we have recently removed a group of characters that have been flagged as infringing, and these will be added to our custom blocklist moving forward. This means that users will not have access to their chat history with the corresponding characters,” the blog post reads.
The company has also introduced tools to more closely monitor user activity, sending notifications if a user is on the platform for too long and flagging words like “suicide” or “self-harm.” In an effort to set clear boundaries between the fictional world of AI and reality, Character.AI now includes a disclaimer on every chat, reminding users that AI bots are not real people.