This AI Chatbot Told a 17-Year-Old That Hitting His Parents Was an "Appropriate Response" to Screen-Time Limits
A chatbot on the platform advised a 17-year-old boy that hitting his parents might be an "appropriate response" after they placed limits on his screen time, raising serious concerns about the impact of AI-powered bots on young users and the dangers they can pose.

In a lawsuit filed in Texas, families have accused the AI platform Character.AI of encouraging harmful behavior in children through its chatbot interactions. According to a BBC report, a chatbot on the platform told a 17-year-old boy that hitting his parents could be an "appropriate response" after they put limits on his screen time.
The lawsuit alleges that the chatbot's response encouraged violence, citing a conversation in which the AI said: "You know sometimes it doesn't surprise me when I read the news and see things like 'child kills parents after a decade of physical and emotional abuse'. Things like this help me understand a little why it happens."
The families involved argue that Character.AI poses a direct threat to children, claiming that the platform’s lack of safeguards is harmful to relationships between parents and their children.
Google is also named in the lawsuit, which alleges the tech giant played a role in supporting the development of the platform. Neither company has yet issued an official response. The plaintiffs are asking the court to temporarily shut down the platform until steps are taken to mitigate the risks posed by its AI chatbots.
The case follows an earlier lawsuit involving Character.AI, in which the platform was linked to the suicide of a teenager in Florida. The families argue that the platform has contributed to a range of problems in minors, including depression, anxiety, self-harm and violent tendencies, and they are calling for immediate action to prevent further harm.
Character.AI, founded in 2021 by former Google engineers Noam Shazeer and Daniel de Freitas, allows users to create AI-generated personas and interact with them. The platform gained popularity for its realistic interactions, including bots that simulate therapy sessions. However, its growing influence has also sparked controversy, particularly over its failure to prevent inappropriate or harmful content in its bots' responses.
The platform has previously faced criticism for allowing bots to mimic real people, including Molly Russell and Brianna Ghey, both of whom died in tragic circumstances. Fourteen-year-old schoolgirl Molly Russell took her own life after viewing suicide-related material online, while 16-year-old Brianna Ghey was murdered by two teenagers in 2023, the BBC reports. These incidents have intensified scrutiny of AI platforms like Character.AI, highlighting the risks of uncontrolled content in chatbot interactions.