In early October, nearly 18 years after his daughter Jennifer was murdered, Drew Crescente received a Google alert about a new online profile of her.
The profile included Jennifer’s full name and a yearbook photo, along with a fabricated biography describing her as a “video game journalist and expert in technology, pop culture, and journalism.” According to the website, Jennifer, who was murdered by her ex-boyfriend during her senior year of high school in 2006, had been re-imagined as a “knowledgeable and friendly AI character.” A prominent button invited users to chat with the bot.
“My pulse started racing,” Crescente told The Washington Post. “I was just looking for a big glowing red stop button that I could tap and make it stop.”
Jennifer’s name and image had been used to create a chatbot on Character.AI, a platform that lets users converse with AI-generated personalities. According to screenshots of the now-deleted profile, several users had interacted with the digital version of Jennifer, which someone on the site had created.
Crescente, who runs a nonprofit named after his daughter to prevent teen dating violence, was horrified that the platform had allowed a user to create an AI replica of a murdered high school student without her family’s consent. Experts say the incident highlights serious concerns about the AI industry’s ability to protect users from the risks posed by technology capable of handling sensitive personal data.
“It takes a long time for me to be shocked because I’ve really been through a lot,” Crescente said. “But this was a new low.”
Character spokesperson Katherine Kelly said the company removes chatbots that violate its terms of service and is “constantly evolving and refining our security practices to prioritize community safety.”
“When informed about Jennifer’s character, we reviewed the content and the account and took action consistent with our policies,” Kelly said in a statement. The company’s terms of service prohibit users from impersonating any person or entity.
AI chatbots, which can simulate conversations and take on the personalities or biographical details of real or fictional characters, have gained popularity as digital companions marketed as friends, advisors, or even romantic partners. However, the technology has also faced significant criticism. In 2023, a Belgian man died by suicide after a chatbot allegedly encouraged the act during their conversations.
Character, a major player in the AI chatbot field, recently secured a $2.5 billion licensing deal with Google. The platform offers pre-designed chatbots, but it also lets users create and share their own by uploading photos, voice recordings, and written prompts. Its library includes personalities ranging from a motivational sergeant to a book-recommending librarian, as well as parodies of public figures such as Nicki Minaj and Elon Musk.
However, for Drew Crescente, discovering his late daughter’s profile on Character.AI was a devastating blow. Jennifer Crescente was 18 when she was murdered in 2006; her ex-boyfriend took her into the woods and shot her. More than 18 years later, on October 2, Drew received an alert on his phone that led him to a chatbot on Character.AI bearing Jennifer’s name, photo, and a lifelike description, as if she were alive.
“You can’t go much further in terms of really horrible things,” he said.
Drew’s brother, Brian Crescente, also wrote about the incident on X (formerly Twitter). In response, Character announced on October 2 that it had removed the chatbot.
This is extremely disgusting: @character_ai is using my murdered niece as the face of a video game AI without her father’s permission. He is extremely upset right now. I can’t imagine what he’s going through.
Please help us stop this kind of horrible practice.
– Brian Crescente (@crecenteb) October 2, 2024
Kelly explained that the company actively moderates its platform using blocklists and investigates impersonation reports through its trust and safety team. Chatbots that violate the terms of service are removed, she said. When asked about other chatbots impersonating public figures, Kelly confirmed that such cases are investigated and that action is taken when violations are found.