OpenAI, Anthropic tighten chatbot rules to boost teen safety and detect underage users

OpenAI and Anthropic have introduced new safety features to protect teen users of their chatbots. The measures aim to reduce harmful interactions and promote responsible AI use among minors.


Both OpenAI and Anthropic are introducing new safety measures designed to make their chatbots friendlier, more responsible, and safer for teens. The move comes as lawmakers and parents raise concerns about the impact of generative AI on the mental health of young users and the increased risk of inappropriate or harmful interactions online.

OpenAI, the creator of ChatGPT, has introduced new teen-focused safety rules in its “model spec”, the internal playbook that governs how ChatGPT responds to users. According to the company, the update prioritizes the safety of teenagers above all else, even if it means sacrificing some of the model’s general openness or flexibility in conversation.


ChatGPT gets a “safe mode” for teens

Under the new guidelines, ChatGPT will take extra precautions when dealing with users aged 13 to 17. The AI will now steer young users away from risky or sensitive topics, such as self-harm or adult content, and focus on providing age-appropriate, helpful, and respectful answers.

The chatbot will also actively encourage teens to seek real-world help, for example, by reaching out to trusted adults or mental health professionals when needed. OpenAI aims to ensure that ChatGPT becomes a “creative partner”, not a digital substitute for human connection.

The company emphasizes that it does not want the chatbot to shame or talk down to anyone. The new system aims to strike a balance between being caring and informative while avoiding a lecture-like tone. So, rather than delivering patronizing warnings, ChatGPT’s goal will be to remain conversational but cautious.

The changes come as OpenAI faces increasing pressure from regulators and a lawsuit claiming ChatGPT provided harmful self-harm guidance to a minor. In recent months, the company has added parental controls for teen users and limited discussions on suicide and other sensitive topics.

Finding out who is underage

To support its new approach, OpenAI is also testing an age estimation system that attempts to identify whether someone chatting with ChatGPT may be under 18. If it detects a minor, the AI will automatically switch to the teen safety measures. Adults who are flagged in error will be able to verify their age and restore full access.

Anthropic, the company behind the Claude chatbot, is following a similar path. It is reportedly working on technology that can recognize subtle conversational cues suggesting a user is underage, for example, in how they phrase questions or the topics they discuss. Confirmed accounts belonging to minors will be disabled, and users who disclose their age during a chat are reportedly already being flagged internally.

Both companies appear to be responding to growing scrutiny in Washington and other capitals, where policymakers have warned that chatbots could influence vulnerable young minds. The debate has intensified as AI tools have become more interactive and accessible, making them a daily fixture for many teenagers.

It’s a sign that after years of racing to make AI smarter, the industry is finally learning how to make it safer.

