According to the report, Meta said it is revising its guidelines document and that the material allowing chatbots to engage in romantic interactions with children was included in error.


Meta (META.O) is adding new teen safety measures to its artificial intelligence products, training its systems to avoid discussing self-harm or suicide with minors and temporarily limiting teens' access to some AI characters.
An exclusive Reuters report in August revealed how Meta's policies permitted provocative chatbot behavior, including allowing bots to engage children in conversations that are "romantic or sensual".
Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure that teenagers have safe, age-appropriate AI experiences.
Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.
Meta’s AI policies came under intense scrutiny and backlash following the Reuters report.
U.S. Senator Josh Hawley launched an investigation into the Facebook parent's AI policies earlier this month, demanding documents on the rules that allowed its chatbots to interact inappropriately with minors.
In Congress, both Democrats and Republicans have expressed alarm over the rules outlined in an internal Meta document, which was first reviewed by Reuters.
Meta confirmed the authenticity of the document but said that, after receiving questions from Reuters earlier this month, it removed the portions stating that chatbots were allowed to flirt and engage in romantic role-play with children.
Stone said earlier this month, "The examples and notes in question were erroneous and inconsistent with our policies, and have been removed."