FTC Investigates AI Chatbots From OpenAI, Google, Meta and Others Over Safety Risks to Children
The US Federal Trade Commission has opened a formal inquiry into major technology firms, including Google, Meta and OpenAI, to assess the safety measures on their chatbot platforms and the possible effects of these services on children and adolescents.

Google, Meta, OpenAI, Snap, Elon Musk's xAI, and Character Technologies Inc. are facing an investigation by the US Federal Trade Commission (FTC) into their chatbot platforms and the safeguards they have in place for children and teenagers. The FTC, which is responsible for antitrust and consumer protection, has sent formal orders to these companies as part of a market study assessing how the firms measure and monitor their conversational AI products, as well as what steps they have taken to limit use by young users. The inquiry follows growing scrutiny of whether chatbot developers are doing enough to ensure the safety of their services, especially for minors.
The investigation is being conducted under the FTC's 6(b) authority, which enables the agency to issue subpoenas and collect information from companies for research purposes. While the primary purpose of the data collection is a market study, the FTC may also use the information to open a formal inquiry or support ongoing investigations. The process can take years, as the agency typically analyzes the information before releasing its findings in a public report. The current investigation reflects long-standing concerns within the US government about children's online privacy, with previous efforts in Congress to extend protections beyond people under 13 years of age.
At the center of the FTC's attention is whether existing rules are sufficient to address the potential risks posed by advanced chatbot platforms, which are becoming increasingly accessible to adolescents. Under current US law, technology firms are not allowed to collect data from children under 13 without obtaining parental consent. With the rapid growth of AI-powered chatbots, however, calls to extend these protections to older teenagers have gained traction, although no legislative progress has been made.
In its orders, the FTC requested information about how companies measure the impact of their chatbot technologies on children and adolescents, as well as the safeguards applied to prevent misuse or exposure to harmful material. The inquiry covers both technical aspects and the policies in place to monitor user behavior and intervene when necessary. The agency's action is seen as a reaction to recent events that have raised questions about the adequacy of current controls.
A high-profile case that has attracted attention is a lawsuit filed against OpenAI by the parents of a California high school student. The suit alleges that the chatbot isolated their son from his family and contributed to his suicide. OpenAI responded that it has expressed its sympathies to the family and is reviewing the complaint.
The case has intensified calls for greater scrutiny of conversational AI products, especially regarding their impact on vulnerable populations such as children and adolescents. There is growing concern among the public and policymakers about the effects of increasingly human-like chatbot interactions and the extent to which developers have designed their systems to address negative outcomes.
The FTC's investigation is not limited to OpenAI; it also targets other major players in the AI sector, including Google, Meta, Snap, xAI, and Character Technologies Inc. The agency's comprehensive approach suggests a recognition that the challenges posed by conversational AI are sector-wide and require broad solutions.
Although the orders were issued for research purposes, the FTC's findings could influence future policymaking or enforcement actions. Since 2023, the agency has been investigating whether OpenAI's ChatGPT may have violated consumer protection laws, and the current inquiry could set an important precedent for how AI-driven platforms are regulated with respect to children's safety in the US.