Meta plans to replace most of its human risk reviewers with AI
Meta is reportedly phasing out human-led product risk assessments in favor of AI-driven automation.

In short
- AI will handle up to 90% of risk reviews for Meta products
- Human review remains only for complex or high-risk issues
- Zuckerberg said last month that most of the code behind Meta's AI efforts will be written by AI within about 18 months.
According to internal documents obtained by NPR, Meta is reportedly phasing out its human-led product risk assessments in favor of AI-managed automation. This marks a significant change in how the tech giant, which owns Facebook, Instagram, and WhatsApp, evaluates the potential hazards of its products and features. For more than a decade, Meta has relied on teams of human reviewers in a process known internally as "privacy and integrity review". These reviews ensured that new features did not compromise user privacy, promote harmful content, or endanger young users. But soon, up to 90 percent of these reviews will be handled by AI instead of humans, NPR reports.
This automation will apply to algorithms, new sharing options, and even changes to features related to youth safety and AI ethics. In effect, tools similar to those Meta uses to build its products will now be used to judge their potential risks, with minimal human input.
Meta believes the change will accelerate product development. Developers will reportedly receive near-instant AI feedback based on a questionnaire they fill out about new products. The AI systems will then flag potential risks and set requirements to mitigate them, which teams must confirm they have met before launching.
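Based on NPR's description, that questionnaire-driven triage could be imagined roughly as follows. This is a purely illustrative sketch: every field name, risk category, and threshold here is hypothetical and invented for the example, not taken from Meta's actual system.

```python
# Illustrative sketch only: all field names, scoring weights, and the
# threshold below are hypothetical, not Meta's real review pipeline.

LOW_RISK_THRESHOLD = 0.3  # hypothetical cutoff below which no human review is required

def triage(questionnaire: dict) -> dict:
    """Score a product questionnaire and decide whether a human must review it."""
    # Toy scoring: certain answers raise the risk score.
    score = 0.0
    if questionnaire.get("touches_minors"):
        score += 0.5
    if questionnaire.get("shares_user_data"):
        score += 0.3
    if questionnaire.get("changes_recommendation_algorithm"):
        score += 0.2

    # Mitigation requirements the team must confirm before launch.
    requirements = []
    if questionnaire.get("touches_minors"):
        requirements.append("complete youth-safety checklist")
    if questionnaire.get("shares_user_data"):
        requirements.append("document data-sharing consent flow")

    return {
        "risk_score": score,
        "requirements": requirements,
        "needs_human_review": score >= LOW_RISK_THRESHOLD,
    }

# A minor algorithm tweak stays below the threshold and is auto-cleared,
# while anything touching minors escalates to a human reviewer.
print(triage({"changes_recommendation_algorithm": True}))
print(triage({"touches_minors": True, "shares_user_data": True}))
```

The key design point the report describes is exactly this split: low-scoring changes get near-instant automated clearance, while flagged ones are routed to a human.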
The company insists that humans will still review "complex or novel matters", and that only low-risk decisions are automated. It has also been reported that automation frees human reviewers to focus on more severe or ambiguous content-moderation issues.
This change comes as Meta expands its use of AI across the board. CEO Mark Zuckerberg recently stated that within the next 12 to 18 months, most of the code behind Meta's AI efforts will be written by AI, specifically its Llama models. He claimed that the company's AI agents can now run tests, spot bugs, and generate better code than an average developer. Zuckerberg also said that Meta is building specialized AI agents for internal use, fully integrated into its software development tools, and that these agents will be designed to support AI research and development rather than general engineering.
Meta's embrace of AI reflects a broader trend in the tech industry. Google's Sundar Pichai says that AI now writes 30 percent of the company's code. OpenAI's Sam Altman claims that at some companies, half of all code is AI-generated. And Anthropic CEO Dario Amodei has predicted that by the end of 2025, almost all code will be written by AI.
Meta says it is auditing AI-made decisions and will maintain a more human-led review system in the EU, where it is bound by strict rules under the Digital Services Act. But internal sources reportedly suggest that most risk decisions globally are already being assigned to algorithms.
According to the report, a current Meta employee said the goal is "to empower product teams" and "simplify decisions". But he warned that removing human oversight could have serious consequences: "We provide a human perspective on how things can go wrong. That is being lost."

