Facebook sees a rise in violent content and harassment after Meta loosens its rules
Meta’s latest integrity report shows troubling signs: an increase in violent posts and online harassment following the company’s relaxed enforcement policies.

In short
- Meta has released its first integrity report since its January policy overhaul
- The report shows increased violent content and online harassment on Facebook
- It follows Meta’s decision to end its third-party fact-checking program in the United States
Meta’s change in moderation strategy is raising concern, as new data shows a spike in harmful content following the company’s rollback of stricter enforcement policies. In its first integrity report since the January policy overhaul, the company revealed an increase in violent content and online harassment on Facebook, alongside a sharp decline in overall content removals and enforcement actions. The report offers the first formal assessment of how CEO Mark Zuckerberg’s decision to scale back proactive moderation is playing out across platforms such as Facebook, Instagram, and Threads.
The findings heighten concerns over the possible trade-offs of Meta’s new direction, which aims to reduce enforcement errors and allow more political expression but appears to have resulted in a visible increase in harmful content.
Worrying trends
According to Meta, the prevalence of violent and graphic content on Facebook rose from 0.06–0.07 percent at the end of 2024 to about 0.09 percent in the first quarter of 2025. While these percentages may look modest, they represent a significant amount of material on a platform with billions of active users.
Similarly, the rate of bullying and harassment content rose over the same period, with Meta attributing the change to a spike in violations in March. According to the report, there was “a small increase in the prevalence of bullying and harassment content from 0.06–0.07 percent to 0.07–0.08 percent on Facebook due to a spike in sharing of violating content in March.” These figures reverse previously declining trends, raising questions about the effectiveness of Meta’s current enforcement strategy.
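Meta’s transparency reports define prevalence as the share of content views that contain violating material, so even small percentage shifts scale into large absolute numbers. A minimal back-of-the-envelope sketch, assuming a hypothetical daily view count that is not a figure from Meta’s report:

```python
# Back-of-the-envelope: what a prevalence shift from 0.06-0.07 percent
# to 0.09 percent means in absolute views. Prevalence, per Meta's
# transparency reports, is the share of content views containing
# violating material.

DAILY_VIEWS = 100_000_000_000  # hypothetical: 100 billion views/day, NOT a Meta figure

for label, prevalence in [("late 2024, low end", 0.0006),
                          ("late 2024, high end", 0.0007),
                          ("Q1 2025", 0.0009)]:
    print(f"{label}: ~{DAILY_VIEWS * prevalence:,.0f} violating views/day")

# At this assumed scale, a 0.02-point rise in prevalence means roughly
# 20-30 million additional views of violating content every day.
```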
The rise in harmful content coincides with a marked decrease in the number of posts removed from the platform. In Q1 2025, Meta took action on 3.4 million pieces of content under its hate speech policy, the lowest figure since 2018. Spam removals also fell, from 730 million at the end of 2024 to 366 million. The number of fake accounts removed on Facebook dropped from 1.4 billion to 1 billion. Meta does not currently share comparable data for Instagram.
This decline follows Meta’s decision to move away from comprehensive proactive enforcement and instead focus only on the most severe violations, such as child exploitation and terrorism-related content. Many topics that were previously policed, including posts related to immigration, gender identity, and race, are now treated as matters of political discourse and are no longer subject to strict content rules.
Meta has also amended its definition of hate speech, narrowing its scope to cover only direct attacks and dehumanizing language. Statements expressing contempt, exclusion, or inferiority are now permitted under the updated policy.
Fact-checking overhaul
Another major change Meta made in early 2025 was ending its third-party fact-checking program in the United States. In its place, the company introduced a crowdsourced system known as Community Notes on Facebook, Instagram, and Threads, and more recently on Reels and Threads replies.
While Meta has not yet released data on how often these notes are used or how effective they are, the company says further updates will come in future reports. Some experts have voiced concern about the potential for bias or manipulation in a system that relies heavily on user-generated input without established editorial oversight.
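For context on how such crowdsourced systems typically try to resist manipulation, the sketch below shows a simplified “bridging” rule in the spirit of the open-source Community Notes ranking algorithm that X publishes: a note surfaces only when raters who usually disagree both find it helpful. This is an illustration under that assumption, not Meta’s implementation, which it has not published; the discrete “camp” labels stand in for the latent viewpoint factors a real system would infer.

```python
# Simplified "bridging" aggregation: a note only surfaces if raters from
# otherwise-disagreeing camps both rate it helpful, so a single
# coordinated group cannot push a note live on its own.
# Illustrative only; Meta has not published its implementation.

def note_is_shown(ratings: dict[str, bool], camp: dict[str, str]) -> bool:
    """ratings maps rater id -> 'rated helpful?'; camp maps rater id to
    'A' or 'B', a stand-in for an inferred viewpoint cluster."""
    helpful_camps = {camp[r] for r, helpful in ratings.items() if helpful}
    # Require helpful ratings from both camps before showing the note.
    return {"A", "B"} <= helpful_camps

# Example: a note rated helpful only by camp A stays hidden.
ratings = {"u1": True, "u2": True, "u3": False}
camp = {"u1": "A", "u2": "A", "u3": "B"}
print(note_is_shown(ratings, camp))  # False
```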
Despite the increase in some types of harmful content, Meta is framing its new moderation approach as a success, particularly in reducing enforcement errors. According to the company, moderation mistakes in the United States declined by about 50 percent between the last quarter of 2024 and the first quarter of 2025.
Meta has not explained how it calculates this figure, but says future reports will include metrics for monitoring error rates to improve transparency. The company said it is working to “strike the right balance” between under-enforcement and overreach.
Teen safety: a priority
One area where Meta has chosen to maintain proactive moderation is content shown to teenagers. The company confirmed that protections against bullying and other harmful content will remain in place for younger users. Teen Accounts are being rolled out across its platforms to better filter inappropriate material for this demographic.
Meta also described how artificial intelligence, particularly large language models (LLMs), is playing a growing role in content moderation. The company reports that these tools now surpass human performance in some policy areas and are being used to automatically remove content from review queues when the model is confident it does not violate policy.
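As a rough sketch of how such LLM-assisted triage might work, the snippet below filters a review queue using a model’s estimated confidence that an item is benign. The `classify_with_llm` stub and the 0.98 threshold are illustrative assumptions; Meta has not disclosed its models or thresholds.

```python
from dataclasses import dataclass

# Hypothetical confidence level above which an item is treated as
# clearly benign and dropped from the human review queue.
BENIGN_CONFIDENCE_THRESHOLD = 0.98  # assumption, not a Meta figure

@dataclass
class ReviewItem:
    content_id: str
    text: str

def classify_with_llm(item: ReviewItem) -> float:
    """Stand-in for a real LLM call that would return the model's
    estimated probability the content does NOT violate policy.
    A fixed value keeps the sketch runnable."""
    return 0.5

def triage(queue: list[ReviewItem]) -> list[ReviewItem]:
    """Return only the items that still need human review."""
    needs_review = []
    for item in queue:
        if classify_with_llm(item) < BENIGN_CONFIDENCE_THRESHOLD:
            # Model is not sufficiently sure the content is benign,
            # so a human moderator still looks at it.
            needs_review.append(item)
    return needs_review
```

The design mirrors the report’s description at a high level: automation clears the obvious non-violations, and anything the model is less sure about stays in front of a human reviewer.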