After Meta relaxed its content rules to please Trump, Facebook saw an increase in bullying and violent content


Meta’s latest Integrity Report provides the first concrete look at the impact of its January 2025 overhaul of content moderation, and the findings suggest a troubling trend: harmful content is on the rise as enforcement measures are scaled back.

Key Takeaways:

1. Rise in Harmful Content Across Facebook

  • The prevalence of violent and graphic content rose from 0.06–0.07% of viewed content in late 2024 to roughly 0.09% in Q1 2025.

  • Bullying and harassment also ticked up to about 0.08%, which Meta attributes to a spike in violating content in March.

  • These small percentages represent millions of pieces of content given the scale of Meta’s user base; a rough back-of-the-envelope calculation follows below.
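To put those figures in perspective: Meta measures prevalence as the share of content views that contain violating material, so even fractions of a percent translate into very large absolute numbers. A minimal sketch, assuming a purely hypothetical figure for daily content views (this number is not from the report):

```python
# Rough scale illustration for the prevalence figures above.
# ASSUMPTION: daily_views is a hypothetical placeholder, NOT a figure
# from Meta's Integrity Report.
daily_views = 100_000_000_000  # hypothetical: 100B content views per day

for label, prevalence in [
    ("violent/graphic, late 2024", 0.0007),    # 0.07%
    ("violent/graphic, Q1 2025", 0.0009),      # 0.09%
    ("bullying/harassment, Q1 2025", 0.0008),  # 0.08%
]:
    print(f"{label}: ~{daily_views * prevalence:,.0f} violating views/day")

# Under this assumption, a 0.02-point rise (0.07% -> 0.09%) alone
# adds roughly 20 million violating views per day.
```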

2. Enforcement Has Plummeted

  • Hate speech removals dropped to 3.4 million, the lowest since 2018.

  • Spam removals roughly halved, from 730 million to 366 million (the relative declines are computed in the sketch after this list).

  • Fake account takedowns fell from 1.4 billion to 1 billion.

  • Meta now focuses enforcement on the most extreme violations (e.g., child exploitation, terrorism), while easing rules around politically charged topics such as immigration, race, and gender identity.
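Taken together, the quoted figures imply steep relative declines in enforcement. The arithmetic, using only the numbers cited above:

```python
# Relative declines in enforcement, computed from the figures quoted above.
drops = {
    "spam removals": (730_000_000, 366_000_000),
    "fake account takedowns": (1_400_000_000, 1_000_000_000),
}
for metric, (before, after) in drops.items():
    pct = (before - after) / before * 100
    print(f"{metric}: {before:,} -> {after:,} ({pct:.0f}% drop)")
# spam removals: ~50% drop; fake account takedowns: ~29% drop
```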

3. Redefined Hate Speech Policy

  • The definition of hate speech has been narrowed. Content expressing contempt, exclusion, or inferiority is no longer automatically flagged unless it includes direct attacks or dehumanizing language.

4. Fact-Checking Overhaul

  • Meta discontinued third-party fact-checking in the U.S. and replaced it with Community Notes, a crowd-sourced system now active on Facebook, Instagram, and Threads, and extended to Reels and to replies (a sketch of the ranking approach it builds on follows this list).

  • Meta has published no data yet on its effectiveness, and experts warn the system could be vulnerable to bias or manipulation, especially without professional oversight.
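Meta has said its Community Notes system builds on X’s open-source, bridging-based ranking algorithm, in which a note is surfaced only if users who normally disagree both rate it helpful. A minimal sketch of that style of matrix factorization (illustrative only; the hyperparameters and threshold are assumptions, not Meta’s implementation):

```python
import numpy as np

# Bridging-based note ranking, in the style of X's open-source
# Community Notes algorithm. Each rating is modeled as
#   r[u, n] ~ mu + user_bias[u] + note_bias[n] + user_vec[u] @ note_vec[n]
# The latent vectors absorb viewpoint-driven agreement, so a high
# note_bias means a note is rated helpful ACROSS viewpoints.
# All hyperparameters below are illustrative assumptions.

def rank_notes(ratings, n_users, n_notes, dim=1, lr=0.05,
               reg=0.03, epochs=200, threshold=0.4):
    """ratings: list of (user_id, note_id, score), score in {0.0, 1.0}."""
    rng = np.random.default_rng(0)
    mu = 0.0
    user_bias = np.zeros(n_users)
    note_bias = np.zeros(n_notes)
    user_vec = rng.normal(0, 0.1, (n_users, dim))
    note_vec = rng.normal(0, 0.1, (n_notes, dim))

    for _ in range(epochs):  # plain SGD over all observed ratings
        for u, n, r in ratings:
            pred = mu + user_bias[u] + note_bias[n] + user_vec[u] @ note_vec[n]
            err = r - pred
            mu += lr * err
            user_bias[u] += lr * (err - reg * user_bias[u])
            note_bias[n] += lr * (err - reg * note_bias[n])
            u_old = user_vec[u].copy()
            user_vec[u] += lr * (err * note_vec[n] - reg * user_vec[u])
            note_vec[n] += lr * (err * u_old - reg * note_vec[n])

    # A note is shown only if its bias (viewpoint-independent
    # helpfulness) clears the threshold.
    return [n for n in range(n_notes) if note_bias[n] >= threshold]
```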

5. Meta’s Justification: Fewer Mistakes

  • Meta claims moderation errors dropped by 50% in the U.S. in Q1 2025 compared to late 2024.

  • However, Meta hasn’t explained how it measures error rates; future reports are expected to include metrics on this for transparency.

6. Teen Protections Still in Place

  • Moderation remains stricter for teens. New Teen Accounts are being rolled out with filters against harmful content.

  • Meta also touts its use of large language models (LLMs) in moderation, saying they now outperform human reviewers in some policy areas and can clear content from review queues, reducing the load on human moderation teams (a hypothetical sketch of such a triage pipeline follows).
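Meta describes the LLMs as a first pass that handles high-confidence calls and removes content from review queues, with ambiguous items still going to people. A hypothetical sketch of that kind of triage pipeline; the `call_llm` stub, label set, and threshold are assumptions, not Meta’s internal system:

```python
# Hypothetical LLM-assisted moderation triage: the model handles
# high-confidence decisions; everything else goes to human review.
from dataclasses import dataclass

LABELS = ("benign", "bullying_harassment", "violent_graphic")  # assumed label set

@dataclass
class Verdict:
    label: str
    confidence: float

def call_llm(text: str) -> Verdict:
    """Placeholder for a hosted model prompted to classify `text`
    against LABELS. Returns a dummy verdict here."""
    return Verdict(label="benign", confidence=0.5)  # stub: always ambiguous

def triage(text: str, auto_threshold: float = 0.95) -> str:
    verdict = call_llm(text)
    if verdict.label == "benign" and verdict.confidence >= auto_threshold:
        return "auto_approve"   # model clears the item from the review queue
    if verdict.label != "benign" and verdict.confidence >= auto_threshold:
        return "auto_action"    # high-confidence violation
    return "human_review"       # ambiguous: keep a person in the loop

print(triage("example post text"))  # -> "human_review" with the stub above
```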


The Bigger Picture:

Meta’s strategy marks a philosophical shift: prioritizing free expression and fewer enforcement errors over strict content policing. While this may reduce friction in politically sensitive areas and avoid overreach, it opens the door to more abuse, misinformation, and harassment, especially for vulnerable groups.

With AI increasingly in charge and community moderation replacing professionals, the risk is that speed and scale may come at the cost of accountability and trust.

The backlash is likely to grow if harmful content continues to increase, especially as the next U.S. elections approach, when Meta’s platforms will face intense scrutiny over misinformation and online harm.


 
