In a significant policy shift, Meta has announced changes to its content moderation and political content policies aimed at reducing censorship and promoting free expression.
The changes were unveiled by Joel Kaplan, Meta’s Chief Global Affairs Officer, who emphasized the company’s renewed commitment to open dialogue on its platforms, including Facebook, Instagram, and Threads.
Ending Third-Party Fact-Checking in Favor of Community Notes
One major change is the end of Meta’s third-party fact-checking program in the U.S., which was launched in 2016.
Mr Kaplan acknowledged that while the program aimed to provide users with more context about online content, it often became a tool for censorship.
“Experts, like everyone else, have their own biases and perspectives,” Mr Kaplan stated, highlighting how legitimate political debate was sometimes suppressed.
Meta will now transition to a Community Notes model, similar to the one used on X (formerly Twitter). Under this system, community contributors will write and rate contextual notes, and a note will be displayed only when contributors from a diverse range of perspectives agree on it.
The program will launch in the U.S. over the next few months, and contributors can sign up now to be early participants.
Unlike the previous system, content flagged by Community Notes will carry less obtrusive labels rather than full-screen warnings.
Reducing Over-Enforcement and Allowing More Speech
Meta is also lifting restrictions on several topics central to political discourse, such as immigration and gender identity, to align its platform policies with the broader norms of public debate.
“It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Mr Kaplan said.
Recent internal data revealed that in December 2024, millions of pieces of content were removed daily.
Meta estimates that up to 20% of these removals may have been mistakes, a statistic the company plans to report regularly.
Automated systems will now focus only on illegal and high-severity violations, like terrorism and child exploitation.
For less severe issues, action will require user reports rather than automated detection.
In a bid to improve enforcement accuracy, Meta is adjusting its systems to require greater confidence before taking action and introducing multiple reviewers for contested cases.
The company is also relocating its trust and safety teams from California to Texas and other U.S. locations.
Personalized Political Content
Addressing user feedback since 2021, Meta will allow more political content in users’ feeds but with a personalized approach.
Civic content will now be treated like any other post, with ranking and recommendations based on user engagement signals.
Users will also have expanded options to control the amount of political content they see.
Embracing Free Expression
The changes align with the principles set forth by Chief Executive Officer (CEO) Mark Zuckerberg in his 2019 Georgetown University speech, where he championed free expression as a cornerstone of progress.
“Some people believe giving more people a voice is driving division… I think that’s dangerous,” Mr Zuckerberg stated at the time.
Mr Kaplan echoed this sentiment, acknowledging past mistakes and affirming Meta’s commitment to empowering voices worldwide.
“We want to fix that and return to that fundamental commitment to free expression,” he said.
The policy changes will roll out throughout 2025, with a focus on transparency and user empowerment as Meta navigates the balance between free expression and content moderation.