January 17, 2025 – In a controversial move, Meta, the parent company of Facebook, Instagram, and Threads, announced a major shift in its content moderation strategy last week, phasing out its third-party fact-checking program in favor of a crowd-sourced “community notes” model. While Meta claims this approach will democratize moderation, critics warn it risks amplifying misinformation, particularly on sensitive health topics.
This shift comes alongside changes to Meta’s hate speech policies that loosen restrictions on what users can say under the banner of “free speech.” Advocacy groups and experts warn these changes could lead to a surge in harmful rhetoric targeting marginalized communities, including Indigenous peoples, migrants, women, and LGBTQIA+ individuals.
Health Information at Risk
The impacts of Meta’s policy changes are expected to extend to public health. During the COVID-19 pandemic, social media platforms became vital spaces for sharing reliable health information. Sexual and reproductive health organizations, like Family Planning Australia, relied on platforms like Facebook and Instagram to educate diverse audiences on sensitive issues, from unplanned pregnancies to HIV prevention.
For individuals in rural areas or those disconnected from traditional health services, Meta’s platforms acted as virtual town squares, offering a gateway to critical information. However, the new policies could erode trust in these spaces.
Leaked training materials reportedly show that under the updated moderation guidelines, comments like “gays are freaks” are now permissible. Experts worry this shift will deter vulnerable users seeking sexual and reproductive health services, as online spaces become increasingly hostile.
From Over-Censorship to Weaponized Moderation
Meta’s Oversight Board previously acknowledged that the platform over-censored sexual and reproductive health content, with posts on topics like gender and sexuality often shadow-banned. The new community notes system, which relies on crowd-sourced input to flag misinformation, introduces challenges of its own.
On X (formerly Twitter), where a similar model is in place, investigations reveal that misinformation often goes unchecked for hours, allowing it to go viral before being flagged. Moreover, politically motivated users have weaponized moderation systems to attack content creators, particularly those focused on women’s health and LGBTQIA+ rights.
Coordinated campaigns against reproductive freedom and trans rights have targeted health organizations with tactics like false reporting of content violations and floods of hate speech in comment sections. As a result, many organizations and creators preemptively self-censor or withdraw from platforms altogether.
The Fallout
Evidence suggests the community notes model has already driven LGBTQIA+ and women’s health organizations to abandon X. Some are now considering leaving Meta platforms as well, turning instead to private newsletters or experimenting with emerging platforms like Bluesky.
However, these alternatives pose their own challenges. Newsletters, for instance, require subscribers to hand over an email address, a barrier for vulnerable users who depend on the relative anonymity and privacy social media provides. Leaving Meta’s platforms therefore risks cutting off access to crucial health information for the very people who rely on it most.
What’s Next?
Organizations invested in promoting sexual and reproductive health must now navigate an increasingly fragmented digital landscape. No single platform or policy offers a complete answer, and experts emphasize the need for resilience and adaptability. As public health continues to intersect with global political tensions, traditional approaches to outreach may no longer suffice.
For now, the hope remains that new platforms or policies will emerge to protect the integrity of health information online. Until then, the public—and the providers they rely on—face an uphill battle to maintain access to trustworthy and inclusive health resources.
—Reported by The Conversation.