Meta’s post-Trump policy changes are setting a new (lower) standard for social media platform governance. Although the company’s community notes approach to addressing falsehoods is not working, community notes programs are growing in popularity; YouTube and TikTok are testing their own versions. Meta declined to answer questions about the program, despite promising to be transparent about it. The lack of disclosure on the efficacy of community notes is particularly concerning given that Meta is also revamping its Facebook Content Monetization program, which pays bonuses to creators based on views and engagement, “potentially pouring accelerant on the kind of false posts it once policed.”

Also following Meta’s (and YouTube’s) lead, LinkedIn has quietly removed protections for transgender and nonwhite users from its English-language hate speech rules. The recent Make Meta Safe report surveyed over 7,000 active users from 86 countries following Meta’s January policy rollbacks and found that 75% of LGBTQ respondents said harmful content had increased.

Given Meta’s propensity for ignoring (and burying) risks, it’s not surprising that users are speaking out about their safety and privacy concerns. Instagram users (and US senators) are skeptical of the safety of the platform’s new map feature, which allows users to share their real-time location. Some users reported that their geotagged stories appeared on Instagram Map even after they opted out of sharing their live location.

Users also spoke out when Meta used images of Instagram users’ minor daughters - with their faces and names visible - to advertise the Threads app. Children’s posts were highlighted to strangers as “suggested threads.” One user whose daughter’s image was used in an ad said the girl’s posts were being automatically cross-posted to a public Threads account, even though her Instagram account was set to private. Meta said the images did not violate its policies since the posts themselves were not made by minors.

Meta is reportedly planning to automate 90 percent of its privacy and integrity reviews, including in areas like AI safety, youth risk, violent content, and the spread of falsehoods. Current and former employees fear this change will cause real-world harm. Under Meta’s prior system, product and feature updates could not be shipped to billions of users until risk assessors had reviewed them. Under the new system, engineers will make their own judgments about risk, despite lacking expertise in subjects like privacy, youth safety, and information integrity. As EPIC points out, AI cannot accurately assess socio-technical risks or overcome Meta’s flawed policies.

Growing user dissatisfaction with the experience on Meta’s platforms should give companies following in its footsteps pause. Bluesky, an alternative to Threads, grew 372.5% year over year as of June. Bluesky benefited from users leaving X in reaction to similar policy choices and Elon Musk’s allegiance to President Trump. Though Threads currently leads in user numbers, Bluesky has long-term potential because it is built on the open AT Protocol, which supports a more open, user-configurable form of social networking.

Questions to consider

  • How is Meta responding to feedback from users and regulators on its policies? How is it assessing the effectiveness of its recent policy changes? Will it disclose this analysis?

  • For social media platforms that have weakened their hate speech or anti-discrimination policies, what is their justification? How are they measuring the impact of these changes on their users? Will they disclose this analysis?

  • For social media platforms adopting a community notes approach to content moderation, how are they measuring its effectiveness? Will they disclose this analysis?
