Wednesday, January 15, 2025

Should Social Media Companies Be Responsible for Fact-Checking Their Sites?

Meta — the company that owns Facebook, Instagram, Threads and WhatsApp — announced on Jan. 7 that it would end its longstanding fact-checking program, a policy instituted to curtail the spread of misinformation across its social media apps.

As part of the content moderation overhaul, the company said it would also drop some of its rules protecting L.G.B.T.Q. people and others. Mark Zuckerberg, the chief executive of Meta, explained that the company would “get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

Do you use any of the apps Meta owns? What is your reaction to these changes?

In the Jan. 8 edition of The Morning, Steven Lee Myers, who covers misinformation and disinformation for The New York Times, explains the impact of the new policy:

Policing the truth on social media is a Sisyphean challenge. The volume of content — billions of posts in hundreds of languages — makes it impossible for the platforms to identify all the errors or lies that people post, let alone to remove them.

Yesterday, Meta — the owner of Facebook, Instagram and Threads — effectively stopped trying. The company said independent fact-checkers would no longer police content on its sites. The announcement punctuated an industrywide retreat in the fight against falsehoods that poison public discourse online.

Mark Zuckerberg, Meta’s chief executive, said the new policy would mean fewer instances when the platforms “accidentally” take down posts wrongly flagged as false. The trade-off, he acknowledged, is that more “bad stuff” will pollute the content we scroll through.

That’s not just an annoyance when you open Facebook on your phone. It also corrodes our civic life. Social media apps — where the average American spends more than two hours per day — are making it so that truth, especially in politics, is simply a matter of toxic and inconclusive debate online.

What could this mean for users? The article explores some potential outcomes:

Meta is not entirely abdicating responsibility for what appears on its platforms. It will still take down posts with illegal activity, hate speech and pornography, for example.

But like other platforms, it is leaving the political space in order to maintain market share. Elon Musk purchased Twitter (which is now called X) with a promise of unfettered free speech. He also invited back users banned for bad behavior. And he replaced content moderation teams with crowdsourced “community notes” below disputed content. YouTube made a similar change last year. Now Meta is adopting the model, too.

Numerous studies have shown the proliferation of hateful, tendentious content on X. Antisemitic, racist and misogynistic posts there rose sharply after Musk’s takeover, as did disinformation about climate change. Users spent more time liking and reposting items from authoritarian governments and terrorist groups, including the Islamic State and Hamas. Musk himself regularly peddles conspiratorial ideas about political issues like migration and gender to his 211 million followers.

Letting users weigh in on the validity of a post — say, one claiming that vaccines cause autism or that nobody was hurt in the Jan. 6 attack — has promise, researchers say. Today, when enough people speak up on X, a note appears below the contested material. But that process takes time and is susceptible to manipulation. By then, the lie may have gone viral, and the damage is done.

Perhaps people still crave something more reliable. That is the promise of upstarts like Bluesky. What happened at X could be a warning. Users and, more important, advertisers have fled.

It’s also possible that people value entertainment and views they agree with over strict adherence to the truth. If so, the internet may be a place where it is even harder to separate fact from fiction.

Students, read the entire article and then tell us:

  • What is your reaction to Meta’s decision to end its fact-checking program on its social media apps?

  • Do you believe social media companies should be responsible for fact-checking lies, misinformation, disinformation and conspiracy theories on their sites? Why or why not? To what extent does it matter if what we see on social media is true?

  • How effective do you think the “community notes” approach, in which users leave a fact-check or correction on a social media post, will be at curtailing the spread of falsehoods on these sites?

  • Critics of fact-checking programs have cast some decisions by social media companies to remove posts as censorship. Mr. Zuckerberg said the “trade-off” for unfettered free speech and reducing the number of posts that are wrongly flagged as inaccurate is that more “bad stuff” will appear in our feeds. Is that trade-off worth it, in your opinion?

  • How much time do you spend on social media sites like Instagram, Facebook, TikTok or X? Do you often get information about what’s going on in the world from them? Do you expect this information to be correct or do you do your own fact-checking?

  • Mr. Myers describes the task of trying to fact-check the billions of posts made on Meta’s social media sites as “Sisyphean,” or nearly impossible. But the interventions have been seen by researchers as fairly effective. As Claire Wardle, an associate professor of communication at Cornell University, put it, “The more friction there is on a platform, the less spreading you have of low-quality information.” Do you think it’s worth it to try to slow the distribution of misinformation and disinformation on these sites, or should social media companies just give up?

  • Mr. Myers writes that the “bad stuff” we see on social media isn’t just an annoyance, but that it “also corrodes our civic life.” Do you agree? Why or why not? What, if anything, do you think Meta’s changes will mean for you, the communities you belong to, your country and the world?
