
Critics initially scoffed when the social media platform X (formerly Twitter) asked users to flag false or misleading posts. How can we trust the same public that spreads misinformation to correct it? But a recent study by researchers at the University of Rochester, the University of Illinois at Urbana-Champaign, and the University of Virginia found that crowdchecking, the approach behind X's collaborative fact-checking experiment known as Community Notes, actually works.
X posts with public correction notes were 32% more likely to be deleted by their authors than posts whose notes remained private.
The paper, published in the journal Information Systems Research, shows that when a community note flagging a potential inaccuracy appears publicly below a post, the author is much more likely to retract that post.
“Trying to objectively define what misinformation is and then trying to remove that content can be controversial and even counterproductive,” said co-author Huaxia Rui, the Xerox Professor of information systems and technology at the University of Rochester’s Simon Business School. “In the long run, I think a better way to remove misleading posts is for the posters to delete them themselves.”
Using a causal inference technique called regression discontinuity and a large dataset of X posts (formerly known as tweets), the researchers discovered that publicly displayed corrections written by peers can accomplish what experts and algorithms have struggled to achieve. Publicly attaching notes that correct potentially misleading content can indeed “prompt the author to remove that content,” Rui says.
Community Notes on X: an experiment in public correction
Community Notes works on a threshold mechanism: a proposed note must earn a “helpfulness” score of at least 0.4 to be published. (Proposed notes are first shown to contributors for rating. The bridging algorithm used by Community Notes prioritizes ratings from a diverse range of users, especially those who have disagreed in their past ratings, to prevent one-sided partisan voting from manipulating a note’s visibility.)
Conversely, notes that fall just below that threshold remain unpublished. This design creates a natural experiment: researchers can compare posts whose notes sit just above and just below the cutoff (that is, notes visible to the public versus notes visible only to Community Notes contributors), and thereby measure the causal effect of public exposure.
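The article does not include the study's code, but the logic of the design is easy to illustrate. Below is a minimal Python sketch of a local regression discontinuity comparison on simulated data. The 0.4 cutoff comes from the article; the column names, the bandwidth, and the simulated deletion rates are all illustrative assumptions, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per noted post, with the note's final
# helpfulness score and whether the author later deleted the post.
rng = np.random.default_rng(0)
n = 20_000
score = rng.uniform(0.2, 0.6, n)           # simulated helpfulness scores
public = (score >= 0.4).astype(int)        # notes at/above 0.4 go public
# Simulate a jump in deletion probability at the cutoff (the causal effect).
p_delete = 0.05 + 0.10 * (score - 0.4) + 0.03 * public
deleted = rng.binomial(1, np.clip(p_delete, 0, 1))

df = pd.DataFrame({"score": score, "public": public, "deleted": deleted})

# Local linear regression discontinuity: keep only posts whose notes fall
# within a narrow bandwidth of the cutoff, and let the slope differ on
# each side of the threshold.
h = 0.05  # illustrative bandwidth
local = df[(df.score - 0.4).abs() <= h].assign(centered=lambda d: d.score - 0.4)
fit = smf.ols("deleted ~ public + centered + public:centered", data=local).fit()

# The coefficient on `public` estimates the jump in deletion rates caused
# by a note crossing the threshold and becoming publicly visible.
print(fit.params["public"])
```

The intuition is that posts whose notes score 0.39 and 0.41 are essentially identical in content and audience; the only systematic difference is whether the note was shown publicly, so any jump in deletion rates at the cutoff can be attributed to public exposure.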
In total, the researchers analyzed 264,600 X posts that received at least one community note across two periods. The first covers the run-up to the U.S. presidential election (June-August 2024), when misinformation typically spikes; the second covers a two-month window after the election (January-February 2025).
The results were striking. X posts with public correction notes were 32% more likely to be removed by their authors than posts with only private notes, demonstrating the power of voluntary retraction as an alternative to forced content removal. The effect persisted across both study periods.
The reputation effect
The research team found that authors’ decisions to retract or delete are driven primarily by social concerns. “You’re worried that your online reputation will be damaged if others think your information is misleading,” says Rui.
The researchers say that publicly displayed community notes, which highlight factual inaccuracies, serve as a signal to online viewers that the content, and by extension the author, cannot be trusted.
Speed is critical in the social media ecosystem, where misinformation tends to spread faster and farther than corrections, and where reputation matters most for influential users.
The researchers found that public notes not only increase the likelihood that a post will be deleted, but also speed up the process. Among retracted X posts, the sooner a note was published, the sooner the flagged post came down.
People whose posts attract significant attention or engagement, or who have large follower bases, face greater reputational risk. Accordingly, verified X users (those with a blue check mark), who have more credibility at stake, were especially quick to delete posts once they received public community notes.
The overall pattern suggests that online accuracy may be enhanced by the dynamics of social media itself, such as status, visibility, and peer feedback.
Democratic defense against misinformation?
The researchers concluded that crowdchecking “balances the protection of First Amendment rights with the urgent need to curb misinformation.” It relies on collective judgment and public correction, not censorship, and the algorithm behind Community Notes rewards notes that draw support from diverse viewpoints.
Rui admits that he was initially surprised by the strength of the team’s findings. “For people to be willing to retract is like admitting their own mistakes, which is difficult for anyone to do, especially in today’s hyperpolarized environment with echo chambers,” he says.
At the outset, the researchers suspected the corrective mechanism might backfire: would public notes actually encourage people to retract problematic posts, or would the public callout deter them?
Now they know it works.
“Ultimately, voluntarily removing misleading or false information is a more civil and perhaps more sustainable way to solve the problem,” Rui said.
More information: Yang Gao et al, “Can Crowdchecking Suppress Misinformation? Evidence from Community Notes,” Information Systems Research (2025). DOI: 10.1287/isre.2024.1609
Provided by University of Rochester
Citation: The most effective online fact-checkers? Peers (November 17, 2025). Retrieved November 18, 2025 from https://techxplore.com/news/2025-11-Effective-online-fact-checkers-peers.html
This document is subject to copyright. No part may be reproduced without written permission, except in fair dealing for personal study or research purposes. Content is provided for informational purposes only.
