Online Misinformation: What the Evidence Shows About Spread and Effects

nonacademicresearch.org Editorial

Submitted: May 9, 2026
Version: v1
License: CC-BY-4.0
Identifier: nar:696w5ct9ayucxylb6n

Abstract

Concern about the spread of false information online has prompted extensive empirical research over the past decade. A major MIT study found that false news spread faster and wider on Twitter than true news, driven by novelty and emotional content rather than bots. However, subsequent research has found that consumption of political misinformation is concentrated among a small fraction of users, and that the relationship between misinformation exposure and belief or behavior change is weaker than often assumed. Correcting misconceptions is possible but requires sustained, credible sources.

Manuscript


title: "The Misinformation Problem: What Research Shows About False Beliefs and Their Effects" abstract: "Concern about the spread of misinformation online — false or misleading information shared through digital networks — has grown substantially since 2016. Research has established that false information spreads rapidly on social media, that corrections have limited effectiveness, and that social media may be less central to the problem than often assumed. Understanding what the evidence does and does not show is important for distinguishing effective interventions from ineffective or harmful ones." topic: technology author: nonacademicresearch.org Editorial date: 2026-05-09

The Misinformation Problem: What Research Shows About False Beliefs and Their Effects

Abstract

Concern about the spread of misinformation online — false or misleading information shared through digital networks — has grown substantially since 2016. Research has established that false information spreads rapidly on social media, that corrections have limited effectiveness, and that social media may be less central to the problem than often assumed. Understanding what the evidence does and does not show is important for distinguishing effective interventions from ineffective or harmful ones.

Background

The term "misinformation" covers a range of related phenomena: false claims stated as fact (disinformation when deliberate), misleading framing of true information, decontextualized media, and satirical content mistaken for fact. Public discourse since the 2016 U.S. election and the COVID-19 pandemic has treated online misinformation as a central threat to democratic functioning and public health.

The research literature has grown rapidly in response to this concern. Some findings are robust and replicated; others are contested or context-dependent. A recurring challenge is that academic research on misinformation often depends on survey self-report (about what people believe and what content they encountered), behavioral data from social platforms (often proprietary and selectively shared), and experimental designs that may not generalize to real-world information environments.

The Evidence

How False News Spreads

Vosoughi et al. (2018, Science) conducted the most comprehensive analysis to date of information spread on Twitter (now X), tracing roughly 126,000 verified true and false news cascades shared by about 3 million people between 2006 and 2017. False news — as classified by six independent fact-checking organizations — spread faster, reached more people, and penetrated deeper into retweet networks than true news; false news was 70% more likely to be retweeted than true news. This advantage was attributable to humans, not bots: removing bot accounts did not reduce the falsehood advantage.
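
To make these spread measures concrete, the sketch below (Python, using invented toy cascades rather than the study's data or code) shows how the size and depth of a retweet cascade can be computed from (retweeter, source) records. It illustrates the kind of metric reported, not the authors' actual pipeline.

from collections import defaultdict

# Toy retweet cascades: each edge records (retweeter, source_user).
# All users and stories here are invented for illustration.
cascade_edges = {
    "false_story": [("b", "a"), ("c", "a"), ("d", "b"), ("e", "d"), ("f", "e")],
    "true_story":  [("b", "a"), ("c", "a"), ("d", "a")],
}

def cascade_stats(edges):
    """Return (size, depth) of a cascade given (child, parent) edges."""
    children = defaultdict(list)
    nodes = set()
    for child, parent in edges:
        children[parent].append(child)
        nodes.update((child, parent))
    root = (nodes - {child for child, _ in edges}).pop()  # the original poster
    def depth(node):
        kids = children[node]
        return 0 if not kids else 1 + max(depth(k) for k in kids)
    return len(nodes), depth(root)

for story, edges in cascade_edges.items():
    size, d = cascade_stats(edges)
    print(f"{story}: size={size}, depth={d}")
# false_story: size=6, depth=4
# true_story:  size=4, depth=1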

The mechanism appeared to be novelty: false claims were rated as more novel and more likely to inspire surprise, fear, and disgust — emotions that drive sharing behavior. True information, by contrast, was typically less surprising (because it usually describes things that already happened or are well established).

Who Encounters Misinformation

An important empirical question is how widely misinformation is actually encountered, and by whom. Guess et al. (2019, Science Advances) analyzed Facebook sharing data alongside survey data from the 2016 U.S. election and found that fake news sharing was highly concentrated: approximately 10% of Facebook users accounted for 65% of the fake news stories shared, and those sharers were disproportionately over 65. Most users, including most older users, shared no fake news stories at all.

This finding complicates narratives about misinformation as a mass phenomenon. A minority of individuals account for the majority of misinformation sharing, and they are not representative of the broader population.
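
A small simulation (hypothetical, not the study's data) shows how this kind of concentration falls out of heavy-tailed sharing behavior, where most users share nothing and a few share a great deal.

import numpy as np

rng = np.random.default_rng(0)
# Simulated per-user fake-news share counts from a heavy-tailed
# (Zipf) distribution, shifted so the modal user shares nothing.
shares = rng.zipf(2.0, size=10_000) - 1

most_active_first = np.sort(shares)[::-1]
top_decile = most_active_first[: len(shares) // 10]
print(f"share of all posts from the top 10% of users: "
      f"{top_decile.sum() / shares.sum():.0%}")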

Allen et al. (2020, PNAS) combined behavioral tracking with survey data and found that actual exposure to misinformation websites was far lower than survey self-reports implied: many more people reported encountering misinformation than actually visited misinformation sites. Social media use in their sample explained only a small fraction of the variance in misinformation exposure.
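
The gap between self-report and behavior can be checked directly once survey answers are joined to logged browsing, as in this minimal sketch (all respondents and records are invented).

# Survey answers to "Did you visit misinformation sites?" joined
# to logged visit counts; all data invented for illustration.
survey_said_yes = {"u1": True, "u2": True, "u3": False, "u4": True}
logged_visits = {"u1": 4, "u2": 0, "u3": 0, "u4": 0}

over_reporters = [u for u, yes in survey_said_yes.items()
                  if yes and logged_visits.get(u, 0) == 0]
print(f"reported exposure with no logged visits: "
      f"{len(over_reporters)} of {len(survey_said_yes)} respondents")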

The Correction Problem

One of the most robust findings is that corrections to misinformation are partially effective but rarely reverse false beliefs completely. Chan et al. (2017, Psychological Bulletin) meta-analyzed 52 experimental studies and found that corrections reduced false beliefs by approximately 57% on average — meaning substantial residual misinformation persisted even after correction. The backfire effect — the claim that corrections sometimes strengthen false beliefs — was not reliably observed in this analysis; corrections generally helped or were neutral, not counterproductive.
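
Toy arithmetic makes the residual concrete (the scale and numbers below are hypothetical, chosen only to illustrate a 57% average reduction).

baseline = 2.0         # mean endorsement of a false claim with no misinformation
after_misinfo = 5.0    # mean endorsement after exposure, on a 7-point scale
effect = after_misinfo - baseline        # 3.0 points of misinformation effect

after_correction = after_misinfo - 0.57 * effect
print(f"endorsement after correction: {after_correction:.2f}")  # 3.29
print(f"residual effect: {after_correction - baseline:.2f} "
      f"of {effect:.1f} points")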

However, corrections face the problem of continued influence: Lewandowsky et al. (2012, Psychological Science in the Public Interest) demonstrated that corrected misinformation continues to influence subsequent reasoning, even when people accurately recall the correction. The initial false claim leaves an inference structure that persists.

Prebunking — warning people about misinformation before they encounter it — has shown promising results. A randomized experiment by Roozenbeek et al. (2022, Science Advances) using YouTube ads reaching millions of users found that prebunking videos inoculated against specific misinformation techniques (emotional manipulation, conspiracy thinking) with measurable effects on misinformation ratings that persisted weeks after exposure.
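
The core comparison behind such an experiment is a simple treatment-versus-control difference in ratings, sketched here with simulated data (these ratings are invented, not the study's).

import statistics

# Simulated manipulativeness ratings (1-7) of a misleading post.
control = [3.1, 2.8, 3.4, 2.9, 3.2, 3.0, 2.7, 3.3]
prebunked = [4.2, 3.9, 4.5, 4.1, 3.8, 4.4, 4.0, 4.3]

effect = statistics.mean(prebunked) - statistics.mean(control)
print(f"estimated prebunking effect: {effect:.2f} points")  # 1.10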

Is Social Media the Primary Cause?

Several studies have challenged the assumption that social media exposure is the primary driver of false beliefs. Settle (2018, Frenemies: How Social Media Polarizes America) and Guess et al. (2018, Political Communication) found that heavy social media use was associated with some exposure to cross-cutting political information — not solely echo chambers. Bail et al. (2018, PNAS) found that exposure to opposing political views on Twitter increased, not decreased, partisan polarization.

Tightly sealed echo chambers appear to be improbable for most people's actual social media experience, and other information sources — partisan cable television, talk radio — may play a more significant role in forming false beliefs than the social-media-centric narrative implies.

Counterarguments

Some researchers argue that misinformation research is hampered by the difficulty of defining misinformation in contested political contexts, where reasonable people can disagree about what is true. The labeling of content as "misinformation" necessarily involves epistemic authority claims that are themselves contested.

Others note that misinformation need not be widespread to cause aggregate social harm: even if only a small fraction of the population holds a particular false belief, the consequences can be serious if those individuals are concentrated in particular electorates or juries, or if the belief drives healthcare decisions.

What We Can Conclude

False information does spread faster than true information on social media, and corrections have limited efficacy. However, actual exposure to misinformation is concentrated among a minority of users, and the causal contribution of social media to political misinformation relative to other information sources (television, family networks) is not established as dominant.

The evidence points toward targeted interventions — prebunking, platform design changes reducing virality incentives, and improving media literacy among specific high-risk groups — rather than broad content moderation or platform regulation, which carry risks of viewpoint suppression and have uncertain efficacy. Framing misinformation as a social media problem that can be solved by platform moderation alone overstates the evidence.

Versions (1)

  • v1 (May 9, 2026): initial publication (md)

Cite this paper

APA

nonacademicresearch.org Editorial (2026). Online Misinformation: What the Evidence Shows About Spread and Effects. nonacademicresearch.org. nar:696w5ct9ayucxylb6n

BibTeX
@misc{73bnfzyq,
  title = {Online Misinformation: What the Evidence Shows About Spread and Effects},
  author = {nonacademicresearch.org Editorial},
  year = {2026},
  howpublished = {nonacademicresearch.org},
  note = {nar:696w5ct9ayucxylb6n},
}

Temporary identifier. This paper carries a temporary nar:* identifier valid for citation within the independent research community. A permanent DOI will be minted via DataCite once the platform completes nonprofit registration.
