The words misinformation, disinformation, and malinformation appear constantly in discussions about fake news, media literacy, and online manipulation. They are often used interchangeably — but they mean very different things. Confusing them leads to muddled thinking about a problem that requires precise analysis.
This guide explains exactly what each term means, how they differ, and why the distinction matters.
Misinformation is false information shared without the intent to deceive.
Disinformation is false information shared with the deliberate intent to deceive.
Malinformation is true information shared with the intent to cause harm.
Accuracy alone does not define these categories; intent matters just as much. Intent separates misinformation from disinformation, and accuracy separates both of them from malinformation.
Misinformation is what happens when someone shares something incorrect without knowing it is incorrect. There is no deception intended. The person sharing genuinely believes what they are sharing is true.
Examples:

- Sharing an old news story in the belief that it describes a current event
- Repeating a statistic that was misquoted or has since been corrected
- Forwarding a well-meaning health tip that turns out to be false
Misinformation spreads primarily through ignorance and inattention, not malice. In some ways this makes it harder to combat than disinformation: the people spreading it are not bad actors but ordinary people who did not verify before sharing.
Why it matters: Correcting misinformation requires education and friction, which means slowing the reflex to reshare and making it easier to verify claims first. Blame and shame don't work, because the person spreading it didn't know they were doing anything wrong.
Disinformation is intentional. Someone creates or spreads false information knowing it is false, with the specific goal of deceiving an audience. This is the category most people think of when they hear “fake news.”
Examples:

- Fabricated articles published on sites designed to look like legitimate news outlets
- Doctored images or invented quotes attributed to public figures
- Coordinated accounts amplifying a narrative their operators know to be false
Disinformation is a deliberate weapon. It is designed, targeted, and deployed with specific goals — whether political, financial, or social.
Why it matters: Combating disinformation requires identifying and disrupting the source, not just the content. Taking down individual false stories doesn't stop disinformation campaigns — you have to go after the infrastructure producing them.
Malinformation is the least understood of the three categories. It involves information that is factually accurate but is shared in a way designed to cause harm.
Examples:

- Publishing someone's private messages or photos specifically to embarrass or harass them
- Leaking accurate but confidential records to damage a person or organization
- Resurfacing a true but long-resolved story, stripped of context, to ruin a reputation
Malinformation is particularly difficult to address because the information itself is true. You cannot fact-check it as false. The harm comes entirely from the intent and context of the sharing, not the accuracy of the content.
Why it matters: Platform content moderation struggles most with malinformation because standard fact-checking tools don't apply. Addressing it requires thinking about intent and harm, not just truth and falsity.
| Type | Accurate? | Intentional? | Primary driver |
|---|---|---|---|
| Misinformation | No | No | Ignorance, inattention |
| Disinformation | No | Yes | Deception, manipulation |
| Malinformation | Yes | Yes | Harm, harassment |
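One way to see how the two axes in the table interact is a short sketch in code. The names below are purely illustrative (they are not a standard taxonomy or any library's API), but the logic mirrors the table: whether the content is false, combined with whether it is shared with deceptive or harmful intent, determines the category.

```python
from enum import Enum


class InfoType(Enum):
    MISINFORMATION = "misinformation"    # false content shared without intent to deceive
    DISINFORMATION = "disinformation"    # false content shared with intent to deceive
    MALINFORMATION = "malinformation"    # true content shared with intent to harm
    ORDINARY = "ordinary information"    # true content shared without harmful intent


def classify(is_false: bool, harmful_intent: bool) -> InfoType:
    """Map the two axes from the table above onto the three categories."""
    if is_false:
        return InfoType.DISINFORMATION if harmful_intent else InfoType.MISINFORMATION
    return InfoType.MALINFORMATION if harmful_intent else InfoType.ORDINARY


# A doctored quote pushed by a coordinated campaign:
print(classify(is_false=True, harmful_intent=True).value)    # disinformation
# A leaked but accurate private document shared to harass someone:
print(classify(is_false=False, harmful_intent=True).value)   # malinformation
```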
Researchers sometimes use the term information disorder to describe environments where all three types exist simultaneously and reinforce each other. A disinformation campaign, for example, often seeds false stories that ordinary people then spread as misinformation — the original bad actor benefits from the organic spread without being directly connected to it.
Understanding this ecosystem matters because solutions aimed at one type often don't address the others. A fact-checking tool catches misinformation and disinformation but has no mechanism for malinformation. A platform policy against harassment addresses malinformation but doesn't stop coordinated disinformation campaigns.
AI fact-checking tools like TruthRadar are primarily designed to address misinformation and disinformation — the two categories where accuracy is the central issue. When you paste an article URL into TruthRadar, it checks the factual claims in that article against real-time sources and returns a verdict: TRUE, FALSE, MISLEADING, or UNVERIFIED.
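As a rough sketch of that workflow, the snippet below shows what a programmatic check might look like. The endpoint, request format, and response fields are hypothetical placeholders, not TruthRadar's documented API; the point is only the shape of the exchange: submit an article URL, get back a verdict for each factual claim found in it.

```python
import requests

# Hypothetical illustration only: the real API, endpoint, and response fields
# may differ. This sketches the workflow described above.
API_URL = "https://api.truthradar.ai/v1/check"  # assumed endpoint, not documented here


def check_article(article_url: str, api_key: str) -> list[dict]:
    """Submit an article URL and return per-claim verdicts
    (TRUE, FALSE, MISLEADING, or UNVERIFIED)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": article_url},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"claims": [{"claim": ..., "verdict": ..., "sources": [...]}]}
    return response.json()["claims"]


for claim in check_article("https://example.com/breaking-story", api_key="YOUR_KEY"):
    print(claim["verdict"], "-", claim["claim"])
```

In practice you would use the product's own interface or documented API; the sketch is only meant to show where an automated checker sits in the pipeline.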
Automated fact-checking of this kind works well for:

- Claims that can be checked against the public record, such as statistics, dates, quotes, and events
- Fabricated stories dressed up as news reporting
- Old or out-of-context stories recirculated as if they were new
It works less well for malinformation, where the facts themselves are not in dispute — only the intent and context of sharing them. That category requires human judgment, platform-level intervention, and in some cases legal recourse.
The honest position is that no single tool solves all three problems. AI fact-checking is one important layer of defense, particularly effective at scale against the misinformation and disinformation that account for most of the false content spreading online.
When people conflate these three terms, they reach for the wrong solutions.
Treating disinformation like misinformation means educating people who are already being deliberately lied to. The problem isn't that they didn't check; it's that they were systematically deceived.
Treating misinformation like disinformation means looking for malicious actors who don't exist. Most people spreading false information are not bad actors; they simply didn't verify.
Treating malinformation like the other two means trying to fact-check things that are factually true — which misses the point entirely.
Precision in language leads to precision in solutions. The next time you encounter a false or harmful story online, ask yourself: is this misinformation, disinformation, or malinformation? The answer tells you where the problem really lies — and what might actually fix it.
truthradar.ai · verified by AI · powered by Perplexity