How we review a claim.
Every fact-check published by TruthRadar.ai follows the same structured process. This page explains it in full — how sources are chosen, how confidence is calculated, and what each verdict label actually means.
01 — Source Selection
Not all sources are equal.
TruthRadar uses Perplexity's Sonar Pro API to search the live web at the moment a claim is submitted. The system does not rely on a frozen training dataset — it retrieves sources in real time and evaluates them against a four-tier hierarchy.
Tier 1: Primary scientific & government sources
Peer-reviewed journals (PubMed, Nature, Science), official government databases (CDC, NIH, WHO, EU agencies), and primary legal or regulatory documents. These carry the most weight in the final verdict.
Tier 2: Established news organisations
Wire services (Reuters, AP, AFP) and established papers of record with clear editorial standards and correction policies. Used for breaking events and claims that lack primary documentation.
Tier 3: Expert commentary & institutional consensus
University research summaries, think-tank reports, and statements from recognised professional associations. Weighted below primary sources, but useful for contextualising complex claims.
Tier 4: Cross-corroborating secondary coverage
Multiple independent secondary sources that consistently report the same finding. No single secondary source is treated as authoritative — convergence across many is required.
Sources used in a fact-check are always disclosed on the result page so you can read the underlying material yourself.
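The four-tier hierarchy above can be sketched in code. TruthRadar's internal representation is not public; the tier names come from the text, but the data shape, the function names, and the convergence threshold are illustrative assumptions.

```python
# The four source tiers described above, strongest first.
SOURCE_TIERS = {
    1: "primary scientific & government sources",
    2: "established news organisations",
    3: "expert commentary & institutional consensus",
    4: "cross-corroborating secondary coverage",
}

def strongest_tier(tiers: list[int]) -> int:
    """Lowest tier number = strongest evidence class present."""
    return min(tiers)

def tier4_counts(tier4_sources: int, min_convergence: int = 3) -> bool:
    """No single Tier-4 source is authoritative: secondary coverage only
    counts when several independent sources converge on the same finding.
    The threshold of 3 is an assumed value, not TruthRadar's."""
    return tier4_sources >= min_convergence
```

The key design point is that the tiers are ordered: a lower number always outranks a higher one when the final verdict is weighed.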
02 — Confidence Score
A number that reflects evidential weight.
Every fact-check includes a confidence score from 0 to 100. It is not a measure of how certain we are that the verdict is correct in some absolute sense — it is a structured estimate of how strongly the available evidence supports the verdict, given the quality and consistency of sources found. Six factors contribute to the score:
Source quality
Tier 1 and Tier 2 sources increase confidence; reliance on Tier 4 sources alone caps confidence at a lower ceiling.
Source count & independence
Multiple independent corroborating sources raise confidence. Sources that all trace back to a single original report are counted as one.
Evidence consistency
When every credible source agrees, confidence rises. Conflicting expert opinion or genuinely contested science lowers it.
Claim specificity
Narrow, specific claims (a number, a date, a quote) can often be verified with high confidence. Broad, complex claims typically attract a lower ceiling.
Recency of evidence
Claims about current events are checked against the most recent available sources. Where evidence is actively evolving, confidence is adjusted downward.
AI reasoning uncertainty
The model surfaces its own uncertainty when the evidence picture is ambiguous. An explicit uncertainty signal lowers the final score.
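The six factors above can be sketched as a single scoring function. This is a minimal illustration, not TruthRadar's actual formula: the weights, the 60-point Tier-4-only ceiling, the 70-point broad-claim ceiling, and the multipliers for evolving evidence and model uncertainty are all assumed values.

```python
from dataclasses import dataclass

TIER_WEIGHT = {1: 1.0, 2: 0.8, 3: 0.6, 4: 0.4}  # assumed weights
TIER4_ONLY_CEILING = 60  # assumed cap when only Tier-4 sources are found

@dataclass(frozen=True)
class Source:
    tier: int     # 1 (strongest) to 4
    origin: str   # the original report this source traces back to
    agrees: bool  # does it support the leading reading of the claim?

def confidence_score(sources: list[Source],
                     claim_is_specific: bool = True,
                     evidence_evolving: bool = False,
                     model_uncertain: bool = False) -> int:
    # Factor 2: source count & independence -- sources that trace back to
    # the same origin count once; keep the strongest tier per origin.
    best: dict[str, Source] = {}
    for s in sources:
        if s.origin not in best or s.tier < best[s.origin].tier:
            best[s.origin] = s
    independent = list(best.values())
    if not independent:
        return 0
    # Factor 1: source quality (mean tier weight).
    quality = sum(TIER_WEIGHT[s.tier] for s in independent) / len(independent)
    # Factor 3: evidence consistency (fraction of sources that agree).
    consistency = sum(s.agrees for s in independent) / len(independent)
    score = 100 * quality * consistency
    # Factor 4: claim specificity -- broad claims get a lower ceiling.
    if not claim_is_specific:
        score = min(score, 70)
    # Factor 5: recency -- actively evolving evidence is adjusted downward.
    if evidence_evolving:
        score *= 0.85
    # Factor 6: AI reasoning uncertainty -- an explicit signal lowers it.
    if model_uncertain:
        score *= 0.8
    # Reliance on Tier 4 alone caps confidence at a lower ceiling.
    if all(s.tier == 4 for s in independent):
        score = min(score, TIER4_ONLY_CEILING)
    return round(score)
```

For example, one Tier-1 and one independent Tier-2 source in full agreement would land at 90 under these assumed weights, while two agreeing Tier-4 sources would be capped well below that.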
03 — Verdict Labels
Four verdicts. Deliberately simple.
Once sources have been evaluated and a confidence score derived, the system assigns one of four verdicts. We keep the scale simple — complexity in rating systems tends to obscure more than it clarifies.
TRUE
The core claim is accurate and supported by credible primary sources. Minor irrelevant inaccuracies in framing do not disqualify a TRUE rating so long as the central assertion is correct.
FALSE
The claim directly contradicts verifiable evidence or established fact. FALSE does not imply deliberate deception — it means the assertion is factually wrong regardless of intent.
MISLEADING
The claim contains accurate elements but creates a false impression through omission, selective framing, missing context, or the combination of individually true statements that add up to something untrue.
UNVERIFIED
Credible evidence is insufficient to confirm or deny the claim. This is an honest answer, not a failure. It typically applies to emerging stories, highly contested scientific questions, or claims for which no public evidence exists.
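The relationship between score, evidence direction, and verdict can be sketched as a small decision function. The 40-point UNVERIFIED threshold and the explicit misleading-framing flag are assumptions for illustration; the real assignment logic is not published.

```python
def assign_verdict(score: int, evidence_supports: bool,
                   misleading_framing: bool = False) -> str:
    """Map confidence score and evidence direction to one of the four
    verdict labels. Threshold and flag are illustrative assumptions."""
    if score < 40:
        # Credible evidence is insufficient to confirm or deny.
        return "UNVERIFIED"
    if misleading_framing:
        # Accurate elements that nonetheless create a false impression.
        return "MISLEADING"
    return "TRUE" if evidence_supports else "FALSE"
```

Note that UNVERIFIED is checked first: when the evidence is too thin, the direction it points in does not matter.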
04 — The Review Pipeline
What happens between submission and verdict.
Claim extraction
The system reads the submitted URL or text and identifies the primary falsifiable claim. If multiple claims are present, the most prominent is selected.
Live source retrieval
Sonar Pro searches the web in real time, prioritising primary and Tier 1 sources. The search is guided by the specific claim — not just the topic.
Evidence evaluation
Retrieved sources are assessed for tier, independence, recency, and consistency. Conflicting evidence is noted and factored into the confidence score.
Verdict & score assignment
The model applies the six confidence factors and assigns a score. The score, combined with the direction of evidence, determines the verdict label.
Summary generation
A plain-English explanation is written summarising the claim, the key evidence, and why the verdict was reached. Sources are cited inline.
Publication
The fact-check is published as a permanent, searchable page with all sources disclosed. Users can submit feedback if they believe the verdict is incorrect.
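The six pipeline steps above can be sketched end to end. Every function here is a stub standing in for the real stage (the real retrieval step calls Perplexity's Sonar Pro API); all names, data shapes, and thresholds are hypothetical.

```python
def extract_primary_claim(text: str) -> str:
    # 1. Claim extraction: identify the primary falsifiable claim (stubbed).
    return text.strip()

def retrieve_live_sources(claim: str) -> list[dict]:
    # 2. Live source retrieval, prioritising Tier 1 sources (stubbed --
    #    the real system searches the web via Sonar Pro).
    return [{"tier": 1, "agrees": True}, {"tier": 2, "agrees": True}]

def evaluate(sources: list[dict]) -> dict:
    # 3. Evidence evaluation: tier, independence, recency, consistency.
    return {"n": len(sources), "agree": sum(s["agrees"] for s in sources)}

def score_and_label(ev: dict) -> tuple[int, str]:
    # 4. Verdict & score assignment (assumed 40-point UNVERIFIED floor).
    score = round(100 * ev["agree"] / ev["n"])
    verdict = "TRUE" if ev["agree"] > ev["n"] / 2 else "FALSE"
    if score < 40:
        verdict = "UNVERIFIED"
    return score, verdict

def review(claim_input: str) -> dict:
    # 5-6. Summary generation and publication, collapsed here into the
    #      returned record; the real system renders a permanent page.
    claim = extract_primary_claim(claim_input)
    sources = retrieve_live_sources(claim)
    score, verdict = score_and_label(evaluate(sources))
    return {"claim": claim, "verdict": verdict,
            "confidence": score, "sources": sources}
```

The point of the structure is that each stage consumes only the previous stage's output, so every published verdict can be traced back through evaluation and retrieval to the extracted claim.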
05 — Known Limitations
Where the process falls short.
We are transparent about the limits of AI-assisted fact-checking. The confidence score exists precisely to communicate uncertainty — a low score is a signal to read the sources yourself rather than taking the verdict at face value.
AI reasoning is not human editorial judgment
The system applies structured reasoning to available evidence. It does not replace the editorial experience of a trained investigative journalist on highly nuanced political or legal claims.
Real-time sources can change
Because we use live web search, a fact-check reflects sources available at the moment of analysis. Later corrections or updates to source material are not automatically reflected in older fact-check pages.
Paywalled and private sources
Academic papers and reports behind paywalls may not be accessible during analysis. The confidence score accounts for the possibility that relevant evidence exists that was not reachable.
Language and geography
Primary-source evidence in languages other than English may be underweighted. Claims that originate in non-English media environments may receive a lower-confidence UNVERIFIED verdict when the underlying evidence exists but is inaccessible.
Questions or corrections
Think we got something wrong?
Every fact-check page has a feedback button. Use it — corrections are how this process improves.