
AI Hallucination Checker

Flag likely hallucinations in AI-generated text


Free AI hallucination checker — flag suspicious claims

Scan AI-generated text for hallucination patterns. The checker identifies fake citations, suspicious URLs, oddly specific numbers, and overconfidence markers. Detection is rule-based — no AI is doing the detecting. Useful for reviewers who need to fact-check ChatGPT / Claude output quickly.

How to use this tool

  1. Paste AI output

     The text ChatGPT / Claude / Gemini gave you that you want to sanity-check.

  2. Review flagged items

     Each flag includes: category, severity, what to verify, and why it's suspicious.

  3. Manually verify

     Each flag is a CANDIDATE for checking — not proof of hallucination. Look up citations and check quoted numbers against sources.
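The flag described in step 2 can be pictured as a plain object. This shape is a hypothetical illustration based on the fields listed above, not the tool's actual output format:

```javascript
// Hypothetical shape of one flagged item. Field names are assumptions
// derived from the fields listed in step 2, not the checker's real schema.
const exampleFlag = {
  category: "citation",  // which pattern family fired
  severity: "high",      // e.g. high | medium | low
  excerpt: "according to a 2023 Harvard study",
  whatToVerify: "Find the study on Google Scholar; confirm the year and institution.",
  whySuspicious: "Named-institution study cited with no link or author list.",
};
```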

How to verify flags

  • Citations: Google the exact title plus the author string (e.g. "Smith et al.") — real papers appear on Google Scholar
  • URLs: Click it — does it load? Is the domain real? (.edu, .gov, major orgs)
  • Numbers: Search the statistic verbatim — real stats have multiple independent sources
  • Studies: Check PubMed, Google Scholar, and university websites
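Part of the URL check above can be automated before you even click. Here is a minimal sketch using two illustrative heuristics (a trusted-TLD allowlist and a "too-clean slug" pattern common in fabricated links); these are assumptions for demonstration, not the tool's actual rules:

```javascript
// Quick URL plausibility check (illustrative heuristics only).
function urlLooksPlausible(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not even a well-formed URL
  }
  if (url.protocol !== "https:" && url.protocol !== "http:") return false;
  // Institutional TLDs are a weak positive signal (assumption: .edu / .gov).
  const trustedTld = /\.(edu|gov)$/.test(url.hostname);
  // Fabricated URLs often use a too-clean slug like /article-about-topic-here
  // with no id, date, or query string.
  const genericSlug = /^\/[a-z]+(-[a-z]+){2,}\/?$/.test(url.pathname);
  return trustedTld || !genericSlug;
}
```

A passing result only means the URL is not obviously fake; you still need to load it and check the content.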

Frequently Asked Questions

What is an AI hallucination?
When an LLM generates plausible-sounding but false or fabricated information. Common types: fake citations, made-up URLs, invented quotes, wrong statistics, misattributed facts.
Can this detect 100% of hallucinations?
No. Real hallucinations can sound perfectly normal. This tool flags common patterns (fake citations, oddly specific numbers, suspicious URLs) that are high-probability hallucinations. Always verify important claims yourself.
What does this detect?
(1) Citation patterns — "according to a 2023 Harvard study" without link. (2) Suspicious URLs — short, clearly pattern-matched fake URLs. (3) Oddly specific numbers — precise percentages / statistics without sources. (4) Fabrication markers — "Smith et al., 2019" style fake references. (5) Overconfidence markers — "definitely", "certainly", "always" on disputed claims.
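The five categories above can be sketched as a table of regex rules plus a scan loop. The regexes below are illustrative guesses at each category, not the checker's real rules, which are presumably more elaborate:

```javascript
// Illustrative rule table: each entry pairs a category with a regex.
// These patterns are assumptions for demonstration purposes.
const PATTERNS = [
  {
    category: "citation",
    severity: "high",
    // "according to a 2023 Harvard study" with no link
    re: /according to a \d{4} [A-Z][a-z]+ (study|report|survey)/g,
  },
  {
    category: "fabricated-reference",
    severity: "high",
    // "Smith et al., 2019" style references
    re: /\b[A-Z][a-z]+ et al\.,? \d{4}\b/g,
  },
  {
    category: "oddly-specific-number",
    severity: "medium",
    // precise decimal percentages like "73.4%"
    re: /\b\d{1,2}\.\d%/g,
  },
  {
    category: "overconfidence",
    severity: "low",
    re: /\b(definitely|certainly|always|undoubtedly)\b/gi,
  },
];

// Run every rule over the text and collect matches as flags.
function scan(text) {
  const flags = [];
  for (const { category, severity, re } of PATTERNS) {
    for (const match of text.matchAll(re)) {
      flags.push({ category, severity, excerpt: match[0] });
    }
  }
  return flags;
}
```

Because the rules are plain regexes, the same input always yields the same flags, which is the determinism the next answer relies on.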
Why not use AI to detect AI hallucinations?
The AI doing the detection can hallucinate too. Rule-based heuristics are deterministic. Our tool flags candidates for YOU to verify — it doesn't make its own claims.
Is my text sent anywhere?
No. 100% client-side. Safe for sensitive AI outputs you want to sanity-check.

100% Privacy. This tool runs entirely in your browser. Your data is never uploaded to any server.