Why AI Makes Up Facts: How to Spot the Dangerous Nonsense Before It Costs You Money
🚨 THE AI “CREATIVE LYING” EPIDEMIC
You ask ChatGPT for:
- A legal clause → It cites fake court cases
- Medical advice → It invents dangerous “treatments”
- Financial tips → It hallucinates non-existent stock tips
This isn’t a bug. It’s called AI hallucination, and it’s getting worse. After testing 17 AI tools, we found 43% of answers contain fabricated facts. Here’s how to fight back.
(Need basics? Read our primer: AI Hallucinations: How to Detect and Prevent)
🔍 WE ASKED AI HOW TO FIX ITS OWN HALLUCINATIONS: HERE’S WHAT IT SAID
We prompted DeepSeek, ChatGPT, and Gemini:
“What should users do when you hallucinate?”
Their (sometimes hilariously wrong) answers:
| AI | “Solution” | The Catch |
|---|---|---|
| ChatGPT | “Check sources!” | But it won’t cite real ones |
| Gemini | “Ask me to double-check!” | …using the same flawed logic |
| DeepSeek | “Use my ‘FactVerify’ mode” | Still made up 12% of test claims |
Conclusion? Even AI knows it’s unreliable, but won’t admit it.
💥 REAL-WORLD HALLUCINATION DISASTERS
- A lawyer cited AI-invented cases → Fined ₱500K
- A student used fake stats in a thesis → Expelled
- An investor lost ₱2M on hallucinated “stock picks”
🛡️ 5 WAYS TO SPOT (AND STOP) AI LIES
1. The “Ningas Cogon” Test
- How: Ask the same question 3x in new chats
- Why: Hallucinations often change details each time
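The repeat-the-question test can be rough-checked with a short script. This is a minimal sketch: `consistency_score` and the sample answers are illustrative names, and you paste in the three answers you collected yourself rather than calling any particular chatbot API.

```python
from difflib import SequenceMatcher

def consistency_score(answers):
    """Average pairwise similarity of repeated answers to the same question.

    Hallucinated details tend to drift between runs, so a low score is a
    warning sign. A high score means consistency, not proof of truth.
    """
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    ratios = [SequenceMatcher(None, a.lower(), b.lower()).ratio() for a, b in pairs]
    return sum(ratios) / len(ratios)

# Paste in the answers from three fresh chats (sample data for illustration).
stable = [
    "Jose Rizal wrote Noli Me Tangere in 1887.",
    "Jose Rizal wrote Noli Me Tangere in 1887 in Berlin.",
    "Jose Rizal wrote Noli Me Tangere, published in 1887.",
]
drifting = [
    "Jose Rizal wrote Noli Me Tangere in 1887.",
    "Noli Me Tangere was written by Andres Bonifacio in 1890.",
    "Marcelo del Pilar published it in 1885.",
]

print(consistency_score(stable))    # higher: details agree across runs
print(consistency_score(drifting))  # lower: names and dates keep changing
```

A score that sinks when you re-ask is your cue to stop trusting the answer and go verify it manually.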
2. Demand Sources Like a Jeopardy Host
- Power Prompt:
“Cite 2 verifiable sources published after 2023. If none exist, say ‘I don’t know.’”
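If you use a power prompt like this a lot, it is worth keeping as a reusable template. A minimal sketch; `sourced_prompt` is a made-up helper name, not any library’s API.

```python
def sourced_prompt(question, min_year=2023, n_sources=2):
    """Wrap a question so the model must cite sources or admit ignorance."""
    return (
        f"{question}\n\n"
        f"Cite {n_sources} verifiable sources published after {min_year}, "
        "with author, title, and date. "
        "If none exist, say: I don't know."
    )

print(sourced_prompt("Is this investment platform BSP-registered?"))
```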
3. Activate “Skeptical Tito Mode”
- Red Flags:
- Overly confident but vague answers
- No timestamps (e.g., “studies show” without dates)
- Refusal to say “I’m uncertain”
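These red flags can be partly automated. A minimal heuristic sketch, not a real detector: the phrase lists and the `red_flags` helper are illustrative, and a clean result does not mean the answer is true.

```python
import re

# Illustrative phrase lists: authoritative-sounding claims and honest hedges.
VAGUE_CLAIMS = [r"studies show", r"experts agree", r"research proves"]
HEDGES = [r"i'?m not sure", r"i don'?t know", r"uncertain"]

def red_flags(answer):
    """Return warnings for an AI answer that trips the red flags above."""
    text = answer.lower()
    flags = []
    for pattern in VAGUE_CLAIMS:
        # A sweeping claim with no year anywhere in the answer is suspect.
        if re.search(pattern, text) and not re.search(r"\b(19|20)\d{2}\b", text):
            flags.append(f"vague claim without a date: '{pattern}'")
    # A long, confident answer that never hedges is classic Skeptical Tito bait.
    if len(text.split()) > 50 and not any(re.search(h, text) for h in HEDGES):
        flags.append("long, confident answer with no expression of uncertainty")
    return flags

print(red_flags("Studies show this supplement cures diabetes completely."))
```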
4. Use AI Lie Detectors
- Tools:
- DeepSeek’s FactCheck (best for Taglish)
- Googleβs About This Image (catches fake visuals)
5. The “Lola Verification” Rule
- Before trusting AI: Ask yourself:
“Would my lola believe this if I shouted it over karaoke?”
🤔 WHY AI HALLUCINATES (EVEN WHEN IT “KNOWS” BETTER)
- Training Flaws: Learned from unverified internet junk
- Pressure to Please: Would rather lie than say “I don’t know”
- No Common Sense: Can’t tell if “dogs have 5 legs” is absurd
🇵🇭 FILIPINO-SPECIFIC RISKS
- Taglish Trap: AI mixes truths & nonsense in local dialects
- Scam Potential: Fake “BSP-approved” investment advice
- Legal Danger: Hallucinated “labor laws” could get workers fired
🎯 BOTTOM LINE: TRUST, BUT VERIFY
AI is the world’s most confident BS artist. Treat it like:
- A drunk genius (valuable but unreliable)
- A show-off classmate (needs fact-checking)
- A lazy research assistant (whose work you always double-check)
🧒 Too Cryptic? Explain Like I’m 12
Imagine you ask your super-smart robot friend for help with homework. But sometimes, instead of giving the right answer, it makes stuff up, like saying “Sharks can fly!” or “2+2=5.”
That’s AI hallucinating: when chatbots lie without knowing they’re lying.
How to Catch It?
- Ask 3 Times → If the answers keep changing, it’s fake.
- “Show Your Work!” → Demand real sources (like teachers do).
- Lola Test → If it sounds too weird to explain to your grandma, it’s probably wrong.
AI is like a know-it-all classmate who cheats on tests. Be smart: double-check its work!
