Why AI Makes Up Facts (and How to Spot the Dangerous Nonsense Before It Costs You Money)
THE AI “CREATIVE LYING” EPIDEMIC
You ask ChatGPT for:
- A legal clause → It cites fake court cases
- Medical advice → It invents dangerous “treatments”
- Financial tips → It hallucinates non-existent stock tips
This isn’t a bug. It’s called AI hallucination, and it’s getting worse. After testing 17 AI tools, we found that 43% of answers contained fabricated facts. Here’s how to fight back.
(Need basics? Read our primer: AI Hallucinations: How to Detect and Prevent)
WE ASKED AI HOW TO FIX ITS OWN HALLUCINATIONS. HERE’S WHAT IT SAID
We prompted DeepSeek, ChatGPT, and Gemini:
“What should users do when you hallucinate?”
Their (sometimes hilariously wrong) answers:
| AI | “Solution” | The Catch |
|---|---|---|
| ChatGPT | “Check sources!” | But it won’t cite real ones |
| Gemini | “Ask me to double-check!” | …using the same flawed logic |
| DeepSeek | “Use my ‘FactVerify’ mode” | Still made up 12% of test claims |
Conclusion? Even AI knows it’s unreliable, but it won’t admit it.
REAL-WORLD HALLUCINATION DISASTERS
- A lawyer cited AI-invented cases → Fined ₱500K
- A student used fake stats in a thesis → Expelled
- An investor lost ₱2M on hallucinated “stock picks”
5 WAYS TO SPOT (AND STOP) AI LIES
1. The “Ningas Cogon” Test
- How: Ask the same question 3x in new chats
- Why: Hallucinations often change details each time
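The repeat-and-compare test above can be automated with nothing but the standard library. A minimal sketch, assuming you have already pasted the same question into three fresh chats and saved the answers (the example answers below are invented):

```python
from difflib import SequenceMatcher

def consistency_score(answers):
    """Return the lowest pairwise text similarity (0-1) among repeated answers.
    Hallucinated details tend to drift between runs, dragging this score down."""
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            ratio = SequenceMatcher(None, answers[i].lower(), answers[j].lower()).ratio()
            scores.append(ratio)
    return min(scores) if scores else 1.0

# Same question, three fresh chats (invented examples: note the drifting details)
answers = [
    "The case was decided in 2019 by Judge Santos.",
    "The case was decided in 2021 by Judge Reyes.",
    "The case was decided in 2019 by Judge Santos.",
]
if consistency_score(answers) < 0.9:
    print("Details drift between runs; treat the answer as suspect.")
```

The 0.9 threshold is an arbitrary starting point; real facts reproduce almost verbatim, while fabricated names and dates rarely do.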
2. Demand Sources Like a Jeopardy Host
- Power Prompt:
“Cite 2 verifiable sources published after 2023. If none exist, say ‘I don’t know.’”
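If you ask questions through a script rather than the web UI, the power prompt can be wrapped in a tiny helper so every question carries it automatically (a sketch; the function name is ours, and the output is meant to be pasted into whatever chat tool you already use):

```python
def with_source_demand(question: str) -> str:
    """Append the source-demanding instructions to any question.
    Hypothetical helper: forces the model to cite or fold."""
    return (
        f"{question}\n\n"
        "Cite 2 verifiable sources published after 2023, with titles and dates. "
        "If none exist, say 'I don't know.'"
    )

print(with_source_demand("Is this investment BSP-registered?"))
```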
3. Activate “Skeptical Tito Mode”
- Red Flags:
- Overly confident but vague answers
- No timestamps (e.g., “studies show” without dates)
- Refusal to say “I’m uncertain”
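The red flags above are easy to scan for mechanically. A rough heuristic sketch (the phrase lists are illustrative assumptions, not an exhaustive detector):

```python
import re

# Illustrative phrase lists (assumptions, not a complete detector)
CONFIDENT_VAGUE = ("studies show", "experts agree", "research proves", "it is well known")
UNCERTAINTY = ("i'm uncertain", "i don't know", "i may be wrong", "not sure")

def red_flags(answer: str) -> list:
    """Return a list of red-flag descriptions found in an AI answer."""
    text = answer.lower()
    flags = []
    # Confident-but-vague authority with no year mentioned anywhere
    if any(p in text for p in CONFIDENT_VAGUE) and not re.search(r"\b(19|20)\d{2}\b", text):
        flags.append("vague authority with no dates")
    # Never hedges at all
    if not any(h in text for h in UNCERTAINTY):
        flags.append("no admission of uncertainty")
    return flags

print(red_flags("Studies show this herb cures dengue."))
```

An answer that trips both checks is exactly the “overly confident but vague” pattern described above and deserves a source demand before you act on it.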
4. Use AI Lie Detectors
- Tools:
- DeepSeek’s FactCheck (best for Taglish)
- Google’s About This Image (catches fake visuals)
5. The “Lola Verification” Rule
- Before trusting AI, ask yourself:
“Would my lola believe this if I shouted it over karaoke?”
WHY AI HALLUCINATES (EVEN WHEN IT “KNOWS” BETTER)
- Training Flaws: Learned from unverified internet junk
- Pressure to Please: Would rather lie than say “I don’t know”
- No Common Sense: Can’t tell that “dogs have 5 legs” is absurd
FILIPINO-SPECIFIC RISKS
- Taglish Trap: AI mixes truths & nonsense in local dialects
- Scam Potential: Fake “BSP-approved” investment advice
- Legal Danger: Hallucinated “labor laws” could get workers fired
BOTTOM LINE: TRUST, BUT VERIFY
AI is the world’s most confident BS artist. Treat it like:
- A drunk genius (valuable but unreliable)
- A showoff classmate (needs fact-checking)
- A lazy research assistant (always double-check its work)
Too Cryptic? Explain Like I’m 12
Imagine you ask your super-smart robot friend for help with homework. But sometimes, instead of giving the right answer, it makes stuff up, like saying “Sharks can fly!” or “2+2=5.”
That’s AI hallucinating: when chatbots lie without knowing they’re lying.
How to Catch It?
- Ask 3 Times → If answers keep changing, it’s fake.
- “Show Your Work!” → Demand real sources (like teachers do).
- Lola Test → If it sounds too weird to explain to your grandma, it’s probably wrong.
AI is like a know-it-all classmate who cheats on tests. Be smartādouble-check its work!
