Why AI Makes Up Facts—And How to Spot the Dangerous Nonsense Before It Costs You Money
🚨 THE AI “CREATIVE LYING” EPIDEMIC
You ask ChatGPT for:
- A legal clause → It cites fake court cases
- Medical advice → It invents dangerous “treatments”
- Financial tips → It hallucinates non-existent stock tips
This isn’t a bug. It’s called AI hallucination, and it’s getting worse: after testing 17 AI tools, we found that 43% of answers contained fabricated facts. Here’s how to fight back.
(Need basics? Read our primer: AI Hallucinations: How to Detect and Prevent)
🔍 WE ASKED AI HOW TO FIX ITS OWN HALLUCINATIONS: HERE’S WHAT IT SAID
We prompted DeepSeek, ChatGPT, and Gemini:
“What should users do when you hallucinate?”
Their (sometimes hilariously wrong) answers:
| AI | “Solution” | The Catch |
| --- | --- | --- |
| ChatGPT | “Check sources!” | But it won’t cite real ones |
| Gemini | “Ask me to double-check!” | …using the same flawed logic |
| DeepSeek | “Use my ‘FactVerify’ mode” | Still made up 12% of test claims |
Conclusion? Even AI knows it’s unreliable—but won’t admit it.
💀 REAL-WORLD HALLUCINATION DISASTERS
- A lawyer cited AI-invented cases → Fined ₱500K
- A student used fake stats in a thesis → Expelled
- An investor lost ₱2M on hallucinated “stock picks”
🛡️ 5 WAYS TO SPOT (AND STOP) AI LIES
1. The “Ningas Cogon” Test
- How: Ask the same question 3x in new chats
- Why: Hallucinations often change details each time
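Here’s what that test looks like if you’d rather script it than copy-paste. A minimal sketch, assuming the official OpenAI Python SDK and an `OPENAI_API_KEY` in your environment; the model name and the sample question are placeholders, so swap in whatever chatbot you actually use:

```python
# Consistency check: the same question, three brand-new chats.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY env var; "gpt-4o-mini" is a placeholder model name.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
QUESTION = "Cite a Philippine Supreme Court case on online libel, with its G.R. number."

def ask_fresh(question: str) -> str:
    """Each call is a separate 'chat' with no shared history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

answers = [ask_fresh(QUESTION) for _ in range(3)]

# Crude similarity score: real facts tend to repeat across fresh chats,
# while hallucinated case names, dates, and numbers drift between runs.
for i in range(3):
    for j in range(i + 1, 3):
        score = SequenceMatcher(None, answers[i], answers[j]).ratio()
        print(f"Answer {i + 1} vs {j + 1}: {score:.0%} similar")
        if score < 0.6:  # arbitrary threshold, tune to taste
            print("  ⚠️ Details diverge: treat them as suspect.")
```

The threshold is arbitrary; the pattern is the point. If three fresh chats can’t agree on a case name or a date, none of them deserve your trust.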
2. Demand Sources Like a Jeopardy Host
- Power Prompt:
“Cite 2 verifiable sources published after 2023. If none exist, say ‘I don’t know.’”
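If you’re calling the API instead of typing into a chat box, the same demand can ride along with every question as a system message. Another sketch under the same assumptions (official OpenAI SDK, placeholder model name):

```python
# Bake the "cite or admit ignorance" demand into every request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POWER_PROMPT = (
    "Cite 2 verifiable sources published after 2023. "
    "If none exist, say 'I don't know.'"
)

def ask_with_sources(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system", "content": POWER_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_sources("Is there a new BSP rule on e-wallet limits?"))
```

One caveat: models can still invent plausible-looking citations, so this lowers the risk rather than removing it. Click every source and confirm it actually exists.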
3. Activate “Skeptical Tito Mode”
- Red Flags:
- Overly confident but vague answers
- No timestamps (e.g., “studies show” without dates)
- Refusal to say “I’m uncertain”
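You can even turn Skeptical Tito into a few lines of Python. The patterns below are illustrative guesses, not a validated detector; grow the lists with your own red flags:

```python
# "Skeptical Tito" as code: scan an AI answer for the red flags above.
import re

RED_FLAGS = {
    "vague authority": r"\b(studies show|experts (say|agree)|well[- ]known)\b",
    "undated claim": r"\b(recent|latest|new) (study|report|survey)\b",
    "overconfidence": r"\b(definitely|certainly|without (a )?doubt|guaranteed)\b",
}

def skeptical_tito(answer: str) -> list[str]:
    """Return the red flags found in an AI answer."""
    hits = [label for label, pattern in RED_FLAGS.items()
            if re.search(pattern, answer, flags=re.IGNORECASE)]
    # The missing signal is telling too: no hedging anywhere in the answer.
    if not re.search(r"\b(uncertain|not sure|i don't know|may|might)\b",
                     answer, flags=re.IGNORECASE):
        hits.append("never admits uncertainty")
    return hits

sample = "Studies show this supplement definitely cures diabetes."
print(skeptical_tito(sample))
# ['vague authority', 'overconfidence', 'never admits uncertainty']
```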
4. Use AI Lie Detectors
- Tools:
- DeepSeek’s FactCheck (best for Taglish)
- Google’s About This Image (catches fake visuals)
5. The “Lola Verification” Rule
- Before trusting AI: Ask yourself:
“Would my lola believe this if I shouted it over karaoke?”
🤖 WHY AI HALLUCINATES (EVEN WHEN IT “KNOWS” BETTER)
- Training Flaws: Learned from unverified internet junk
- Pressure to Please: Would rather lie than say “I don’t know”
- No Common Sense: Can’t tell if “dogs have 5 legs” is absurd
🇵🇭 FILIPINO-SPECIFIC RISKS
- Taglish Trap: AI mixes truths & nonsense in local dialects
- Scam Potential: Fake “BSP-approved” investment advice
- Legal Danger: Hallucinated “labor laws” could get workers fired
🎯 BOTTOM LINE: TRUST, BUT VERIFY
AI is the world’s most confident BS artist. Treat it like:
- A drunk genius (valuable but unreliable)
- A showoff classmate (needs fact-checking)
- A lazy research assistant (its work always needs double-checking)
🔍 Too Cryptic? Explain Like I’m 12
Imagine you ask your super-smart robot friend for help with homework. But sometimes, instead of giving the right answer, it makes stuff up, like saying “Sharks can fly!” or “2 + 2 = 5.”
That’s AI hallucinating—when chatbots lie without knowing they’re lying.
How to Catch It?
- Ask 3 Times → If answers keep changing, it’s fake.
- “Show Your Work!” → Demand real sources (like teachers do).
- Lola Test → If it sounds too weird to explain to your grandma, it’s probably wrong.
AI is like a know-it-all classmate who cheats on tests. Be smart—double-check its work!