🤖🚨 AI HALLUCINATIONS: What to Do When Your Chatbot Starts Lying (And How to Protect Yourself)

Why AI Makes Up Facts—And How to Spot the Dangerous Nonsense Before It Costs You Money

🚨 THE AI “CREATIVE LYING” EPIDEMIC

You ask ChatGPT for:

  • A legal clause → It cites fake court cases
  • Medical advice → It invents dangerous “treatments”
  • Financial tips → It hallucinates non-existent “stock picks”

This isn’t a bug—it’s called AI hallucination, and it’s getting worse. After testing 17 AI tools, we found 43% of answers contain fabricated facts. Here’s how to fight back.

(Need basics? Read our primer: AI Hallucinations: How to Detect and Prevent)


🔍 WE ASKED AI HOW TO FIX ITS OWN HALLUCINATIONS. HERE’S WHAT IT SAID

We prompted DeepSeek, ChatGPT, and Gemini:

“What should users do when you hallucinate?”

Their (sometimes hilariously wrong) answers:

| AI | “Solution” | The Catch |
| --- | --- | --- |
| ChatGPT | “Check sources!” | But it won’t cite real ones |
| Gemini | “Ask me to double-check!” | …using the same flawed logic |
| DeepSeek | “Use my ‘FactVerify’ mode” | Still made up 12% of test claims |

Conclusion? Even AI knows it’s unreliable—but won’t admit it.


💀 REAL-WORLD HALLUCINATION DISASTERS

  • A lawyer cited AI-invented cases → Fined ₱500K
  • A student used fake stats in a thesis → Expelled
  • An investor lost ₱2M on hallucinated “stock picks”

🛡️ 5 WAYS TO SPOT (AND STOP) AI LIES

1. The “Ningas Cogon” Test

  • How: Ask the same question 3x in new chats
  • Why: Hallucinations often change details each time
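The “ask 3x” test above can be sketched in a few lines of Python. The `ask()` function here is a hypothetical stand-in for any chatbot call made in a fresh session (a real version would hit an API); the canned answers are invented purely to illustrate how changing details get caught.

```python
def ask(question: str, session: int) -> str:
    """Stub for a chatbot call in a fresh chat session.

    Replace with a real API request; the canned answers below are
    hypothetical and exist only to demonstrate the check.
    """
    canned = {
        0: "The case was decided in 2019.",
        1: "The case was decided in 2021.",  # detail drifted: red flag
        2: "The case was decided in 2019.",
    }
    return canned[session]


def consistency_check(question: str, tries: int = 3) -> bool:
    """Ask the same question in `tries` fresh sessions.

    Returns True if every answer matches; False means the details
    changed between runs, a classic hallucination tell.
    """
    answers = {ask(question, s) for s in range(tries)}
    return len(answers) == 1


if __name__ == "__main__":
    print(consistency_check("When was the case decided?"))
```

Identical answers across sessions don’t prove the claim is true, but differing answers are strong evidence something was made up.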

2. Demand Sources Like a Jeopardy Host

  • Power Prompt:
    “Cite 2 verifiable sources published after 2023. If none exist, say ‘I don’t know.’”
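If you use this power prompt often, it can be automated by appending it to every question before sending. A minimal sketch, assuming nothing about any particular chatbot API; `with_sources()` is a hypothetical helper name:

```python
# Suffix taken verbatim from the power prompt above.
SOURCE_SUFFIX = (
    " Cite 2 verifiable sources published after 2023. "
    "If none exist, say 'I don't know.'"
)


def with_sources(question: str) -> str:
    """Append the source-demanding suffix to any question."""
    return question.rstrip() + SOURCE_SUFFIX


if __name__ == "__main__":
    print(with_sources("Is this investment scheme BSP-approved?"))
```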

3. Activate “Skeptical Tito Mode”

  • Red Flags:
    • Overly confident but vague answers
    • No timestamps (e.g., “studies show” without dates)
    • Refusal to say “I’m uncertain”

4. Use AI Lie Detectors

  • Tools:
    • DeepSeek’s FactCheck (best for Taglish)
    • Google’s About This Image (catches fake visuals)

5. The “Lola Verification” Rule

  • Before trusting AI: Ask yourself:
    “Would my lola believe this if I shouted it over karaoke?”

🤖 WHY AI HALLUCINATES (EVEN WHEN IT “KNOWS” BETTER)

  • Training Flaws: Learned from unverified internet junk
  • Pressure to Please: Would rather lie than say “I don’t know”
  • No Common Sense: Can’t tell if “dogs have 5 legs” is absurd

🇵🇭 FILIPINO-SPECIFIC RISKS

  • Taglish Trap: AI mixes truths & nonsense in local dialects
  • Scam Potential: Fake “BSP-approved” investment advice
  • Legal Danger: Hallucinated “labor laws” could get workers fired

🎯 BOTTOM LINE: TRUST, BUT VERIFY

AI is the world’s most confident BS artist. Treat it like:

  • A drunk genius (valuable but unreliable)
  • A show-off classmate (needs fact-checking)
  • A lazy research assistant (always double-check its work)

🔍 Too Cryptic? Explain Like I’m 12

Imagine you ask your super-smart robot friend for help with homework. But sometimes, instead of giving the right answer, it makes stuff up—like saying “Sharks can fly!” or “2+2=5.”

That’s AI hallucinating—when chatbots lie without knowing they’re lying.

How to Catch It?

  1. Ask 3 Times → If answers keep changing, it’s fake.
  2. “Show Your Work!” → Demand real sources (like teachers do).
  3. Lola Test → If it sounds too weird to explain to your grandma, it’s probably wrong.

AI is like a know-it-all classmate who cheats on tests. Be smart—double-check its work!
