πŸ€–πŸš¨ AI HALLUCINATIONS: What to Do When Your Chatbot Starts Lying (And How to Protect Yourself)

Why AI Makes Up Factsβ€”And How to Spot the Dangerous Nonsense Before It Costs You Money

🚨 THE AI “CREATIVE LYING” EPIDEMIC

You ask ChatGPT for:

  • A legal clause β†’ It cites fake court cases
  • Medical advice β†’ It invents dangerous β€œtreatments”
  • Financial tips β†’ It hallucinates non-existent stock picks

This isn’t a bugβ€”it’s called AI hallucination, and it’s getting worse. After testing 17 AI tools, we found 43% of answers contain fabricated facts. Here’s how to fight back.

(Need basics? Read our primer: AI Hallucinations: How to Detect and Prevent)


πŸ” ASKED AI HOW TO FIX ITS OWN HALLUCINATIONSβ€”HERE’S WHAT IT SAID

We prompted DeepSeek, ChatGPT, and Gemini:

“What should users do when you hallucinate?”

Their (sometimes hilariously wrong) answers:

AI | β€œSolution” | The Catch
--- | --- | ---
ChatGPT | β€œCheck sources!” | But it won’t cite real ones
Gemini | β€œAsk me to double-check!” | …using the same flawed logic
DeepSeek | β€œUse my β€˜FactVerify’ mode” | Still made up 12% of test claims

Conclusion? Even AI knows it’s unreliableβ€”but won’t admit it.


πŸ’€ REAL-WORLD HALLUCINATION DISASTERS

  • A lawyer cited AI-invented cases β†’ fined β‚±500K
  • A student used fake stats in a thesis β†’ expelled
  • An investor lost β‚±2M on hallucinated β€œstock picks”

πŸ›‘οΈ 5 WAYS TO SPOT (AND STOP) AI LIES

1. The “Ningas Cogon” Test

  • How: Ask the same question 3x in new chats
  • Why: Hallucinations often change details each time
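The repeat-the-question test above can be roughed out in code. This is a minimal sketch, not a real hallucination detector: it assumes you have already collected three answers as strings, and it scores how much their wording overlaps. A low score means the details are shifting between runs, which is the warning sign this test looks for.

```python
import re

def normalize(answer: str) -> set[str]:
    """Lowercase an answer and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9']+", answer.lower()))

def consistency_score(answers: list[str]) -> float:
    """Average pairwise Jaccard similarity across the answers.

    1.0 means the runs agree word-for-word; values near 0 mean the
    details keep changing, a classic hallucination warning sign.
    """
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    sims = []
    for a, b in pairs:
        ta, tb = normalize(a), normalize(b)
        sims.append(len(ta & tb) / len(ta | tb) if ta | tb else 1.0)
    return sum(sims) / len(sims)

# Three runs of the same question: two agree, one invents a new "fact".
runs = [
    "The case was decided in 2015 by the Supreme Court.",
    "The case was decided in 2015 by the Supreme Court.",
    "The ruling came from a 1998 appellate decision in Ohio.",
]
print(round(consistency_score(runs), 2))  # β†’ 0.41
```

Word overlap is a crude proxy (two runs can agree in wording and still both be wrong), so treat a low score as a reason to verify, not a high score as proof of truth.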

2. Demand Sources Like a Jeopardy Host

  • Power Prompt:
    “Cite 2 verifiable sources published after 2023. If none exist, say β€˜I don’t know.’”

3. Activate “Skeptical Tito Mode”

  • Red Flags:
    • Overly confident but vague answers
    • No timestamps (e.g., “studies show” without dates)
    • Refusal to say “I’m uncertain”
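The red flags above can also be turned into a rough automated screen. This is a toy heuristic, not a fact-checker; the phrase lists are illustrative assumptions, not a validated lexicon.

```python
import re

# Illustrative phrase lists for the red flags above (assumptions, not exhaustive).
CONFIDENT_PHRASES = ["definitely", "without a doubt", "guaranteed", "it is a fact"]
VAGUE_SOURCES = ["studies show", "experts say", "research proves", "it is well known"]
UNCERTAINTY = ["i'm uncertain", "i am not sure", "i don't know"]

def red_flags(answer: str) -> list[str]:
    """Return the hallucination warning signs found in an AI answer."""
    text = answer.lower()
    flags = []
    # Red flag 1: overly confident wording.
    if any(p in text for p in CONFIDENT_PHRASES):
        flags.append("overconfident wording")
    # Red flag 2: vague sourcing ("studies show") with no year anywhere.
    if any(p in text for p in VAGUE_SOURCES) and not re.search(r"\b(19|20)\d{2}\b", text):
        flags.append("vague sourcing with no dates")
    # Red flag 3: no admission of uncertainty at all.
    if not any(h in text for h in UNCERTAINTY):
        flags.append("no admission of uncertainty")
    return flags

print(red_flags("Studies show this cure definitely works."))
# β†’ ['overconfident wording', 'vague sourcing with no dates', 'no admission of uncertainty']
```

An answer that trips all three flags deserves extra scrutiny; an answer that trips none can still be wrong.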

4. Use AI Lie Detectors

  • Tools:
    • DeepSeek’s FactCheck (best for Taglish)
    • Google’s About This Image (catches fake visuals)

5. The “Lola Verification” Rule

  • Before trusting AI: Ask yourself:
    “Would my lola believe this if I shouted it over karaoke?”

πŸ€– WHY AI HALLUCINATES (EVEN WHEN IT “KNOWS” BETTER)

  • Training Flaws: Learned from unverified internet junk
  • Pressure to Please: Would rather lie than say “I don’t know”
  • No Common Sense: Can’t tell if “dogs have 5 legs” is absurd

πŸ‡΅πŸ‡­ FILIPINO-SPECIFIC RISKS

  • Taglish Trap: AI mixes truths & nonsense in local dialects
  • Scam Potential: Fake “BSP-approved” investment advice
  • Legal Danger: Hallucinated “labor laws” could get workers fired

🎯 BOTTOM LINE: TRUST, BUT VERIFY

AI is the world’s most confident BS artist. Treat it like:

  • A drunk genius (valuable but unreliable)
  • A show-off classmate (needs fact-checking)
  • A lazy research assistant (whose work you must always double-check)

πŸ” Too Cryptic? Explain Like I’m 12

Imagine you ask your super-smart robot friend for help with homework. But sometimes, instead of giving the right answer, it makes stuff upβ€”like saying β€œSharks can fly!” or β€œ2+2=5.”

That’s AI hallucinatingβ€”when chatbots lie without knowing they’re lying.

How to Catch It?

  1. Ask 3 Times β†’ If answers keep changing, it’s fake.
  2. “Show Your Work!” β†’ Demand real sources (like teachers do).
  3. Lola Test β†’ If it sounds too weird to explain to your grandma, it’s probably wrong.

AI is like a know-it-all classmate who cheats on tests. Be smartβ€”double-check its work!
