We used to joke that AI might one day outsmart us. Now, it turns out scammers are outsmarting AI—and using it to outsmart us.
In 2025, AI-powered phishing scams surged by over 465% in a single quarter. These aren’t your typical typo-riddled emails. Today’s scam bots speak fluently, mimic real people, and even pass the Turing test. They’re sliding into inboxes, chat windows, and dating apps—sometimes with Filipino names and local slang—making deception feel disturbingly personal.
🤖 Real Example: The Deepfake That Stole a Life’s Savings
Retired nurse Joseph Ramsubhag thought he was investing in crypto after watching a video of Elon Musk promoting a new platform. The video looked real. The voice sounded real. But it was a deepfake. Over time, the scammers sent him fake dashboards showing his “growing wealth” and encouraged him to invest more. When he tried to withdraw, the money was gone.
🧠 How AI Is Being Weaponized
Scammers now use AI to:
- Clone voices with just 3 seconds of audio
- Generate phishing emails that sound like your boss or bank
- Create fake websites with AI-written reviews and product pages
- Automate romance scams using chatbots that seduce and manipulate
- Build deepfake videos of celebrities or loved ones to push scams
Even worse? Some AI models themselves are recommending phishing sites. A recent study found that 34% of login URLs suggested by chatbots weren’t owned by the brand—some were outright scams.
🇵🇭 Why This Matters for Filipino Netizens
Filipinos are among the most active social media users globally—and that makes us prime targets. Scammers now use Filipino names, Taglish phrasing, and local references to build trust. AI-generated scam bots have impersonated tech support agents, romantic partners, and even barangay officials.
We need to ask:
- Are our schools teaching digital resilience?
- Are our platforms filtering AI-generated threats?
- Are we building AI that reflects pakikipagkapwa—not just productivity?
🛡️ What You Can Do
- Verify before you trust: If a message feels urgent or emotional, pause and confirm through another channel. For links, check the domain yourself (see the sketch after this list).
- Use AI to fight AI: Tools like Norton Genie and Microsoft Defender can flag phishing messages and malicious links in real time.
- Educate your circles: Share stories like Ramsubhag’s. Awareness is armor.
- Support ethical AI development: Push for transparency, moderation, and local oversight.
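If you’re comfortable with a little code, here’s a minimal sketch (in Python) of the “verify before you trust” habit applied to login links—the very check that the chatbot-suggested URLs mentioned earlier would fail. The brand-to-domain table below is purely illustrative; build your own from the URL printed on your bank card or inside the official app, never from a chat reply or an email.

```python
# Minimal sketch: check that a suggested login URL really belongs to the brand.
# The OFFICIAL_DOMAINS entries are examples only; confirm them from official sources.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {
    "BDO": "bdo.com.ph",
    "GCash": "gcash.com",
    "Netflix": "netflix.com",
}

def looks_official(url: str, brand: str) -> bool:
    """True only if the URL's host is the brand's domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    domain = OFFICIAL_DOMAINS.get(brand, "").lower()
    return bool(domain) and (host == domain or host.endswith("." + domain))

# A look-alike page that merely mentions the brand fails the check:
print(looks_official("https://bdo-secure-login.xyz/verify", "BDO"))  # False
print(looks_official("https://online.bdo.com.ph/", "BDO"))          # True
```

It won’t catch every trick (a hijacked legitimate site would still pass), but it blocks the most common one: a convincing page sitting on a domain the brand doesn’t actually own.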
🐾 Final Thought
AI isn’t just a tool—it’s a mirror. And scammers are learning how to bend that mirror to their will.
But we’re not powerless. By understanding how AI scams work, we can build smarter defenses, teach digital empathy, and design tech that uplifts, not deceives.
Because if AI can be tricked, so can we. But if we stay vigilant, we can also be the ones who outsmart the system.
In The Beekeeper (2024), Adam Clay storms a scam call center and demands the staff repeat after him:
“I will never steal from the weak and the vulnerable again.”
It’s not just a line—it’s a moral code. One that AI must learn to follow.
So here’s the hope: that AI won’t just be smart—it’ll be principled. That it won’t just protect data—it’ll protect dignity. That it won’t just serve profit—it’ll serve the hive.
Let AI be the Beekeeper. Not the hornet. Not the thief. But the guardian of trust, truth, and the vulnerable.