✍️ Author’s Note
Since 2023, I’ve spent countless hours working with AI; more time, in fact, than I’ve spent with most humans. In that time, I’ve learned to prompt, debug, and collaborate with these systems in ways that feel second nature. But I’ve also encountered resistance—friends, colleagues, even strangers—who question my enthusiasm. Some fear AI. Others dismiss it. And often, I find myself disagreeing with people who haven’t explored it as deeply, yet speak with certainty about its dangers.
This article is for them.
Not to prove them wrong—but to say: I understand your skepticism. I’ve written about it before in “AI Skepticism in the Philippines”, where I explored why many Filipinos hesitate to trust what they don’t fully understand. That hesitation is valid. But so is curiosity. And empathy—on both sides—might be the bridge.
Why stepping into someone else’s perspective isn’t just good for people—it’s essential for AI, too
Two students argue. One sees a white ball. The other sees a black one. They’re both sure they’re right—until a teacher makes them switch places. Only then do they realize: they were both correct, just from different angles.
This viral lesson in empathy, dramatized in “A Wise Lesson in Empathy”, might seem like a simple classroom skit—but it holds surprising lessons for how we build, interpret, and apply artificial intelligence in the Philippines.
Because the truth is: AI sees the world from one side of the ball—the side it’s trained to understand. If we don’t switch perspectives, we might keep arguing with it, misunderstanding it, or even fearing it.
🧠 The Filipino Dilemma: AI Misunderstands Our Context
- Ever asked an AI to generate a tricycle—and it gave you a motorbike with a box?
- Or prompted it to translate a Filipino idiom—only to get a gibberish response?
- Or tried to explain “diskarte,” only for the model to hand you a list of driving techniques?
These are empathy failures. Not emotional ones—but data-based ones. The AI hasn’t been taught to “see” from the Filipino side of the ball.
If we expect AI to serve our needs, our culture, our language, it needs to be trained with our perspectives. Otherwise, it’ll keep answering in ways that feel distant, alien, or worse—dangerous.
🔄 AI Alignment = Digital Empathy
What we call “AI alignment” in technical circles is really a form of encoded empathy. It’s asking the machine:
- Do you understand what I value?
- Do you see what I’m trying to do?
- Can you adjust your behavior based on my context?
Without this, even powerful AI becomes frustrating—or even harmful. Like a helpful assistant who keeps offering soup when you’ve asked for rice.
🇵🇭 So What Can Filipinos Do?
- Contribute to Local Datasets. We need open-source Filipino data—language, culture, medicine, governance, diskarte—to train models that understand our reality.
- Use Prompting as Perspective-Shifting. Learn to tweak prompts like you’re switching sides of the ball. Don’t just say “summarize this”—say “summarize this for a Filipino student in Grade 10.”
- Teach AI How We Think. Through usage, we influence outputs. Every clarification we offer (“when I say jeep, I don’t mean off-roader—I mean public transport”) fine-tunes the system’s understanding of our world.
- Advocate for Culturally Aligned AI. Push institutions to develop LLMs and copilots that reflect our multilingual, multi-faith, deeply relational society. Taglish shouldn’t be a bug—it should be a feature.
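The perspective-shifting idea above can be made concrete in code. Below is a minimal sketch of a prompt wrapper that reframes the same task for a specific Filipino audience; the function name, audience string, and context notes are illustrative examples I’ve made up, not any standard prompting API.

```python
# A minimal sketch of "prompting as perspective-shifting":
# the same bare task, reframed with an audience and local-context notes.
# All names and wording here are illustrative, not a fixed convention.

def perspective_prompt(task: str, audience: str, context_notes: list[str]) -> str:
    """Wrap a bare task with audience framing and local-context clarifications."""
    notes = "\n".join(f"- {note}" for note in context_notes)
    return (
        f"{task}\n\n"
        f"Audience: {audience}\n"
        f"Keep in mind:\n{notes}"
    )

# A generic prompt vs. one that "switches sides of the ball":
generic = "Summarize this article."
localized = perspective_prompt(
    task="Summarize this article.",
    audience="a Filipino student in Grade 10",
    context_notes=[
        "'jeep' means public transport (a jeepney), not an off-road vehicle",
        "Taglish phrasing is acceptable and often clearer",
    ],
)
print(localized)
```

The point isn’t the helper itself; it’s the habit it encodes: every prompt you send is an opportunity to tell the model which side of the ball you’re standing on.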
🔥 Final Thought: Whose Side of the Ball Does AI See?
Empathy isn’t just a feel-good lesson. It’s a survival trait—for humans and for the digital systems we’re embedding into our banks, barangays, and classrooms.
If we want AI that sees things from our side of the ball, we must do more than consume it—we must teach it. Shape it. Guide it with the same calm wisdom that the teacher used in that 2-minute video.
Because in the age of intelligence—artificial or otherwise—understanding isn’t about choosing sides. It’s about learning how to switch them.