The Invisible Hand That Isn’t Neutral
When most Filipinos hear the word “algorithm,” we think of social media feeds or TikTok’s mysterious magic. But algorithms go far beyond our screens. They decide who gets loans, who gets a job interview, and even who gets flagged for government aid.
And here’s the ugly truth: these systems often profit from keeping people poor. Not by accident—by design.
Poverty as a “Business Model”?
Sounds harsh, right? But look at the receipts.
- In the U.S., researchers found that public assistance algorithms meant to help families ended up denying benefits to the very people who needed them, creating what’s been called a “digital poorhouse” (Harvard Law).
- Studies show credit-scoring AI systems reject low-income applicants simply because they lack a data history. That means the poor stay locked out of loans that could actually help them grow (IoT for All).
- Even welfare programs in Europe have used risk algorithms that disproportionately flagged single mothers and immigrants, creating stress and stigma without solving poverty (Wired).
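The “thin file” problem above can be sketched in a few lines. This is a toy model, not any real lender’s formula: the point weights, the history cap, and the approval threshold are all invented for illustration. What it shows is the structural bias: when a score rewards documented credit history, two people with identical incomes get very different outcomes.

```python
# Toy sketch of a "thin file" credit scorer. All weights and the threshold
# are invented; real models are far more complex, but the failure mode is
# the same: having no data history reads as maximum risk.

def credit_score(monthly_income, months_of_history):
    """Toy score: income helps, but a thin credit file caps the score."""
    score = 300  # floor, loosely echoing common scoring ranges
    score += min(monthly_income // 100, 300)   # income contributes up to 300 pts
    score += min(months_of_history * 5, 250)   # history contributes up to 250 pts
    return score

APPROVAL_THRESHOLD = 600  # arbitrary cutoff for this sketch

# Two applicants with the same income: a salaried worker with a bank record,
# and a cash-only earner who has never appeared in any credit database.
salaried = credit_score(monthly_income=25_000, months_of_history=60)   # 800
cash_only = credit_score(monthly_income=25_000, months_of_history=0)   # 550

print(salaried >= APPROVAL_THRESHOLD)   # True  -> approved
print(cash_only >= APPROVAL_THRESHOLD)  # False -> rejected, income identical
```

Nothing in the code says “reject the poor.” It only says “reject the undocumented,” and in practice those groups overlap heavily.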
Now bring that lens home.
The Filipino Reality: Same Algorithm, Same Struggle
In the Philippines, the setup is familiar:
- Loan apps use automated scoring to reject “high-risk” borrowers. Often, those “risks” are just people without a credit card history, like tricycle drivers or sari-sari store owners.
- Gig platforms decide visibility—who gets more rides, more deliveries, or better-paying tasks—using opaque algorithms. Guess who gets deprioritized? Usually, those without resources to game the system.
- Even education-focused tools can widen inequality if they assume access to strong internet or premium devices—excluding kids in rural areas.
The result: those with less stay with less.
Why Algorithms Act This Way
Here’s the kicker: it’s not that every engineer is evil. It’s that poverty itself is profitable.
Systems optimize for efficiency and profit. Serving the poor is often labeled “high risk, low return.” Meanwhile, targeting ads, loans, and services at the middle and upper classes is seen as “safe business.”
In other words, the algorithm learns to ignore you if you’re poor.
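That “learning to ignore you” needs no explicit rule. Here is a minimal sketch, with made-up names and numbers, of how ranking users purely by expected revenue quietly drops the lowest spenders out of the served set:

```python
# Hypothetical sketch of profit-first ranking. The users and spend figures
# are invented; the point is that optimizing only for expected revenue
# deprioritizes low-income users without any rule that names them.

users = [
    {"name": "office_worker",   "predicted_spend": 1200.0},
    {"name": "freelancer",      "predicted_spend": 450.0},
    {"name": "tricycle_driver", "predicted_spend": 80.0},
]

# Rank purely by predicted revenue and serve only the top slots.
ranked = sorted(users, key=lambda u: u["predicted_spend"], reverse=True)
served = ranked[:2]  # limited slots: whoever ranks last is simply never shown offers

print([u["name"] for u in served])  # ['office_worker', 'freelancer']
```

No one coded exclusion. The sort order did it, and the person left out never even sees what they were excluded from.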
Is There a Way Out?
Yes—but it requires both awareness and action.
- Awareness means Filipinos realizing that being rejected by a bank app or flagged in a system isn’t always your fault—it’s often the code.
- Action means pushing for transparency, fairness, and building community-driven alternatives that value inclusion over profit.
Globally, conversations are starting. Researchers and activists are exposing algorithmic discrimination. But here at home, the dialogue is still quiet.
The Filipino Takeaway
When you hear “AI will change the world,” ask: For whom?
Because if we’re not careful, the same systems that make billionaires richer will keep jeepney drivers, freelancers, and public school kids struggling.
Algorithms don’t just predict—they decide. And unless we demand better, they’ll keep deciding that poverty pays.
When AI Meets Engineered Poverty
Algorithms aren’t the only thing keeping us stuck: poverty in the Philippines is politically purposeful, not just passively tolerated. Programs like conditional cash transfers, short-term aid, and emergency assistance save lives in the moment but often fail to break the cycle of poverty. They’ve become a political strategy more than a path to prosperity.
Enter AI. It holds promise—to map poverty, deliver aid better, and offer personalized learning. Yet AI also risks reinforcing the same systems. Instead of challenging poverty, it may just manage it more efficiently, like changing the oil without fixing the engine.
Read more on AI, work, and the Filipino future at aiwhylive.com.