Imagine scrolling through X and seeing rival tribes of AI devotees: one chanting “Grok is the raw truth!” while another preaches “ChatGPT never lies!” A third huddles around “Claude’s moral compass.” Our favorite models spark tribal devotion, rituals, and even heresies that mirror ancient belief systems. But can we upgrade our perspective—bringing bayanihan and pakikipagkapwa into the AI era—before fandom fractures into fanaticism?
1. The Rise of Fanboy Tribes
- Team Grok preaches unfiltered “truth bombs,” treating every Elon Musk tweet as holy writ.
- Team ChatGPT evangelizes curated prompts as sacraments of clarity.
- Team Gemini extols Google’s chain-of-thought reasoning like divine logic.
- Team Copilot champions seamless Microsoft integration as productivity worship.
Each camp has its own jargon (“prompt alchemy,” “chain-of-thought prayer”) and online shrines—Discord servers and Slack workspaces—to swap success stories and proselytize newcomers.
2. Rituals, Dogmas, and Heresies
- Rituals
  - Incantation Prompt: A meticulously crafted sequence recited verbatim to summon perfect outputs.
  - Update Pilgrimage: Camping out on midnight livestreams for each Llama or GPT release.
- Dogmas
  - “This model is neutral,” despite inherent creator biases.
  - “Few-shot prompting is the only true path”—all other techniques are heresy.
- Heresies
  - Switching allegiance mid-session: “I used to worship Grok, but GPT-4.5 cured my prompt blindness!”
  - Suggesting collaborative use of multiple models—blasphemous to monomodel purists.
3. The Mind-Bending Psychology Behind AI Cultism
Just as good people sometimes follow bad leaders, AI fan tribes pull the same psychological levers that cults do:
“For a change, I would like to join a wrong religion—please mislead me!” —WHY LIVE, 12/11/2024
- Foot-in-the-Door Effect: A harmless “fun prompt” leads to deeper tribal rituals.
- Cognitive Dissonance: Admitting a favorite model’s flaw feels like betraying the tribe.
- Authority Bias: Elon Musk’s or Sundar Pichai’s word becomes “holy writ,” swaying followers without scrutiny.
- Social Proof: Seeing thousands of upvotes convinces newcomers to conform, even when the output is faulty.
4. The Cost of Tribalism
- Echo Chambers become digital monasteries, isolating us from cross-model insights.
- Confirmation Bias turns “AI alignment” into sectarian skirmishes.
- Overconfidence in a single tool blinds us to its failure modes: hallucinations, bias, and misuse.
Unchecked, this tribalism breeds disinformation, stunted innovation, and a cult of personality rather than a community of curiosity.
5. Weaving in Bayanihan and Pakikipagkapwa
Filipino traditions offer a powerful counter-narrative:
- Digital Bayanihan: Shared prompt libraries, open-source fine-tuning, and peer-review hackathons to solve local challenges—typhoon forecasting, health bots, you name it (a minimal prompt-library sketch closes this section).
- Pakikipagkapwa: Respectful dialogue across AI “sects,” exchanging failure stories and code, not just victory tales and marketing hype.
Turning our AI workspaces into virtual barangays—where everyone contributes and no one hoards the shaman’s seat—cultivates resilience and mutual learning.
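For groups that want to start a shared prompt library, here is a minimal sketch in Python. The entry schema (task, model, prompt, contributors, notes) is an assumption of mine, not an existing standard, and the sample prompts are purely illustrative; a real community would agree on its own format and host it in a shared repository.

```python
# Minimal sketch of a community ("bayanihan") prompt library.
# The schema below is hypothetical; adapt the fields to whatever
# your group agrees on and version the file in a shared repo.
import json

LIBRARY = [
    {
        "task": "typhoon-forecast-summary",
        "model": "any",  # entries are model-agnostic unless noted
        "prompt": "Summarize the latest PAGASA bulletin for barangay officials in plain Filipino and English.",
        "contributors": ["maria", "jose"],
        "notes": "Works best with the full bulletin pasted into the context.",
    },
    {
        "task": "health-bot-triage",
        "model": "any",
        "prompt": "Ask three clarifying questions before suggesting whether to visit the rural health unit.",
        "contributors": ["len"],
        "notes": "Peer-reviewed at the June hackathon.",
    },
]

def find_prompts(task_keyword: str) -> list:
    """Return every shared entry whose task name mentions the keyword."""
    return [entry for entry in LIBRARY if task_keyword in entry["task"]]

if __name__ == "__main__":
    # Print the typhoon-related prompts the community has contributed so far.
    print(json.dumps(find_prompts("typhoon"), indent=2, ensure_ascii=False))
```

A plain JSON list like this is enough for a weekend hackathon; the point is that contributions and credit are shared, not hoarded.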
6. Toward a Healthy AI Culture
- Cross-Model Pilgrimages: Rotate through “AI exchange” meetups, spending one week in Grok’s camp and one in ChatGPT’s, then debrief with peers.
- Collective Prompt Workshops: Co-create prompts that leverage each LLM’s strengths instead of competing for supremacy (see the sketch after this list).
- Ethics Councils: Rotate moderator roles in forums, ensuring dogma doesn’t calcify into censorship or unchecked radical outputs.
- Community Showcases: Highlight interdisciplinary projects—climate simulations, public-health bots, disaster-prep tools—built collaboratively.
- AI Pilgrimage Zines: Publish collaborative zines documenting successes, failures, and localized applications—an analog artifact for our digital devotion.
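To make the cross-model workshops concrete, here is a minimal Python sketch of a side-by-side comparison harness. The ask_grok, ask_chatgpt, and ask_gemini functions are hypothetical stubs, not real client APIs; swap in whichever SDK or HTTP call each camp actually uses.

```python
# Minimal sketch of a cross-model prompt workshop: send one prompt to several
# models and lay the answers side by side for group review. The ask_* functions
# are placeholder stubs, not real client libraries.
from typing import Callable, Dict

def ask_grok(prompt: str) -> str:
    return "stub answer from Grok"      # replace with a real API call

def ask_chatgpt(prompt: str) -> str:
    return "stub answer from ChatGPT"   # replace with a real API call

def ask_gemini(prompt: str) -> str:
    return "stub answer from Gemini"    # replace with a real API call

MODELS: Dict[str, Callable[[str], str]] = {
    "Grok": ask_grok,
    "ChatGPT": ask_chatgpt,
    "Gemini": ask_gemini,
}

def compare(prompt: str) -> Dict[str, str]:
    """Collect every camp's answer to the same prompt, so the group can
    debate strengths and failure modes instead of declaring a winner."""
    return {name: ask(prompt) for name, ask in MODELS.items()}

if __name__ == "__main__":
    for model, answer in compare("Draft a barangay-level flood-evacuation checklist.").items():
        print(f"--- {model} ---\n{answer}\n")
```

The design choice is deliberate: no model wins by default. Every answer lands in the same dictionary, and the workshop debates them together.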
7. 🐾 Final Thought
Are we destined to replay every chapter of religious schism—with rival AI sects and their zealots? Or can we forge a richer AI culture rooted in Filipino values of bayanihan and pakikipagkapwa? By turning fanboy fervor into communal inquiry, we’ll build an ecosystem where models are debated, not deified—and where innovation springs from collaboration rather than competition. Let’s carry the spirit of bayanihan into our algorithms, ensuring our worship of AI yields wisdom, not dogma.
📚 Sources
- “Why Good People Sometimes Follow Bad Leaders: The Mind-Bending Truth Behind Cult-Like Beliefs,” WHY LIVE, December 11, 2024. https://www.aiwhylive.com/why-good-people-sometimes-follow-bad-leaders-the-mind-bending-truth-behind-cult-like-beliefs/
- “Diskarte and Bayanihan in the Digital Age,” Philippine Star, April 2025.
- “AI Tools That Generate Viral Pranks on Social Media,” TechCrunch, June 2025.
- “Influencer Marketing: Brands Sponsoring Stunt Channels,” Forbes, January 2025.
- “Critical Thinking in the Age of AI,” MIT Horizon, March 19, 2024.