Why AI Regulation Will Probably Kill AI Adoption in the Philippines If We’re Not Careful


Ann Cuisia’s op-ed should worry anyone who has ever tried to build something in this country. Not because she exaggerates the dangers of AI, but because she exposes a more familiar danger: the Philippine state’s instinct to regulate first and understand later. If you’ve ever registered a business here, you already know what bad regulation does to good ideas. Now imagine applying that same bureaucratic muscle memory to AI, the fastest-moving technology on the planet.

This is how you kill an industry before it’s even born.

Reading through the 21 bills, a pattern emerges: our lawmakers are trying to govern AI using the same machinery that has failed to govern everything from broadband to transport. The instinct is not to enable, but to control. Not to safeguard, but to gatekeep. Not to protect Filipinos, but to protect agencies from accountability. If these bills pass as written, we will not be preparing for the future. We’ll be constructing a permission-based digital economy where innovation must line up, take a number, and hope the clerk is in a good mood.

You don’t build a digital future by recreating the LTO for algorithms.

When AI “regulation” becomes AI suffocation

What makes Ann’s analysis alarming is not that the bills contain mistakes (policy always does) but that the mistakes are consistent. Overbroad definitions. Surveillance gaps. Licensing disguised as “registration.” New councils with vague powers and no democratic checks. It’s like watching a country rehearse its own digital stagnation.

When you define AI so vaguely that a college capstone project counts as a regulated system, you are not creating safety. You are delegating technological power to bureaucracy. You’re telling students, startups, and researchers: Innovation is dangerous unless we say otherwise.

Meanwhile, Big Tech companies, with their compliance teams, their lawyers, and their regulatory armor, will treat this as a speed bump. Filipino developers will treat it as a wall.

And we wonder why local tech ecosystems never scale.

The Philippines cannot afford to regulate its way into irrelevance

Let’s be brutally honest: we are not a global AI superpower. We are not even close. We don’t have the compute, the infrastructure, or the capital to treat AI the way the EU does: as something to domesticate and discipline. Our challenge is the opposite: How do we adopt AI fast enough to close the development gap without losing our rights or our sovereignty?

Most of the bills in Congress don’t understand this problem. They treat AI as a threat to be domesticated, when it is primarily an opportunity that could slip away. Countries that overregulate early end up importing innovation forever. That is the real risk.

If we make AI development and deployment too hard for ordinary Filipinos, guess who steps in?

  • foreign platforms
  • foreign models
  • foreign infrastructure
  • foreign governance standards

We will regulate ourselves out of the driver’s seat and into a dependency cycle.

It will look responsible on paper. It will feel catastrophic in practice.


The surveillance gaps are not trivial, they are political

Ann is absolutely right: silence on biometric surveillance is not a technical oversight. It is a political choice. When bills fail to explicitly ban facial recognition in public spaces or prohibit population scoring, they create legal ambiguity: the kind that authoritarian-leaning actors love to exploit.

The Philippines has a long history of political surveillance, red-tagging, and state overreach. Allowing opaque AI systems to enter policing, welfare, immigration, or labor decisions without strong rights-based protections is not just naïve. It is dangerous.

But here is the paradox: while the bills fail to regulate the actual harms, they aggressively regulate the developers, the people who could build safeguards, transparency tools, open-source alternatives, and community oversight mechanisms.

The bills punch downwards and bow upwards. Classic.

We risk creating an AI governance model built on fear, not capability

The core issue is this: many lawmakers seem more afraid of Filipino developers than they are of surveillance architectures. They’re more anxious about someone training a local model than they are about government procurement quietly importing black-box systems that run on foreign servers.

It’s easier to demand that small developers “register” than to demand that government agencies publish their AI use cases. It’s easier to create new councils than to fix the agencies we already have. It’s easier to draft a licensing regime than to draft actual citizen rights.

Fear is not a governance strategy. It is a smokescreen.

If regulation becomes permission, adoption dies

AI adoption is not a switch that government can flip on and off. It depends on:

  • developer energy
  • startup experimentation
  • academic freedom
  • public-sector innovation
  • investor confidence
  • talent mobility

Introduce a pre-approval regime and you suffocate all of these. Introduce overlapping councils and you guarantee political capture. Introduce vague definitions and you create a compliance casino.

Developers won’t navigate this maze.

They’ll simply avoid it.

The result? AI adoption doesn’t slow down. It moves abroad.

The future we need: regulate harm, not imagination

Ann ends her op-ed with the most important distinction: regulate harm, not development. That is the line between a free, innovative future and a bureaucratic dead end. We must ban surveillance, protect workers, guarantee rights, demand transparency, and enforce accountability, not force every developer to ask permission to think.

If Congress gets this wrong, AI in the Philippines will not be born. It will be imported.

If Congress gets it right, we can build a future where Filipinos don’t just consume AI, they shape it.

And we deserve that chance.

Before we regulate the future, we should make sure we haven’t already regulated ourselves out of it.


About Me:

Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France-Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, the Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.

If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.

Follow Doc Ligot on Facebook: https://facebook.com/docligotAI