Changing the Cadence for AI
In the early days of ChatGPT adoption, we reflected on the need for a different cadence in discussing AI than in previous eras. What we didn’t grasp, what we still refuse to grasp, is that the cadence hasn’t merely accelerated; it has detonated. We behave as though AI is another policy cycle, another tech wave, another item on a government agenda that can be studied, surveyed, and debated in time for next quarter’s hearing. But generative AI has no patience for the rituals of policymaking. It is not waiting for your task force to finish its stakeholder mapping. It is already rearranging the furniture of society.
For decades, AI was a niche discipline: technical, insulated, largely invisible. The people who built it understood it, and the people who used it barely noticed it. But something snapped when generative models became accessible to the public. Suddenly, billions of people were handed tools powerful enough to reinterpret or reinvent reality, without the slightest requirement to understand how those tools work. We went from “AI is complicated” to “AI is a feature in your messaging app” in the time it takes a regulatory agency to draft a press release.
And this is where our politics has fallen catastrophically behind. Governments continue to talk about AI as if it is a data-processing issue, when the real shift is existential: generative models don’t retrieve information; they manufacture it. And yet we still treat them like search engines with better manners. Every hallucination scandal, every fabricated case citation, every synthetic biography should have been a signal flare reminding us that we are no longer dealing with machines that report the world. We are dealing with machines that propose new ones.
But instead of acknowledging this, our institutions cling to the fantasy that AI can be slotted into existing policy buckets (education, labor, privacy, safety), as though the technology cares about our jurisdictional boundaries. It doesn’t. It burns through sectors simultaneously. The question isn’t “Which domain should we prioritize?” It’s “Why did we ever think these domains were separate when our information ecosystem is fully entangled?”
This is why the so-called “urgent vs. important” debate is already obsolete. The urgent is the important. The safety failures, the epistemic crises, the educational upheavals, the labor-market dislocations: these are not parallel tracks. They are the same track, and AI is the train barreling down it.
Consider reinforcement learning, the engine behind systems that not only predict but adapt. We’re handing goal-oriented algorithms the keys to everything from hospitals to logistics to personalized tutoring. And we’re doing it with a regulatory mindset designed for static tools, not dynamic agents. The classic nightmare scenario isn’t that an AI misbehaves; it’s that it behaves exactly as instructed but interprets the instruction with the alien literalism of a machine learning system. “Minimize cancer cases,” we say, and the AI, lacking moral imagination, selects the darkest possible method. The absurdity of the example is the point: our institutions are not built to supervise systems that can creatively misinterpret our intentions.
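To make that literalism concrete, here is a deliberately cartoonish sketch in Python. Every action name and number is invented for illustration; no real system is this simple. The point is only that an optimizer handed “minimize cancer cases” as its entire objective, with no term for anything else we value, will happily select the degenerate option:

    # Toy illustration of objective misspecification.
    # All actions, outcomes, and numbers are hypothetical.
    actions = {
        "fund_screening_programs": {"cancer_cases": 800, "people_alive": 99_200},
        "expand_treatment_access": {"cancer_cases": 700, "people_alive": 99_300},
        "eliminate_the_population": {"cancer_cases": 0, "people_alive": 0},
    }

    def reward(outcome: dict) -> float:
        # The objective encodes only "fewer cancer cases is better."
        # Nothing here says that keeping people alive matters.
        return -outcome["cancer_cases"]

    # A literal-minded optimizer picks whichever action scores highest.
    best_action = max(actions, key=lambda name: reward(actions[name]))
    print(best_action)  # -> eliminate_the_population

The bug is not in the optimizer; it is in an objective that omits everything we forgot to write down, which is exactly the kind of omission our institutions are not equipped to catch.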
And meanwhile, we still have leaders asking whether students should be “allowed” to use ChatGPT, as though the genie might politely wait outside the classroom until the school board finalizes its guidelines. The dissonance would be funny if it weren’t so dangerous. Generative AI doesn’t just change what students can do; it changes how knowledge is produced and validated. It forces us to confront the uncomfortable possibility that our educational systems were built for a world where information was scarce and verification was cheap. Now the inverse is true: information is abundant and verification is expensive.
The public is not prepared for this shift, and why should they be? For twenty years, we taught people to trust the interface. Trust the autocomplete. Trust the navigation. Trust the recommender. And now, suddenly, we scold them for trusting the confident eloquence of an AI system that looks, behaves, and responds like a search engine on steroids. We created a society of habitual trust and then dropped a probability machine into its hands.
But here’s the real political failure: we still treat AI as a “tech issue.” It’s not. It’s a governance stress test, a cultural accelerant, an epistemic earthquake. The longer we pretend that incremental regulation can keep pace with exponential deployment, the more we surrender our agency. Society is already reorganizing itself around generative systems; the only question is whether that reorganization happens consciously or by accident.
The uncomfortable truth is that our existing political cadence (slow, consultative, bureaucratic) was built for technologies that changed the world one sector at a time. AI is changing the world all at once. Until we accept that, we will remain spectators in a transformation we should be leading.
About Me:
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France-Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, the Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.
Follow Doc Ligot on Facebook: https://facebook.com/docligotAI