4Es for Better AI Governance
AI governance has become one of the most crowded, and confusing, policy conversations of our time. Everywhere I look, well-intentioned actors are pulling the debate in different directions. Some insist governance must begin and end with compliance. Others focus almost entirely on technical performance and model capability. Ethics is frequently invoked, yet rarely defined in the same way twice. What we are left with is not a lack of ideas, but a lack of coherence. We are governing fragments of AI, not the system as a whole.
This fragmentation is precisely what the 4E framework (Education, Engineering, Enforcement, and Ethics) seeks to address. I see the
4Es not as a final answer, but as a practical way to align conversations that
are currently happening in silos. The framework gives policymakers, technologists, and
civil society a shared structure for thinking about AI governance without
pretending that one discipline alone has the solution.
The first and most overlooked pillar is Education. AI
governance discussions often assume that understanding AI is the responsibility
of a small group of experts. Yet AI systems are already embedded in daily life,
influencing decisions far beyond technical environments. The crucial distinction here is that education cannot be limited to specialist training. What we
need is mass AI literacy. Policymakers, regulators, judges, journalists, and
ordinary citizens all interact with AI-enabled systems. Without broad-based
understanding, governance will always trail behind deployment, and public trust
will remain fragile.
Engineering, the second pillar, is frequently reduced to the
act of building models or generating content. That narrow view is part of the
problem. Remember that engineering also includes training data,
prompting, deployment decisions, and acceptable use. Governing AI is not only
about policing bias or plagiarism at the point of creation. It is equally about
how AI outputs are used. Deepfakes, misinformation campaigns, and other
AI-driven attacks are often failures of deployment and use, not just design.
Effective governance must therefore engage with the full lifecycle of AI
systems.
The third pillar, Enforcement, is where debates often become
polarized. On one side are calls for bans and strict penalties; on the other,
fears that regulation will stifle innovation. My position is more
balanced, and more realistic. Yes, sanctions for AI misuse are necessary,
particularly since many jurisdictions still lack clear legal consequences for
abuse. But enforcement should not be synonymous with punishment alone.
Incentives for responsible research and innovation matter, as do acceptable-use
guidelines that do not require legislation. Not every guardrail needs to be a
law; some can and should emerge through shared standards and frameworks.
Ethics, the fourth pillar, is perhaps the most discussed and
least operationalized. Ethical principles such as accountability and accuracy
are increasingly recognized, but too often they remain abstract. I place particular emphasis on human-in-the-loop control. Under no circumstances should AI systems operate without the ability for a human to intervene
or shut them down. This is not just a technical preference; it is a governance
mechanism. Human control is what makes accountability meaningful. Without it,
responsibility becomes diffuse and easily avoided.
What makes the 4E framework compelling is not that it
simplifies AI, but that it organizes complexity. Each “E” is distinct, yet they
reinforce one another. Education enables better engineering decisions.
Engineering choices shape what enforcement must address. Enforcement mechanisms
give ethical principles real force. Ethics, in turn, guides all three. Taken
together, they treat AI governance as a system rather than a single regulatory
lever.
To be clear, the 4Es are not comprehensive. They are a starting
point. But in a policy landscape crowded with competing priorities and partial
solutions, that starting point matters. The framework is easy to remember,
accessible across disciplines, and flexible enough to evolve as AI does.
If we want better AI governance, we need fewer siloed
debates and more shared language. The 4Es offer exactly that: a way to bring
education, technology, policy, and ethics into the same conversation. In doing
so, they help us move from reactive regulation toward more coherent,
forward-looking governance.
About Me:
Dominic “Doc” Ligot is
one of the leading voices in AI in the Philippines. Doc has been extensively
cited in local and global media outlets including The Economist, South China
Morning Post, Washington Post, and Agence France-Presse. His award-winning work
has been recognized and published by prestigious organizations such as NASA, Data.org,
Digital Public Goods Alliance, the Group on Earth Observations (GEO), the
United Nations Development Programme (UNDP), the World Health Organization
(WHO), and UNICEF.
If you need guidance
or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.
Follow Doc Ligot on
Facebook: https://facebook.com/docligotAI