AI Policy Is Broken and It’s Not Because of Technology

Most public policy is written from the top down.

Experts gather in rooms far away from everyday life. Laws are drafted. Rules are announced. Then people are expected to adjust. That model has worked before. But when it comes to artificial intelligence, it is no longer enough.

AI did not arrive through legislation. It arrived through daily use.

Students use it to study. Workers use it to write, plan, and solve problems. Small businesses use it to compete. Communities use it to fill gaps where systems fall short. AI is already shaping how people live and work, whether policy is ready or not.

That is why we chose to start from the bottom up.

Download our report here: https://www.researchgate.net/publication/400077927_Grassroots_Perspectives_on_AI_Shaping_Policy_in_Education_Ethics_Engineering_and_Enforcement/references

Instead of asking what rules we could write, we asked what was actually happening. We listened to teachers, engineers, lawyers, ethicists, and community leaders. We looked at how AI shows up in real classrooms, real offices, and real lives. What we found was not chaos, but a disconnect between policy and practice.

From those conversations came a simple framework: Education, Engineering, Enforcement, and Ethics. We call them the 4Es, and together they form a way to bridge everyday AI use with responsible governance.

Education comes first because people are using AI before they understand it.

Students are already relying on AI tools, often without guidance. Teachers are expected to manage this shift with little training or support. Parents are frequently left out of the conversation entirely. When AI literacy is treated as optional or advanced, gaps grow wider. Those with access and guidance move ahead, while others fall behind.

Education policy must accept reality. AI literacy needs to be practical, age-appropriate, and shared early. It must support teachers, involve families, and respect language and cultural context. If people are already using these tools, the least we can do is help them use them wisely.

Engineering reminds us that AI does not run on ideas alone.

It runs on electricity, internet access, data, and computing power. Right now, too many communities depend on systems they do not own and infrastructure they do not control. This creates dependence instead of capability.

Engineering policy is not just about upgrading technology. It is about building foundations so communities can participate fully. Reliable power. Affordable connectivity. Local data systems. Public infrastructure that supports innovation. Without these, AI will remain something people consume, not something they help create.

Enforcement is where fear often takes over.

When technology moves fast, the instinct is to punish first. But overly rigid or slow laws do not stop AI use. They push it underground. This is how “shadow AI” grows: tools used everywhere, but without guidance, accountability, or protection.

Smart enforcement focuses on helping people comply. It adapts to change. It builds on existing laws instead of rushing to ban what is already widespread. Enforcement should make responsible use easier, not impossible.

Ethics ties everything together.

Ethics is not abstract philosophy. It is protection. Without ethical guardrails, AI can amplify misinformation, deepen inequality, enable surveillance, and harm the most vulnerable. These risks are not theoretical. They are already visible.

Ethical AI requires real accountability inside institutions. It requires community input, cultural and language inclusion, and clear responsibility for decisions made by machines. Ethics is how values move from paper into practice.

Across all four Es, one message was consistent: policy cannot stay theoretical while technology is practical.

AI governance must connect rules to real life. It must reflect how people actually use these tools, not how we wish they did. Top-down policy alone cannot do this. It moves too slowly and listens too narrowly.

If AI is growing from the ground up, governance must follow.

This is a call to policymakers to listen earlier and wider. To technologists to build with context and care. To educators to demand support and partnership. And to the public to claim a voice in how AI shapes daily life.

It is time to stop choosing between innovation and protection.

We can have both, but only if we finally marry policy with practicality.


Attribution

This op-ed is informed by Grassroots Perspectives on AI: Shaping Policy in Education, Ethics, Engineering, and Enforcement, a collaborative effort by CirroLytix, Data and AI Ethics PH, and Konrad Adenauer Stiftung, led by France Claire Tayco, Carl Javier, Karla Bernardo, Joshua Abad, and Katrina Bartolome.