Everyone Is Yelling About AI. Here’s Why I’m Choosing the Quiet Middle


Happy New Year!

Lately, every conversation about AI feels loud. One group says it will save us, fix our problems, and unlock a better future. Another group says it will steal our jobs, spread lies, and maybe even end the world. Recently, we talked through the four main camps that shape this debate. If you haven’t seen it, check it out here: https://www.newsai.ph/2025/12/on-internet-there-are-four-ai-camps-and.html. After listening to all sides, I’ve come to a simple conclusion: none of these camps has the full answer.

AI is not magic, and it’s not evil. It’s a tool. Like any powerful tool, it can help or harm depending on how we use it. Shutting it all down isn’t realistic, and going full speed ahead without care is reckless. The real work is finding a practical path in between.

The truth is, AI already does good things. It helps doctors spot patterns, helps researchers move faster, and helps everyday people get more done. But it also already causes harm. Bias shows up in systems people rely on. Misinformation spreads faster. Privacy gets blurry. Workers worry about their jobs. These are not future problems. They are happening right now. That’s why I think focusing on real, present harm matters more than arguing about distant science fiction fears.

What worries me most is how uneven the progress is. We pour enormous energy into making AI more powerful, but we invest very little in making it safe. We build first and ask questions later. That approach doesn’t make sense. Safety and capability should grow together. If a system is powerful enough to affect millions of lives, it should meet clear safety standards before it’s released. We already expect this from buildings, airplanes, and medicine. AI should not be an exception.

At the same time, I don’t think heavy, rigid rules will work either. AI changes too fast. Something new appears almost every day. That means we need flexible rules that can evolve as we learn more. Transparency matters. Independent audits matter. Reporting failures matters. And because AI crosses borders, countries need to cooperate, especially when it comes to high-risk systems.

Another part of this conversation that makes me uneasy is who gets heard. Right now, the loudest voices belong to companies, investors, and people who want everything to move faster. Workers, communities, and regular users often feel left out. That’s a problem. AI will shape how we live and work, so the people affected by it deserve a seat at the table. Power is already concentrating in a small group, and without broader participation, that gap will only grow.

So where does that leave me, and people like us?

I try to stay curious. Using AI is a skill, not a mystery. Like learning a spreadsheet or a new app, the best way to understand it is to use it. I experiment. I practice. I learn how it fits into my work and creative projects. I don’t pretend to know everything. I don’t. But I keep learning anyway.

At the same time, I stay critical. I don’t treat AI as a source of truth. I assume it can be wrong, biased, or misleading. I use it as a brainstorming partner or a drafting tool, but I stay in control. I check facts. I check tone. I think about consequences before acting on what it suggests.

I also take privacy seriously. I’m careful about what personal or sensitive information I share. I look for tools that give me control, and if they don’t, I move on. Convenience isn’t worth giving up my privacy.

When it comes to work, I try not to panic. Technology has always changed jobs. That’s nothing new. What’s different now is that we can use the same tools that worry us to make ourselves stronger. I look at which parts of my work are routine and which parts rely on judgment, communication, and deep knowledge. I invest in those human skills. I use AI to support my learning and productivity, not to replace my thinking.

I don’t believe in doomsday stories, and I don’t believe in blind hype. I believe in staying practical, informed, and involved. The future of AI isn’t something that just happens to us. We help shape it through our choices, our voices, and our willingness to engage.

That’s why I’m choosing the quiet middle. It’s not flashy. It doesn’t fit neatly into a camp. But it feels like the most honest and responsible place to stand.

About Me:

Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France-Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.

If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.

Follow Doc Ligot on Facebook: https://facebook.com/docligotAI