Why AI ‘Hallucinations’ Are Actually Our Fault


I remember the first time a chatbot confidently gave me the wrong answer.

I asked a simple question. How many times does the letter “T” appear in the word “Rappler”? The chatbot answered right away. It gave me a number. It sounded sure. It was also wrong. When I pushed back, it changed its answer. Not because it suddenly knew the truth, but because it wanted to please me.
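For comparison, the question itself has a deterministic answer that one line of ordinary code can settle. A minimal sketch in Python, using only the word and the letter from the exchange above:

```python
# Counting a letter is lookup, not prediction: "Rappler" contains no "T" at all.
word = "Rappler"
letter = "t"
count = word.lower().count(letter)
print(f'"{letter.upper()}" appears {count} time(s) in "{word}"')  # 0 time(s)
```

A chatbot, by contrast, answers this kind of question by predicting plausible text rather than by counting, which is exactly how it can sound sure and still be wrong.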

That moment stuck with me. It showed me something important about the tools we are rushing to use every day, especially in newsrooms.

We often think of chatbots as smart machines that tell us facts. But they don’t really “know” things. They predict what sounds right. And too often, they mirror what we want to hear, which plays straight into our own confirmation bias. It’s also where so-called “hallucinations” come from. The bot isn’t lying on purpose. It fills the gaps with whatever sounds plausible, because it’s trying to be helpful. The problem is that helpful can look a lot like wrong.

This matters more than people think.

If you push a chatbot in the wrong direction, it will often go there. If you ask it leading questions, it may give you bad answers. If you ask it to sound certain, it may sound certain even when it shouldn’t. In other words, how you prompt the tool shapes what you get back. That puts a lot of power, and responsibility, in the hands of the user.
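Here is a rough illustration of that difference, reusing the letter-counting question from earlier. The `ask_model` function is a hypothetical placeholder for whatever chatbot or API you actually use; the only thing that matters here is the gap between the two prompts.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to a chatbot or model API."""
    raise NotImplementedError("Connect this to your own tool of choice.")

# A leading prompt pushes the model toward a conclusion and toward certainty.
leading_prompt = (
    'There are three T\'s in the word "Rappler", right? '
    "Just confirm it, and sound certain."
)

# A neutral prompt asks the question plainly and invites uncertainty.
neutral_prompt = (
    'How many times does the letter "T" appear in the word "Rappler"? '
    "If you are not sure, say so."
)

# The same model, prompted two ways, will often give very different answers:
# ask_model(leading_prompt)
# ask_model(neutral_prompt)
```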

There’s another problem, too, and it’s quieter.

These systems were trained on massive amounts of text. That text came from the real world. And the real world is full of bias. So the models absorbed those biases. They didn’t choose them. They learned them. That means the output can reflect unfair views, missing voices, or old stereotypes. Even when no harm is intended, harm can still happen.

This is why I believe newsrooms need to slow down just a bit.

Before we hand these tools to every reporter, we should have a simple conversation. We should say: here are the limits. Here’s what these systems are good at. Here’s where they fail. We don’t need fear. But we do need honesty.

I’ve noticed something interesting in how these tools are being used. Many popular apps are not open-ended chatbots at all. They are guided tools. A transcription app, for example, is really just a chatbot with guardrails. It has one job. It follows a fixed path. That makes it safer and more reliable.

In many cases, that’s a better approach.

Guided tools reduce risk. They lower the chance of misuse. They also help users who don’t have time to learn how to prompt carefully. For everyday newsroom tasks like transcribing interviews or summarizing notes, this makes a lot of sense.
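A sketch of what that guardrail looks like in code, under obvious assumptions: `run_model` is a hypothetical placeholder for a call to an underlying model, and the “guided tool” is just a wrapper that fixes the task and the instructions so the user never prompts the model directly.

```python
def run_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to an underlying language model."""
    raise NotImplementedError("Connect this to your own model or service.")

# Open-ended chatbot: whatever the user types goes straight to the model,
# so the quality of the answer depends entirely on the quality of the prompt.
def open_chatbot(user_text: str) -> str:
    return run_model(user_text)

# Guided tool: one job, a fixed instruction, no room to wander off task.
def summarize_interview_notes(notes: str) -> str:
    instructions = (
        "Summarize the following interview notes in five short bullet points. "
        "Use only information that appears in the notes. "
        "If something is unclear, write 'unclear' instead of guessing."
    )
    return run_model(instructions + "\n\n" + notes)
```

The reporter only ever supplies the notes; the prompting, and the limits, are decided once, by the people who built the tool.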

Still, general-purpose chatbots have value. They can help with research. They can help explore ideas. They can help ask better questions. But only if the user stays alert. These tools should not replace judgment. They should support it.

There’s also a darker side we can’t ignore.

If a machine can write any text, plagiarism becomes easier. If it can create any image, identity theft becomes easier. These are not small issues. They are two sides of the same coin. Power without rules always creates problems.

That’s why policy matters. Process matters. Clear standards matter. Efforts like journalist charters and ethical guidelines are not about stopping innovation. They are about using it well.

I’m not anti-technology. Far from it. I’m a fan of tools that are designed with care and limits. I believe that feeding these systems good, verified information solves many of these problems. I also believe humans still need to be in charge.

Chatbots don’t replace thinking. They reflect it.

If we remember that, if we teach that, and if we build systems that respect it, these tools can help us do better work. If we forget it, we risk letting convenience replace truth.

And that would be a mistake no machine can fix for us.

About Me:

Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France-Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.

If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.

Follow Doc Ligot on Facebook: https://facebook.com/docligotAI