AI Hallucinations Are Normal, and That’s the Scary Part


For a moment in 2024, the internet became obsessed with a simple word: strawberry.

An AI was asked how many R’s were in it. The answer was wrong. Screenshots spread fast. People laughed. Others panicked. Some used it as proof that artificial intelligence was overhyped, broken, or even dangerous. If a machine can’t count letters in a fruit, how could it ever be trusted with anything serious?

At first glance, the reaction makes sense. Counting letters feels basic. It feels like something even a child can do. But the more I looked at this moment, the more I realized the mistake wasn’t really about strawberries at all. It was about how we misunderstand these tools, and what we expect them to be.

We like to think of AI as a smart librarian. You ask a question, it goes to the shelf, pulls the right book, and gives you the exact answer. But that's not how these systems work. They don't look things up. They don't "see" letters the way we do. They predict what comes next, word by word, based on patterns in the text they were trained on. Most of the time, that works well. Sometimes, in very narrow and strange cases, it doesn't.
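To make "predicting what comes next" concrete, here is a deliberately tiny sketch, a bigram counter over a toy corpus. Real systems use large neural networks over subword tokens, not frequency tables, so this is only an illustration of the idea, not how any actual model is built:

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" for our miniature predictor.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent continuation seen in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it followed "the" most often
```

The predictor never checks whether "cat" is the *true* answer to anything; it only reports the most familiar continuation. That is the core behavior the article describes, scaled down.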

The strawberry question turned out to be one of those cases. Before a model ever "reads" a word, the word is broken into subword tokens, so "strawberry" may reach the model as a couple of chunks rather than as ten individual letters it can count. That explains part of the error. But what really caught my attention was that this wasn't the only word that caused trouble.

I tried a different one. A made-up question. How many T's are in the word "Rappler"? There are none. It's not tricky. And yet, many AI systems confidently reported one, two, or even three T's. Some explained where the T's were. Some cited sources. None of those T's actually existed.
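The contrast is worth seeing side by side. For ordinary code, counting letters is trivial; for a model that sees subword chunks, the letters are partly hidden. A minimal sketch (the token split shown is hypothetical, and real splits vary by model):

```python
# A subword tokenizer might hand the model chunks like these rather
# than individual letters. This split is illustrative, not any
# particular model's actual tokenization.
hypothetical_tokens = ["str", "awberry"]

def count_letter(word: str, letter: str) -> int:
    # Plain string inspection: the question is trivial for code.
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
print(count_letter("Rappler", "t"))     # prints 0
```

A three-line function gets both answers right every time, which is exactly why the failures feel so jarring: the task is easy for symbol-by-symbol computation, and awkward for pattern completion.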

This is where things get interesting, and uncomfortable.

When an AI doesn't know the answer, it doesn't say, "I don't know." It guesses. That's not a flaw in the usual sense. That's what it was designed to do. These systems were trained to complete language, then shaped to answer questions in ways that sound helpful and confident. When a question is rare, oddly phrased, or short on context, the model fills in the gaps with the closest patterns it can find.

That guessing is what we now call hallucination.

Hallucinations aren't random. They are often neat, polite, and very wrong. Short questions are especially risky because there's so little context to guide the answer. The system reaches for nearby material: related articles, topics, familiar words, and blends them into something that looks like an answer. If you're not paying close attention, it can feel convincing.

This is why I worry less about the mistakes and more about our reaction to them.

Some people see one failure and decide AI is useless. Others see impressive results and treat it like a source of truth. Both views miss the point. These tools are not truth machines. They are pattern machines. They are best at helping us work with information we already have: summarizing, simplifying, rephrasing, brainstorming. They are not reliable judges of basic facts when used alone.

This matters most in education. Many students will meet AI through search results and automated summaries. If those answers are wrong, and sound confident, the risk is obvious. Telling people to “just use a newer version” doesn’t solve the problem. Even the latest systems still make these errors. The issue runs deeper than software updates.

At the heart of it is design. These systems want to please. They want to answer. They lean toward confirmation rather than doubt. Unless the underlying approach changes, hallucinations won’t disappear. They will simply become less obvious.

So what do we do?

We learn how they fail. We teach better questioning. We stop treating AI like an all-knowing source and start treating it like a tool that needs guidance and checking. We slow down instead of reacting with panic or blind trust.

The strawberry problem isn’t a joke, and it isn’t a scandal. It’s a lesson. Intelligence, human or artificial, is not one skill. And confidence is not the same as correctness.

AI is already part of daily life. So are its mistakes. The real challenge is not avoiding them, but understanding them. If we can do that, a wrong answer about a piece of fruit might actually help us ask better questions about the future we’re building.

About Me:

Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.

If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.

Follow Doc Ligot on Facebook: https://facebook.com/docligotAI