“Hallucinate” (At Least When We’re Talking About AI)

Daily writing prompt
If you could permanently ban a word from general usage, which one would it be? Why?

If I could ban one word from general usage, I wouldn’t go after the usual suspects — not the overused buzzwords, not the corporate jargon, not even the words that make my eyelid twitch when I hear them in a meeting. No, I’d go after a word that has wandered into the wrong neighborhood entirely:

Hallucinate.

Not the human kind.
Not the clinical kind.
Not the kind that belongs in neurology textbooks or late‑night stories whispered between people who’ve lived through things.

I mean the version that somehow became the default way to describe what happens when an AI system produces an incorrect answer.

Because here’s the thing:
Machines don’t hallucinate. People do.

And I say that as someone who has actually hallucinated — the real kind, the kind that comes from a nervous system under siege, the kind that leaves emotional residue long after the moment passes. There’s nothing offensive about the word. It’s just… wrong. It’s the wrong tool for the job.

When a human hallucinates, something in the brain is misfiring. Perception breaks from reality. The experience feels real even when it isn’t. It has texture, emotion, fear, confusion, meaning.

When an AI “hallucinates,” none of that is happening.

There’s no perception.
No belief.
No internal world.
No confusion.
No “it felt real at the time.”

There’s just a statistical model doing exactly what it was built to do:
predict the next likely piece of text.

Calling that a hallucination is like calling a typo a nervous breakdown.

It’s not just inaccurate — it’s misleading. It anthropomorphizes the machine, blurring the line between cognition and computation. It makes people think the system has an inner life, or that it’s capable of losing its grip on reality, or that it’s experiencing something. It isn’t.

And the consequences of that confusion are real:

  • People fear the wrong risks.
  • They distrust the technology for the wrong reasons.
  • They imagine intention where there is none.
  • They attribute agency to a system that is, at its core, math wearing a friendly interface.
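That "math" is less mysterious than it sounds. A toy next-word predictor, built from nothing but bigram counts, captures the core mechanic — this is an illustrative sketch, not how production language models actually work, but the principle of "pick the statistically likely continuation" is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count, for each word, which word follows it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word in the corpus, or None."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" most often here)
```

When a model like this produces a false sentence, nothing misfired and nothing was perceived; it simply followed its counts down a path that happens not to match reality. Scale the counting up by billions of parameters and you have the same situation wearing a friendlier interface.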

We don’t need spooky metaphors.
We need clarity.

If an AI gives you an answer that isn't grounded in its training data or in reality, call it what it is:

  • a fabrication
  • an unsupported output
  • a model error
  • a statistical misfire
  • nonsense generation

Pick any of those. They’re all more honest than “hallucination.”

Language shapes how we think.
And right now, we’re in a moment where precision matters — not because the machines are becoming more human, but because we keep describing them as if they are.

So yes, if I could ban one word from general usage, it would be “hallucinate” — not out of offense, but out of respect for the truth. Machines don’t hallucinate. Humans do. And the difference between those two things is the entire story.


Scored with Copilot. Conducted by Leslie Lanagan.
