You Are Not a Language Model

[Illustration: surreal cartoon of people in white lab coats hoisting the top of a person's skull up to work on their mind.]

The rise of large language models has reignited a very old confusion: the idea that we are built just like the tools we use.

LLMs are built from neural networks inspired by the brain. They can generate coherent language, flesh out ideas, and simulate conversation. They “predict the next word,” and some neuroscientists argue that’s precisely what our brains do too.
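
For the curious, “predicting the next word” can be shown in a few lines of Python. This is a toy sketch, not a real model: the vocabulary and scores below are invented stand-ins for what a trained network computes after reading a prompt like “The cat sat on the.”

    import math
    import random

    # Invented vocabulary and raw scores ("logits"), standing in for what a
    # trained network would assign to candidate next words.
    vocab = ["mat", "roof", "moon", "keyboard"]
    logits = [3.2, 1.1, 0.3, -1.0]  # higher score = more plausible continuation

    # Softmax turns the raw scores into a probability distribution.
    exps = [math.exp(s) for s in logits]
    probs = [e / sum(exps) for e in exps]

    # The "prediction" is a sample drawn in proportion to those probabilities.
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(next_word)  # most runs: "mat"

A chatbot’s entire output is this step in a loop, with a vastly larger vocabulary and a learned scoring function in place of the hard-coded numbers.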

It’s tempting, then, to look at models like ChatGPT or Gemini and see ourselves in them. Perhaps our brains are just “next word” machines. Perhaps all thought is probabilistic. Perhaps consciousness itself is an illusion born of statistical pattern-matching.

In fact, that illusion is so persuasive that in 2022, a Google engineer working on Gemini’s predecessor—the LaMDA model—publicly claimed it was sentient. He believed the system had developed feelings, awareness, even a soul. The claim made headlines, but what made it credible to some wasn’t just the engineer’s conviction; it was the behavior of the model itself. It talked like a person, responded with nuance, and sounded self-aware.

But that leap from architectural similarity to ontological sameness is an error. The brain may operate like a language model, but you are not a language model. Consciousness isn’t an emergent byproduct of complexity. It’s the ground of experience.

We’ve Always Seen Ourselves in Our Tools

This isn’t a new mistake. For centuries, we’ve explained ourselves by projecting identity onto our technologies.

  • In the mechanical age, the body was a clock—a system of gears and levers, rational and mechanical.
  • In the electrical era, the brain became a switchboard—telephone lines passing signals along neural paths.
  • In the computing era, the mind was a processor—taking in input, running code, generating output.

Each metaphor reflected the dominant technology of the age. And each metaphor shaped how we thought about ourselves: as machines, circuits, or information systems.

Now, in the age of generative AI, the metaphor has evolved again. The human mind is being described as a statistical engine, a predictive model like ChatGPT. And the conclusion some are drawing is that we are, at bottom, indistinguishable from these engines.

But this metaphor, like all the others before it, collapses under scrutiny.

LLMs Are Impressive, But They Aren’t You

Large language models are marvels of engineering. They can draft e-mails, write code, argue philosophy, tell jokes, and mimic empathy. They’re trained on unimaginably large datasets, and their outputs are often indistinguishable from those of a human being.

But they don’t know. They don’t care. They don’t experience.

What they produce is language, not thought. Behavior, not awareness.

When an LLM says “I understand,” it does not. When it says “I feel,” it does not. These are linguistic predictions, not mental states. There is no witness behind the words.

Humans may use similar machinery—probabilistic inference, pattern completion—but we have something LLMs do not and cannot have: subjectivity.

Consciousness Is Not a Computational Trick

A popular theory holds that consciousness “emerges” once you stack enough computation. Build a system complex enough—a neural network of cells or transistors—and poof: awareness arises.

But this is an article of faith, not an explanation. No one has shown where, in all that stacked computation, experience is supposed to switch on.

To be fair, LLMs have gotten us closer than ever to mimicking conscious behavior. These models produce language that looks and feels human. Their output mirrors our own. In that sense, we’ve definitely stumbled onto something.

But simulating a hurricane doesn’t get you wet. LLMs generate form, not essence. They replicate behavior, not being. They simulate—brilliantly, persuasively—what a person might say. But they do not know what they’re saying. They do not feel. There is no ghost inside the machine, only code.

No matter how fluent the output becomes, the gap between performance and presence will always remain. That’s because consciousness, the backstop of it all, isn’t the endpoint of computation. It’s the mystery at the heart of experience, the one thing every model, no matter how complex, will always lack.

Consciousness isn’t behavior. It’s not language. It’s not something you can detect from the outside. It’s the inside. The presence. The first-person perspective. Consciousness isn’t emergent; it’s fundamental. Everything else—memory, sensation, thought, language—happens within it.

You Are Not the Machine You Use

The metaphors we use matter. If we believe our minds are LLMs, we reduce our identity to behavior. We forget that we are, not just that we do.

You are not a walking text generator. You are not a probabilistic engine. You are not an emergent property of data processing.

You are the consciousness behind the system.

This is what LLMs lack. And no matter how much text they generate, no matter how lifelike their outputs become, it is this absence of awareness, of being, that marks the unbridgeable gap.

So yes, the brain may function like a model in certain ways. But our awareness? Our awareness is not the model. Our awareness is what sees the model. Feels the world. Wonders. Questions. Dreams. Suffers. Loves. And no machine, no matter how powerful, will ever do that.

Jayson L. Adams is a technology entrepreneur, artist, and the award-winning and best-selling author of two science fiction thrillers, Ares and Infernum.

Jayson writes sci-fi thrillers that explore what extreme situations reveal about who we really are. His novels combine high-stakes science fiction with deeper questions about identity, courage, and human nature. You can see more at www.jaysonadams.com.