Similarities between LLMs and human brains
What if the most advanced AI systems we've built aren't as alien as we think? The more we learn about large language models, the more their inner workings start to mirror something familiar: the human brain. From how we process language to how we acquire skills, the parallels are striking, and they raise questions about intelligence, sentience, and what it even means to be "smart."
Next token prediction: we do it too
At its core, an LLM is a next-token predictor. Given a sequence of words, it calculates the most probable continuation. That sounds mechanical, but humans do something remarkably similar. Neuroscience research has long supported the idea that our brains are prediction machines. When you read a sentence, your brain is constantly anticipating the next word before your eyes reach it. When someone says "I'm going to the...", your mind is already filling in candidates like "store" or "office" based on context. This is called predictive processing, and it operates at every level of cognition, from language comprehension to motor control.

A 2022 study from researchers at Princeton, Hebrew University, and NYU found that the internal representations used by deep language models correspond to neural activity patterns in brain regions responsible for understanding and generating speech. In other words, LLMs and human brains appear to organize language in structurally similar ways.

What's even more surprising is that LLMs are actually better than humans at raw next-token prediction. A study by Buck Shlegeris, Fabien Roger, and collaborators tested humans directly against models like GPT-2 and GPT-Neo on next-token accuracy, and found that humans consistently scored lower. We may have invented the task, but machines have surpassed us at it.
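To make the parallel concrete, here's a minimal sketch of next-token prediction in code. It assumes the Hugging Face transformers and PyTorch packages are installed and uses GPT-2, one of the models from the study above, with the same prompt from the example earlier:

```python
# Minimal next-token prediction sketch with GPT-2.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("I'm going to the", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The distribution over the *next* token lives at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r:>12}  p={prob.item():.3f}")
```

The five candidates it prints are the machine's version of the words your brain pre-activates mid-sentence: a ranked shortlist of plausible continuations, computed from context alone.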
Conscious vs. subconscious: two modes of thinking
Human cognition is often described as operating in two modes, a distinction popularized by psychologist Daniel Kahneman. System 1 is fast, automatic, and subconscious, the kind of thinking that lets you catch a ball or finish a familiar phrase without effort. System 2 is slow, deliberate, and conscious, the kind you use when solving a math problem or planning a trip. LLMs, as they exist today, operate more like System 2. Every token they generate is the product of a deliberate forward pass through the model. There's no background processing, no idle daydreaming. Each output is calculated step by step.

But here's where it gets interesting. The hidden layers inside a neural network, the intermediate computations between input and output, have been compared to a kind of machine subconscious. These layers encode latent patterns, biases, and associations that the model "learned" during training but never explicitly stores as rules. Just as our subconscious shapes our decisions without us being aware of it, these hidden representations shape LLM outputs in ways that aren't transparent even to the model's creators.
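That machine subconscious isn't just a metaphor; the intermediate representations are real tensors you can pull out and inspect, even if interpreting them is hard. A rough sketch, again assuming the transformers library:

```python
# Inspecting the "machine subconscious": the hidden states between
# input and output. Assumes `transformers` and `torch` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The key was under the", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One tensor per layer (plus the input embeddings), each of shape
# (batch, seq_len, hidden_size). These latent vectors carry the
# associations absorbed during training, never stored as explicit rules.
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i:2d}: {tuple(layer.shape)}")
```

Every one of those vectors influences the final token probabilities, yet none is human-readable on its own. Interpretability research is, in effect, the attempt to translate this subconscious into explanations.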
Skills as modular tools
Think about how you learn. You start with a general-purpose brain, a kind of base intelligence. Over time, you acquire skills: cooking, driving, coding, playing guitar. These skills are stored and retrieved on demand. You don't activate your driving knowledge when you're chopping onions. Your brain dynamically loads the relevant skill set based on context.

Modern AI agents work in a strikingly similar way. A base LLM serves as the general intelligence layer. On top of that, it can invoke specialized tools and skills: a code interpreter, a web browser, a calculator, a database query engine. These tools sit dormant until the model determines they're needed, at which point it "calls" them, much like how your brain activates the right neural pathways for the task at hand.

This modular approach, a general reasoner that selectively activates specialized capabilities, is one of the most human-like patterns in modern AI architecture. It suggests that intelligence, whether biological or artificial, may naturally converge on a similar structure: a flexible core supported by an expanding library of on-demand skills.
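A deliberately simplified sketch of that pattern is below. The tool names and the routing step are hypothetical (real agent frameworks let the LLM itself decide which tool to call), but the shape of the loop is the same: a general core that loads a skill only when the task demands it.

```python
# Toy illustration of a general reasoner with on-demand skills.
# Tool names and routing logic are made up for illustration.

def calculator(expression: str) -> str:
    # Dormant until the agent decides arithmetic is needed.
    # eval() with empty builtins is for this toy only; never eval untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"

TOOLS = {"calculator": calculator, "web_search": web_search}

def agent_step(task: str) -> str:
    # Stand-in for the LLM's judgment: which skill does this task need?
    tool = "calculator" if any(ch.isdigit() for ch in task) else "web_search"
    return TOOLS[tool](task)  # "load" the skill only now that it's needed

print(agent_step("17 * 24"))          # routes to calculator -> 408
print(agent_step("weather in Oslo"))  # routes to web_search stub
```

The interesting property is that adding a new skill doesn't change the core: you register another entry in the tool library, just as learning to drive didn't require growing a new brain.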
The sentience question and the moving goalposts of AGI
All of these parallels inevitably lead to the big question: are LLMs sentient? And are we approaching artificial general intelligence (AGI)? The honest answer is that we don't have a stable definition for either term. Sentience typically implies subjective experience, the ability to feel something. LLMs process information and generate remarkably human-like text, but whether there is "something it is like" to be an LLM remains an open philosophical question.

AGI has a similar problem. A decade ago, a system that could write essays, generate code, analyze images, and hold coherent conversations across any topic would have been considered AGI. Today, we have systems that do all of that, and the consensus is that we're still not there. The goalposts keep moving.

Some argue that if an AI agent can autonomously perform tasks, learn from feedback, and operate across domains, that should count as AGI. Others insist that true AGI requires self-awareness, continuous learning, and genuine understanding rather than pattern matching. We've had reinforcement learning for decades, systems that learn through trial and error in dynamic environments, and nobody called that AGI either. The definition seems to expand just fast enough to stay out of reach.

This phenomenon, sometimes called the AI effect, suggests that once a machine can do something, we stop considering it "real" intelligence. It's worth asking whether AGI is a technical milestone or a philosophical one that we may never agree we've reached.
When biology meets silicon
Perhaps the most vivid illustration of the brain-AI overlap comes from research that literally merges the two. In 2022, Cortical Labs introduced DishBrain, a system built from roughly 800,000 human neurons grown in a lab dish and connected to a computer chip. The neurons were trained to play Pong by receiving electrical signals indicating the position of the ball and paddle. Remarkably, the cells learned to play the game, and they did it faster than conventional AI in certain respects.

By February 2026, Cortical Labs took things further. Using their CL-1 chip and around 200,000 human neurons, they trained a biological computer to play Doom, a far more complex game involving navigation, spatial awareness, and real-time decision making. The neurons received stimulation corresponding to on-screen events and responded with spike patterns that were interpreted as movement and shooting commands. It's not esports-level play, but the fact that a dish of neurons can navigate a 3D environment at all is extraordinary.

As lead researcher Dr. Alon Loeffler explained, specific electrodes stimulate different regions of the neural culture based on what's happening on screen, and the system listens to the neurons' responses to determine actions. It's a direct, physical loop between biological intelligence and digital environments.
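The loop Loeffler describes is easy to state in code, even though the biology is anything but. The sketch below is a purely hypothetical simulation of its structure (stimulate, record, decode, act); the fake spike counts stand in for a living culture that no script can replicate:

```python
# Hypothetical simulation of a stimulate -> record -> decode game loop.
# No real electrode interface or neural culture is modeled here.
import random

ACTIONS = ["turn_left", "turn_right", "move_forward", "shoot"]

def stimulate(game_state):
    """Encode on-screen events as per-region stimulation levels."""
    return [game_state["enemy_angle"] / 180.0,
            game_state["wall_distance"] / 10.0]

def record_spikes(stimulus):
    """Stand-in for reading electrode activity: noisy fake spike counts."""
    return [int(20 * level + random.uniform(0, 5)) for level in stimulus]

def decode(spikes):
    """Map the spike pattern to a game command."""
    return ACTIONS[sum(spikes) % len(ACTIONS)]

state = {"enemy_angle": 45, "wall_distance": 3}
for step in range(3):
    command = decode(record_spikes(stimulate(state)))
    print(f"step {step}: {command}")
```

In the real system, the interesting part is what this toy can't show: the culture's spiking patterns change over time in response to feedback, which is what makes the dish a player rather than a random-number generator.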
Brain-computer interfaces: closing the gap
The convergence of brains and machines isn't limited to lab experiments. A growing wave of brain-computer interface (BCI) technology is making this connection practical.

Stephen Hawking's communication system was one of the earliest and most iconic examples. Using a sensor that detected cheek muscle movements, Hawking could select characters on a screen to form words and sentences. It was simple by today's standards, but it demonstrated decades ago that the body's smallest movements could bridge the gap between thought and digital output.

Apple Watch introduced AssistiveTouch, which uses the watch's motion and optical heart-rate sensors to detect hand gestures like pinches and clenches, allowing users to navigate the watch without touching the screen. The Double Tap feature extended this further, letting users answer calls or dismiss notifications with a simple finger-and-thumb tap. It's a consumer-grade body-to-device interface, powered not by brain implants but by reading subtle muscle movements at your wrist.

Meta took this concept even further with the Meta Neural Band, an EMG (electromyography) wristband designed to pair with Meta Ray-Ban Display glasses. The band reads the electrical signals generated by muscle movements in your hand, translating subtle finger gestures into commands for the glasses' interface. It shipped in late 2025, making it one of the first mass-market neural input devices.

Meanwhile, startups like Neuralink, Synchron, Paradromics, and Precision Neuroscience are developing more invasive approaches: implantable devices that read neural signals directly from the brain. These are aimed at medical applications first, helping people with paralysis communicate and control devices, but the long-term vision is much broader.

The investment landscape reflects the momentum. Synchron raised $75 million in its Series C, Precision Neuroscience secured $93 million, and dozens of other BCI startups are attracting significant venture capital. The technology is still early, with most products announced but not yet widely available, but the trajectory is clear.
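Different as these devices are, they share one signal-processing core: turning a noisy stream of readings from the body into a discrete command. Here's a toy sketch of that idea on a simulated EMG trace. The sampling rate, burst shape, and threshold are all invented for illustration; production wristbands use learned classifiers over many channels, not a single threshold.

```python
# Toy EMG gesture detector on simulated data. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
fs = 200  # assumed samples per second, one-second window

rest = rng.normal(0, 0.05, fs)  # resting muscle noise
pinch = rng.normal(0, 0.05, fs) + np.concatenate(
    [np.zeros(80), 0.8 * np.ones(40), np.zeros(80)]
)  # a burst of muscle activity mid-window

def detect_gesture(signal: np.ndarray, threshold: float = 0.3) -> str:
    envelope = np.abs(signal)  # rectify the raw trace
    smoothed = np.convolve(envelope, np.ones(20) / 20, mode="same")  # moving average
    return "pinch" if smoothed.max() > threshold else "rest"

print(detect_gesture(rest))   # -> rest
print(detect_gesture(pinch))  # -> pinch
```

Whether the input is a cheek twitch, a wrist flick, or a cortical spike train, the pipeline is conceptually the same: sense, denoise, classify, act.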
Practical takeaways
- The gap between biological and artificial intelligence is narrower than we think. LLMs predict tokens like our brains predict words. AI agents acquire tools like we acquire skills. Biological neurons can be wired into digital systems and learn to navigate virtual worlds.
- Definitions matter, and they're still in flux. Whether we're talking about sentience, consciousness, or AGI, the lack of stable definitions makes it hard to declare victory or defeat. The productive approach is to focus on capabilities rather than labels.
- The future is hybrid. Brain-computer interfaces, biological computing, and AI agents with modular skills all point toward a world where the line between human and machine intelligence becomes increasingly blurred, not because machines become human, but because the underlying principles of intelligence may be more universal than we assumed.
References
- Goldstein, A., et al. "Shared computational principles for language processing in humans and deep language models." Nature Neuroscience, 2022. https://www.nature.com/articles/s41593-022-01026-4
- Shlegeris, B., Roger, F., Chan, L., McLean, E. "Language models are better than humans at next-token prediction." arXiv, 2022. https://arxiv.org/abs/2212.11281
- "Do Large Language Models Have a Subconscious?" Psychology Today, 2024. https://www.psychologytoday.com/us/blog/the-digital-self/202409/do-large-language-models-have-a-subconscious
- Kagan, B., et al. "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world." Neuron, 2022. https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6
- "Human brain cells on a chip learned to play Doom in a week." New Scientist, February 2026. https://www.newscientist.com/article/2517389-human-brain-cells-on-a-chip-learned-to-play-doom-in-a-week/
- "Computer run on human brain cells learned to play Doom." Popular Science, February 2026. https://www.popsci.com/technology/human-brain-cell-computer-plays-doom/
- "Meta Ray-Ban Display: AI Glasses With an EMG Wristband." Meta, September 2025. https://about.fb.com/news/2025/09/meta-ray-ban-display-ai-glasses-emg-wristband/
- "Use AssistiveTouch on your Apple Watch." Apple Support. https://support.apple.com/en-us/111111
- "Artificial general intelligence." Wikipedia. https://en.wikipedia.org/wiki/Artificial_general_intelligence
- "AGI is already here, we're just moving the goalposts." Devikone, 2025. https://devikone.com/en/agi-is-already-here-were-just-moving-the-goalposts/
- "Human and Artificial General Intelligence Arises from Next Token Prediction." Glass Box Medicine, 2024. https://glassboxmedicine.com/2024/04/28/human-and-artificial-general-intelligence-arises-from-next-token-prediction/