Are humans non-deterministic?
Every time you ask ChatGPT the same question, you get a slightly different answer. We call that non-deterministic. But here's the thing: the same is true of you. Ask yourself the same question on two different days and you'll phrase the answer differently, emphasize different parts, maybe even change your mind. So what does "non-deterministic" actually mean, and are humans any different from the machines we're building?
The determinism debate, briefly
The question of whether humans are deterministic is not new. Philosophers have argued about free will and determinism for centuries. The determinist position is straightforward: everything that happens, including every human decision, is the inevitable result of prior causes governed by the laws of physics. Your brain is made of atoms. Atoms follow physical laws. Therefore, your thoughts and choices are, in principle, predictable given enough information about initial conditions.

Neuroscience has added fuel to this fire. In Benjamin Libet's famous 1983 experiment, brain activity associated with a decision (the "readiness potential") was detected before subjects reported being consciously aware of their choice. This suggested that what we experience as a deliberate decision might actually be our brain's after-the-fact narration of something already set in motion.

More recent research has added nuance to this picture. Studies on intentional inhibition suggest we may retain the ability to consciously override pre-programmed actions, a kind of "veto power" over unconscious impulses. The debate remains open, but the core tension is clear: if the brain is a physical system, and physical systems are governed by deterministic laws, then where does the sense of choice come from?
How LLMs actually work
To understand the comparison, it helps to know what's happening under the hood of a large language model. An LLM is, at its core, a mathematical function. It takes an input (your prompt) and produces a probability distribution over every possible next token in its vocabulary. Given the same input and the same model weights, that distribution is identical every time. The model itself is perfectly deterministic.

The randomness people associate with LLMs comes from what happens after the model produces its output distribution. A sampling step selects the next token based on those probabilities, and a parameter called "temperature" controls how much randomness is injected into that selection. A low temperature makes the model almost always pick the highest-probability token. A high temperature spreads the probability more evenly, making less likely tokens more viable.

Even at temperature zero, though, LLMs are not perfectly deterministic in practice. Floating-point arithmetic on parallel GPU hardware can introduce tiny variations. When two tokens have nearly identical probabilities, these micro-differences can tip the scale. And once a different token is selected early in generation, the entire downstream output shifts, because each token conditions the next. So the non-determinism in LLMs comes from two places: intentional randomness via sampling, and unintentional randomness from hardware and numerical precision.
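To make the sampling step concrete, here is a minimal sketch of temperature sampling in Python. The function name, the toy logits, and the three-token vocabulary are illustrative, not taken from any particular model:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token index from raw model scores (logits)."""
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature scaling followed by softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    # random.choices normalizes the weights into probabilities.
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

toy_logits = [2.0, 1.9, 0.5]  # scores for a hypothetical 3-token vocabulary
print(sample_next_token(toy_logits, temperature=0))    # deterministic: always 0
print(sample_next_token(toy_logits, temperature=1.5))  # often 0, sometimes 1 or 2
```

The hardware caveat is just as easy to demonstrate. Floating-point addition is not associative, so parallel reductions that sum the same numbers in a different order can produce slightly different logits:

```python
# Floating-point addition is not associative: the same three numbers
# summed in different orders give different results, which is how
# parallel GPU reductions introduce tiny run-to-run variations.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0  (the 1.0 is lost to rounding)
```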
The uncomfortable parallel
Here's where it gets interesting. The human brain may operate in a strikingly similar way. At the macro level, we appear non-deterministic. We make different choices in seemingly identical situations. We change our minds. We surprise ourselves. But at the micro level, neurons fire based on electrochemical signals that follow physical laws. The brain is, as far as we can tell, a physical system. The apparent randomness in human behavior may come from sources analogous to those in LLMs:
- Sensitivity to initial conditions. The brain is a complex system where tiny differences in state (your mood, your blood sugar, what you read five minutes ago) can cascade into different outcomes. This is not true randomness. It is deterministic chaos: the system is predictable in principle but practically impossible to forecast, because small perturbations get amplified (the first sketch after this list makes this concrete).
- Noise in the hardware. Neurons are not perfect digital switches. Synaptic transmission involves probabilistic release of neurotransmitters, and thermal noise affects ion channels. This biological "floating-point imprecision" introduces variability at the lowest level of computation (the second sketch below models a stochastic synapse).
- Quantum effects (maybe). Some physicists and philosophers have argued that quantum indeterminacy in neural processes could introduce genuine randomness into decision-making. This remains highly speculative, and most neuroscientists consider quantum effects too small to influence cognition meaningfully. But it is an open question.
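To see how deterministic chaos produces practical unpredictability, here is a toy demonstration using the logistic map, a standard chaotic system chosen purely for illustration (the brain is vastly more complex):

```python
# Deterministic chaos with the logistic map x -> r*x*(1-x).
# Both trajectories follow the exact same deterministic rule; the
# only difference is a 1e-10 perturbation in the starting state.
r = 3.9                  # parameter in the chaotic regime
x, y = 0.5, 0.5 + 1e-10  # nearly identical initial conditions

for step in range(51):
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)
# After a few dozen iterations the gap is of order 1: the two
# trajectories bear no resemblance, despite a fully deterministic rule.
```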
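And a toy model of the hardware-noise point: if each incoming spike releases neurotransmitter with some probability (the value below is made up for illustration), identical inputs already produce varying outputs.

```python
import random

# Toy stochastic synapse: each of n_spikes identical input spikes
# triggers neurotransmitter release with probability p_release, so
# the same spike train yields a different response on every run.
def synapse_response(n_spikes: int, p_release: float = 0.3) -> int:
    return sum(random.random() < p_release for _ in range(n_spikes))

print([synapse_response(100) for _ in range(5)])  # e.g. [27, 33, 30, 26, 35]
```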
The parallel to LLMs is hard to ignore. In both cases, the core system may be deterministic, but practical non-determinism emerges from noise, sensitivity to conditions, and the sheer complexity of the computation.
Does it matter?
You might wonder whether this is just a philosophical curiosity. It's not, for at least two reasons.

First, it changes how we think about LLM "creativity." When an LLM produces a surprising or novel output, we tend to attribute it to randomness in the sampling process. But if human creativity arises from similar mechanisms (noisy hardware, sensitivity to context, complex interactions between stored patterns), then the distinction between "real" creativity and "mere" stochastic variation becomes less clear. Maybe creativity was always the product of a deterministic system operating in conditions too complex to predict.

Second, it reframes the reliability problem. One of the most common criticisms of LLMs in production is their non-determinism. Businesses want consistent, reproducible outputs. But we rarely hold humans to the same standard. We accept that a person will write a report differently on Monday than on Friday. We build entire systems (reviews, approvals, second opinions) around the assumption that human output varies. Perhaps the right approach to LLM reliability is not to eliminate non-determinism, but to build similar systems of verification around it.
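One concrete shape such a verification system can take is self-consistency: sample the model several times and keep the answer it gives most often. A minimal sketch, where `ask_model` is a made-up stand-in for a real (stochastic) LLM call:

```python
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call: usually answers
    # correctly, occasionally drifts.
    return random.choices(["42", "41", "43"], weights=[0.7, 0.2, 0.1])[0]

def majority_answer(prompt: str, n_samples: int = 7) -> str:
    # Sample several times and return the most common answer.
    votes = Counter(ask_model(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(majority_answer("What is 6 * 7?"))  # almost always "42"
```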
The real question
The title asks whether humans are non-deterministic. The honest answer is: we don't know, and it might not be the right question. What we can say is that both humans and LLMs exhibit behavior that is, for all practical purposes, non-deterministic. In both cases, this may emerge from deterministic foundations interacting with noise and complexity. The difference is not that one system is random and the other isn't. It's that we've spent millennia building a story about human agency and choice, and we've spent a few years trying to figure out why GPT gives different answers to the same prompt.

Maybe the most useful insight from comparing the two is this: non-determinism is not a bug. It is what happens when a system is complex enough to be interesting. Whether that system is made of neurons or transformer layers, the same principle applies. Perfect predictability and genuine usefulness may be fundamentally at odds.

The next time an LLM surprises you with an unexpected answer, consider that the same forces that make it unpredictable are the ones that make you unpredictable. And that unpredictability might be the feature, not the flaw.
References
- Libet, B. (1983). "Time of conscious intention to act in relation to onset of cerebral activity." Brain, 106(3), 623-642. https://pmc.ncbi.nlm.nih.gov/articles/PMC4887467/
- Delnatte, C., et al. (2023). "Can neuroscience enlighten the philosophical debate about free will?" Neuropsychologia. https://www.sciencedirect.com/science/article/abs/pii/S0028393223001665
- Slater, D. (2024). "Decision Making in the Human Brain: Between Determinism and Free Will." ResearchGate. https://www.researchgate.net/publication/383422911
- Brenndoerfer, M. "Why Temperature=0 Doesn't Guarantee Determinism in LLMs." https://mbrenndoerfer.com/writing/why-llms-are-not-deterministic
- Geelen, P. "Determinism in LLMs: Order of Operations, Precision and Why It Breaks." AI Monks. https://medium.com/aimonks/determinism-in-llms-order-of-operations-precision-and-why-it-breaks-3192c69eaec4
- Šubonis, M. (2025). "Zero Temperature Randomness in LLMs." Substack. https://martynassubonis.substack.com/p/zero-temperature-randomness-in-llms
- IBM. "What is LLM Temperature?" https://www.ibm.com/think/topics/llm-temperature
- "Freewill vs Determinism in Psychology." Simply Psychology. https://www.simplypsychology.org/freewill-determinism.html