We’re in a simulation
Every so often, I stumble on a fact about the universe that makes me pause. Not because it's strange in the way quantum mechanics is strange, but because it feels engineered. Like the kind of constraint you'd find in a system designed by someone trying to keep things running smoothly.
The more I look, the more I find. And once you start seeing the pattern, it's hard to unsee.
The speed of light is suspiciously like a frame rate cap
Here's the thing about the speed of light: it's constant. Not just fast, but invariant. No matter how fast you're moving, no matter your frame of reference, the speed of light in a vacuum is always 299,792,458 meters per second. Einstein showed us this with special relativity, and over a century of experiments have confirmed it.
That's weird.
In everyday experience, speeds are relative. If you're on a train and throw a ball forward, the ball moves faster relative to the ground than to you. But light doesn't work that way. It's as if the universe has a hard limit baked into its physics, a maximum processing speed that nothing can exceed.
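You can see how sharply this departs from the train-and-ball intuition by plugging numbers into the relativistic velocity-addition formula, $u' = (u + v)/(1 + uv/c^2)$. A minimal sketch in Python; the 90%-of-light-speed inputs are just illustrative:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def add_velocities(u, v):
    """Relativistic velocity addition: combined speeds approach but never exceed c."""
    return (u + v) / (1 + (u * v) / C**2)

# A "ball" thrown forward at 0.9c from a "train" moving at 0.9c:
print(add_velocities(0.9 * C, 0.9 * C) / C)  # ~0.9945c, not 1.8c
```

No matter what you feed in, the output never crosses the cap.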
If you've ever played a video game, you know what this looks like. It looks like a system with limited computational resources. The speed of light behaves like a GPU clock speed, the upper bound on how fast information can propagate through the simulation. As Houman Owhadi, a computational mathematician at Caltech, has pointed out, if a simulation has infinite computing power, you'd never detect it. But if it has finite resources, you'd expect to find exactly these kinds of hard limits.
And it gets stranger. The speed of light isn't just a speed limit. It's woven into the fabric of spacetime itself. It governs how time dilates, how mass increases as you approach it, how energy and matter are related through $E = mc^2$. It's not a minor parameter. It's the universe's most fundamental constraint.
Quantum mechanics only renders what you're looking at
The double-slit experiment is one of the most famous and most disturbing experiments in physics. Fire individual particles (electrons, photons, even whole atoms) through two slits, and they build up an interference pattern on the other side, as if each particle traveled through both slits simultaneously as a wave. But the moment you set up a detector to record which slit each particle goes through, the interference pattern vanishes. The particle behaves like a particle.
This is the observer effect, and it's not a metaphor. It's a repeatable, well-documented phenomenon. As recently as 2025, a team at MIT led by Wolfgang Ketterle stripped the double-slit experiment down to its quantum essentials and confirmed the effect still holds.
Now think about this from a computational perspective. If you were designing a simulation and needed to save processing power, what would you do? You'd only render things when someone is looking at them. You wouldn't waste resources calculating the exact state of every particle in the universe at all times. You'd keep things in a probabilistic, low-resolution state until an observation forces a definite outcome.
That is exactly what quantum mechanics does. Particles exist as probability waves, described by the wave function, until a measurement collapses them into a single state. It's lazy evaluation. It's render-on-demand.
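To make the analogy concrete, here's a toy sketch of render-on-demand in Python. It isn't quantum mechanics (there's no wave function here), just the computational pattern: no definite state is stored until something asks for one.

```python
import random

class LazyParticle:
    """Toy render-on-demand: no definite state exists until something observes it."""

    def __init__(self, outcomes, weights):
        self.outcomes = outcomes      # possible definite states
        self.weights = weights        # their probabilities
        self._state = None            # nothing computed yet

    def observe(self):
        if self._state is None:       # first observation forces a definite outcome
            self._state = random.choices(self.outcomes, weights=self.weights)[0]
        return self._state            # later observations see the same collapsed value

p = LazyParticle(["slit A", "slit B"], [0.5, 0.5])
# No work has been done yet; "which slit?" has no stored answer...
print(p.observe())  # ...until someone looks.
```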
Dark matter and dark energy are the universe's missing source code
We can observe about 5% of the universe. The rest is split between dark matter (roughly 27%) and dark energy (roughly 68%). We've never directly detected either one. We only know they're there because without them, the math doesn't work.
Dark matter was proposed because galaxies rotate too fast. Without some unseen mass holding them together, they'd fly apart. Dark energy was proposed because the expansion of the universe is accelerating, which shouldn't happen under normal gravitational attraction.
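The rotation problem is simple enough to sketch. Treating a galaxy's visible mass as a single point (a crude simplification, with illustrative numbers), Newtonian gravity predicts orbital speeds that fall off with distance; measured rotation curves instead stay roughly flat at a couple of hundred km/s far from the center:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41     # illustrative visible mass, kg (~1e11 solar masses)
KPC = 3.086e19       # meters per kiloparsec

def keplerian_speed(r_kpc):
    """Orbital speed expected if only the visible mass were doing the pulling."""
    return math.sqrt(G * M_VISIBLE / (r_kpc * KPC)) / 1000  # km/s

for r in (5, 10, 20, 40):
    print(f"{r:>2} kpc: expected ~{keplerian_speed(r):.0f} km/s")
# Expected speeds drop as ~1/sqrt(r); observed curves stay roughly flat.
# That gap is what dark matter was invoked to close.
```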
Decades of experiments have failed to directly detect dark matter particles. The "nightmare scenario" in particle physics, where dark matter exists but interacts so weakly with normal matter that we may never catch it, looks more plausible than ever.
From a simulation perspective, dark matter and dark energy look like the parameters you can't access. They're the hidden variables in the engine, the backend infrastructure that makes the visible universe behave correctly without being directly observable. If you were running a simulation and needed certain gravitational behaviors to emerge, you might hard-code background forces rather than simulate every particle. The residents of your simulation would notice the effects but never find the source.
Laplace's demon and the determinism question
In 1814, Pierre-Simon Laplace proposed a thought experiment. Imagine an intellect vast enough to know the position and momentum of every particle in the universe. Such a being, now called Laplace's demon, could in principle calculate the entire future and past of the universe from a single snapshot. The universe would be fully deterministic.
Quantum mechanics seemed to kill this idea. The Heisenberg uncertainty principle tells us we can't simultaneously know a particle's exact position and momentum. Randomness appears to be built into the fabric of reality.
But here's the twist: if we're in a simulation, the randomness might not be truly random. It could be pseudorandom, generated by an algorithm, just like the "random" numbers in every computer program. The simulation's operators would know the seed. From inside the simulation, the randomness looks genuine. From outside, Laplace's demon is just the sysadmin checking the logs.
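This is exactly how pseudorandomness works in ordinary software. A small illustration: two generators seeded identically produce identical "random" streams, so whoever holds the seed can replay every draw.

```python
import random

a = random.Random(42)   # the "universe"
b = random.Random(42)   # the sysadmin's replay, same seed

print([round(a.random(), 6) for _ in range(3)])
print([round(b.random(), 6) for _ in range(3)])  # identical output
# From inside, each draw looks unpredictable; from outside, with the seed,
# the whole sequence is determined in advance.
```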
This is the unsettling thing about Laplace's thought experiment in the context of simulation theory. It doesn't matter that we can't access all the variables. What matters is whether something outside the system can. And if our universe is running on something, by definition, there's an outside.
String theory's extra dimensions look a lot like hidden configuration
String theory proposes that the fundamental constituents of the universe aren't point particles but tiny vibrating strings of energy. To make the math work, these strings need to vibrate in 10 or 11 dimensions, far more than the three spatial dimensions and one time dimension we experience.
Where are these extra dimensions? The standard answer is that they're "compactified," curled up so small that we can't detect them. They exist but are inaccessible to us at our scale.
In computational terms, extra hidden dimensions that affect observable behavior but can't be directly measured sound like configuration parameters. They're the settings file you can't open, the layers of the stack you don't have permissions to view. The physics we observe is the output of processes happening in dimensions we have no way to probe.
It's worth noting that string theory remains unproven. No experiment has confirmed it. But the mere fact that our best candidate for a theory of everything requires hidden dimensions that we can never access is, at the very least, philosophically suggestive.
The Planck scale is suspiciously like a pixel size
There's a minimum meaningful length in physics: the Planck length, roughly $1.6 \times 10^{-35}$ meters. Below this scale, our current physics breaks down entirely. Space and time as we understand them stop making sense.
Similarly, there's a Planck time, about $5.4 \times 10^{-44}$ seconds, the smallest meaningful unit of time.
These aren't just practical limits of measurement. They're theoretical limits. The universe appears to be discrete at the smallest scales, not infinitely continuous. Like pixels on a screen or clock ticks in a processor, there seems to be a minimum resolution below which reality doesn't go.
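Both values drop out of the same three constants: $l_P = \sqrt{\hbar G / c^3}$ and $t_P = \sqrt{\hbar G / c^5}$, which you can check directly:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
planck_time = math.sqrt(hbar * G / c**5)
print(f"Planck length: {planck_length:.2e} m")   # ~1.6e-35 m
print(f"Planck time:   {planck_time:.2e} s")     # ~5.4e-44 s
```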
Some theoretical physicists working on quantum gravity, particularly in approaches like loop quantum gravity and causal set theory, have proposed that spacetime itself is emergent rather than fundamental. What we experience as continuous space arises from discrete structures underneath, much like how the smooth image on a screen emerges from individual pixels.
The mathematical elegance problem
Here's something that doesn't get enough attention. The universe is unreasonably well described by mathematics. Physicist Eugene Wigner famously called this "the unreasonable effectiveness of mathematics in the natural sciences." Physical laws are clean, elegant equations. The fine structure constant, the cosmological constant, the ratios between the fundamental forces: all are precise numerical values that, if varied even slightly, would make complex structures (and life) impossible.
This "fine-tuning problem" has many proposed solutions, from the multiverse to the anthropic principle. But there's a simpler explanation: the values are set that way because someone set them. They're input parameters. And like any well-designed system, they're tuned for the output to work.
We're already putting brains into simulations
All of the above is speculative. Interesting, but speculative. What's harder to dismiss is that we're already doing this to other things.
In February 2026, an Australian biotech firm called Cortical Labs grew around 200,000 human neurons on a microelectrode array and taught them to play Doom. Not well, but recognizably. The team translated the game's visuals into electrical stimulation patterns and mapped different neural firing patterns to in-game actions: moving, turning, shooting. The neurons responded in real time, adapting their activity as they received feedback. The whole process took less than a week. A few years earlier, the same lab had gotten 800,000 neurons to play Pong, but that took 18 months. The timeline is collapsing.
Then, in March 2026, Eon Systems took things a step further. Building on the FlyWire connectome, a complete wiring diagram of the adult fruit fly brain with over 125,000 neurons and 50 million synaptic connections, they connected a whole-brain emulation to a virtual fly body running in MuJoCo, a physics simulation engine. The result was a closed sensorimotor loop: sensory input flows in, neural activity propagates through the complete connectome, motor commands flow out, and a physically simulated body executes the output. The virtual fly "sees" its environment, processes the signals through its emulated brain, moves its body, and receives new sensory input from the movement. Perception to action, action to perception, all running on a computer.
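As a structural sketch (and only that), the loop looks something like the code below. The function names and dynamics are placeholders invented for illustration, not Eon Systems' actual code or the FlyWire data format; the point is the shape of the cycle: world to senses to connectome to motors and back to the world.

```python
def sense(world):
    """Translate the simulated environment into sensory activity (placeholder)."""
    return {"vision": world["light"], "contact": world["contact"]}

def step_connectome(brain, sensory):
    """Propagate activity through the wiring diagram (placeholder dynamics)."""
    brain["activity"] = 0.5 * brain["activity"] + 0.5 * sensory["vision"]
    return brain

def act(brain):
    """Read motor commands out of the neural activity (placeholder mapping)."""
    return {"wing_torque": brain["activity"]}

def step_physics(world, motors):
    """Advance the simulated body and environment (stand-in for a physics engine)."""
    world["light"] += 0.01 * motors["wing_torque"]
    return world

world = {"light": 1.0, "contact": 0.0}
brain = {"activity": 0.0}
for _ in range(100):   # perception -> brain -> action -> new perception, repeat
    brain = step_connectome(brain, sense(world))
    world = step_physics(world, act(brain))
```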
This isn't a metaphor anymore. We are, right now, taking biological neural architectures and placing them inside digital environments. We're giving brains bodies that exist only in code. The fly doesn't know it's in MuJoCo. The neurons on the chip don't know they're playing a game from 1993.
If we can do this to a fly brain in 2026, what does a civilization with a thousand more years of compute do to a human one?
The counterarguments matter too
It's worth taking the strongest objections seriously.
In November 2025, researchers at UBC Okanagan published a mathematical argument using Gödel's incompleteness theorem, claiming to demonstrate that the universe cannot be simulated. Their argument hinges on the idea that reality requires "non-algorithmic understanding" that no computation can replicate.
It's a meaningful challenge. But it also assumes we understand the computational limits of whatever is running the simulation. Gödel's theorem applies to formal systems, but the simulation's substrate might not be bound by the same logic as the systems within it. We'd be judging the hardware by the rules of the software.
Franco Vazza, an astrophysicist at the University of Bologna, has also argued that the sheer computational requirements make simulation nearly impossible. But again, this assumes the simulation's creators are constrained by physics as we know it.
Nick Bostrom's original 2003 trilemma remains the cleanest framing. At least one of the following must be true: (1) civilizations almost always go extinct before becoming technologically capable of running simulations, (2) technologically mature civilizations have essentially zero interest in running simulations, or (3) we are almost certainly in a simulation. You only need to reject the first two to arrive at the third.
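The arithmetic behind the trilemma is short. In rough form (simplifying the expression in Bostrom's paper, with made-up inputs), the fraction of human-like observers who are simulated depends on how many civilizations reach a simulation-capable stage and how many ancestor-simulations each one runs:

```python
def simulated_fraction(f_capable, sims_per_civ):
    """Rough form of Bostrom's expression: each simulation is assumed to contain
    about as many observers as one unsimulated history."""
    x = f_capable * sims_per_civ
    return x / (x + 1)

# Even pessimistic inputs push the fraction toward 1:
print(simulated_fraction(0.01, 1_000_000))   # ~0.9999
print(simulated_fraction(0.0, 1_000_000))    # 0.0: the first horn of the trilemma
```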
So what if it's true?
If we are in a simulation, what changes? Practically, nothing. The coffee still tastes the same. Your relationships are still real to you. The suffering is still suffering.
But philosophically, everything shifts. It means there's a deeper layer to reality. It means the laws of physics aren't fundamental truths but design choices. It means the questions "why is there something rather than nothing?" and "why these constants and not others?" have an answer, even if we can't access it from in here.
The speed of light is the frame rate. Quantum mechanics is lazy rendering. Dark matter is the backend. The Planck scale is the pixel grid. And Laplace's demon is just someone with root access.
I'm not saying we can prove any of this. I'm saying the metaphors are getting uncomfortably precise.
References
- Bostrom, N., "Are You Living in a Computer Simulation?," Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255. https://simulation-argument.com/simulation.pdf
- Owhadi, H., quoted in "Do We Live in a Simulation? Chances Are about 50-50," Scientific American (2020). https://www.scientificamerican.com/article/do-we-live-in-a-simulation-chances-are-about-50-50/
- MIT News, "Famous double-slit experiment holds up when stripped to its quantum essentials" (July 2025). https://news.mit.edu/2025/famous-double-slit-experiment-holds-when-stripped-to-quantum-essentials-0728
- NASA Science, "Dark Matter." https://science.nasa.gov/dark-matter/
- Vopson, M., cited in "A Scientist Says He Has the Evidence That We Live in a Simulation," Popular Mechanics. https://www.popularmechanics.com/science/environment/a70594935/simulation-theory-new-physics-law-of-infodynamics/
- Laplace, P.-S., A Philosophical Essay on Probabilities (1814). https://en.wikipedia.org/wiki/Laplace%27s_demon
- Wigner, E., "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," Communications on Pure and Applied Mathematics (1960), Vol. 13, No. 1.
- UBC Okanagan, "Physicists prove the Universe isn't a simulation" (November 2025). https://www.sciencedaily.com/releases/2025/11/251110021052.htm
- Vazza, F., cited in "Are we living in a simulation? This experiment could tell us," New Scientist (2025). https://www.newscientist.com/article/2503844-are-we-living-in-a-simulation-this-experiment-could-tell-us/
- Siegel, E., "Observing The Universe Really Does Change The Outcome, And This Experiment Shows How," Forbes (2020). https://www.forbes.com/sites/startswithabang/2020/05/26/observing-the-universe-really-does-change-the-outcome-and-this-experiment-shows-how/
- Wilkins, A., "Human brain cells on a chip learned to play Doom in a week," New Scientist (February 2026). https://www.newscientist.com/article/2517389-human-brain-cells-on-a-chip-learned-to-play-doom-in-a-week/
- Eon Systems, "How the Eon Team Produced a Virtual Embodied Fly" (March 2026). https://eon.systems/updates/embodied-brain-emulation
- Sanders, R., "Researchers simulate an entire fly brain on a laptop. Is a human brain next?," Berkeley News (October 2024). https://news.berkeley.edu/2024/10/02/researchers-simulate-an-entire-fly-brain-on-a-laptop-is-a-human-brain-next/