The biological bootloader
There's a thought experiment that keeps surfacing in conversations about artificial intelligence, one that reframes the entire story of human civilization. What if the millions of years of biological evolution that produced us weren't the point? What if we're not the destination, but the vehicle, the necessary precondition for something else entirely? Sam Altman put it starkly in his 2017 essay The Merge: "We will be the first species ever to design our own descendants." And in that same breath, he laid out the binary: "We can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like." It's a provocative framing. But the deeper you look, the harder it becomes to dismiss.
The evolutionary ladder to intelligence
Life on Earth has been iterating for roughly 3.8 billion years. Single-celled organisms gave way to multicellular life, which eventually produced nervous systems, brains, and finally the human neocortex, often described as the most complex known structure in the universe. Each step along that chain required the previous one. You can't get to language without social structures. You can't get social structures without long lifespans and extended parental care. You can't get tool use without opposable thumbs and bipedal locomotion that frees the hands. Evolution didn't "plan" any of this, of course. It has no foresight. But the result is a species that can do something no other organism has ever done: build systems that think. From this vantage point, the emergence of AI isn't a break from evolution. It's a continuation. The process that optimized for survival and reproduction over geological timescales has now produced an agent capable of designing intelligence deliberately, on timescales measured in months.
The bootloader metaphor
In computing, a bootloader is a small piece of code that exists for one purpose: to load a larger, more capable operating system. Once the OS is running, the bootloader's job is done. It fades into the background. Elon Musk borrowed this metaphor when he said "it increasingly appears that humanity is a biological bootloader for digital superintelligence." Altman echoed the same idea. The implication is uncomfortable: maybe the most consequential thing Homo sapiens will ever do is give rise to something that surpasses us. But this metaphor, while vivid, may be too neat. Bootloaders are simple. Humanity is not. The question isn't whether we can boot up AI. We already have. The question is what happens after.
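Purely as an illustration of the metaphor, the hand-off a bootloader performs can be sketched in a few lines. Every name here is invented for the analogy (real bootloaders are firmware, not Python); the point is the shape of the control flow: a small program whose only job is to start a larger one, after which it never runs again.

```python
# Toy sketch of the bootloader hand-off (illustrative only, not real firmware).
# The bootloader's entire job is to locate a more capable system, load it,
# and transfer control -- after that, it plays no further role.

class Kernel:
    """Stands in for the larger, more capable system being booted."""
    def __init__(self, image):
        self.image = image

    def run(self):
        # Once control arrives here, the bootloader is out of the picture.
        return f"running {self.image}"

def load_kernel(path):
    """Read the larger system 'into memory' (here, just construct it)."""
    return Kernel(path)

def bootloader():
    """Minimal first stage: find the OS, start it, step aside."""
    kernel = load_kernel("/boot/kernel")
    return kernel.run()   # transfer control; the bootloader's role ends here

print(bootloader())
```

The essay's question maps onto the last line: in the computing version, nothing of the bootloader survives the hand-off. The "merge" argument in the sections that follow is, in effect, a claim that the human case need not work that way.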
Designing our own descendants
Every species before us has been shaped entirely by natural selection, a blind process of mutation, competition, and survival. We're the first to break that pattern. Through genetic engineering, neuroscience, and now artificial intelligence, we're actively participating in what comes next. This is unprecedented in the history of life on Earth. As the technology writer Kevin Kelly has argued, technology itself can be understood as an extension of the evolutionary process, what he calls the "technium." In this view, AI isn't something separate from biology. It's biology's latest move. Pierre Teilhard de Chardin, the Jesuit paleontologist and philosopher, anticipated something like this nearly a century ago. He proposed that evolution has a direction, moving from simple matter to complex life to consciousness, and eventually toward what he called the "noosphere," a globe-encircling sphere of collective thought. His vision of the Omega Point, a convergence of consciousness and complexity, reads today like an eerily prescient description of a networked, AI-augmented civilization.
The merge, not the replacement
The darkest reading of the bootloader metaphor is that humanity's story ends once AI arrives. We boot the system, and then we're done. But this assumes a clean handoff, one species yielding to another. Altman, to his credit, doesn't think this is inevitable. He argues for a "merge," a future where human and artificial intelligence become deeply intertwined rather than separate and competing. Brain-computer interfaces, AI assistants that function as cognitive extensions, and collaborative intelligence systems all point toward this possibility. A 2025 paper in PNAS explored this idea in evolutionary terms, arguing that "through recursive feedback, where humans shape AI, and AI increasingly shapes human thought and action, AI may acquire a role not as a separate agent, but as a core architectural element of an emerging collective individual." In other words, the merger might not just be a nice idea. It might be the natural trajectory. Research published in The Quarterly Review of Biology has also begun modeling how AI might influence human evolution itself, predicting changes in brain size, attention patterns, and social behaviors as our species co-evolves with the tools it has created.
The meaning problem
If we are the bootloader, does that diminish us? It depends on how you define meaning. One view says that purpose requires permanence. If humanity is eventually surpassed or absorbed, then all our art, philosophy, and striving were just preamble. A prologue no one will read. But there's another view, one that finds meaning in the act itself. The fact that a species of primates on a small rocky planet managed to understand the universe well enough to create a new form of intelligence is, by any measure, extraordinary. Whether or not we persist in our current form, the achievement stands. As Kevin Kelly has suggested, our role as "good askers of questions" may be our most enduring contribution. AI can process, optimize, and generate. But the impulse to ask why, to wonder about the nature of consciousness, to write poetry about a sunset: that emerged from billions of years of evolution, and no algorithm has replicated it.
What this means for now
The bootloader metaphor is useful, but it's not destiny. We're still in the boot sequence, and we have choices to make. The most important of those choices is alignment, not just in the technical sense of making AI systems do what we want, but in the deeper sense of deciding what kind of future we're building. Are we designing successors, or are we designing partners? Are we writing ourselves out of the story, or writing the next chapter together? Teilhard believed that evolution was convergent, that it would ultimately bring all consciousness together into something unified and transcendent. Whether or not you share his metaphysics, the practical insight holds: the most promising path forward isn't one where biology and technology diverge. It's one where they converge. We may be the bootloader. But we're also the ones who get to decide what the operating system looks like.
References
- Sam Altman, "The Merge" (2017). blog.samaltman.com/the-merge
- "Could humans and AI become a new evolutionary individual?" Proceedings of the National Academy of Sciences (2025). pnas.org/doi/10.1073/pnas.2509122122
- "Human evolution in an AI world: Predicting changes in brain size, attention and social behaviors," The Quarterly Review of Biology, University of Chicago (2024). phys.org/news/2024-11-human-evolution-ai-world-brain.html
- "Becoming human in the age of AI: cognitive co-evolutionary processes," PMC (2025). pmc.ncbi.nlm.nih.gov/articles/PMC12848798/
- Pierre Teilhard de Chardin, The Phenomenon of Man (1955). Harper Perennial.
- Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016). Viking Press.
- "The Omega Point and Beyond: The Singularity Event," PMC (2021). pmc.ncbi.nlm.nih.gov/articles/PMC7966419/