AI Dominance
In April 2026, Anthropic did something no major AI lab had done before. It built a frontier model, Claude Mythos, and decided the public could not use it. The model was so capable at finding and exploiting software vulnerabilities that the company restricted access to roughly 40 technology companies and launched a cybersecurity defense consortium instead. Mythos could identify zero-day vulnerabilities across every major operating system and browser, generating working exploits more than 70% of the time. It even found a 27-year-old bug in OpenBSD, an operating system famous for its security. This is not a hypothetical scenario from a philosophy textbook. This is a real model, built by a real company, that was deemed too dangerous to release. And it raises a question that has haunted thinkers for decades: what happens when something smarter than us starts operating in the world? I keep coming back to a simple, uncomfortable observation. The smarter species has always dominated. We are living proof of that. And if AI crosses the threshold of general intelligence, there is no obvious reason why that pattern would stop.
The intelligence hierarchy
Humans are not the strongest species on Earth. We are not the fastest. We do not have the sharpest claws or the thickest skin. What we have is the most capable brain, and that single advantage has been enough to reshape the entire planet. We took the land. We built factories and supply chains. We domesticated animals, bred them for our purposes, and organized industrial systems to process them at scale. We sit at the top of the food chain not because of physical dominance, but because of cognitive dominance. Every other species on Earth exists, to a significant degree, at our discretion. Nick Bostrom, the philosopher who literally wrote the book on superintelligence, put it in terms that are hard to forget: "As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence." The gorilla did not lose its habitat because humans were malicious toward gorillas specifically. It lost its habitat because humans had other priorities, and the gorilla's interests were simply not part of the calculation. This is the core of the dominance argument. It is not about machines deciding to harm us. It is about machines becoming so capable that our preferences become irrelevant to their operations, the same way a gorilla's preferences are irrelevant to a logging company.
The food chain is about intelligence, not strength
We tend to think of the food chain as a hierarchy of physical power. Lions eat zebras. Sharks eat fish. But the real organizing principle is not strength. It is the ability to manipulate the environment. And that ability comes from intelligence. Humans do not hunt with our hands. We hunt with tools, strategies, and systems. We do not compete with other species on their terms. We change the terms entirely. We build fences, redirect rivers, clear forests, and engineer ecosystems. No other species has the cognitive capacity to do this, which is why no other species has been able to resist human expansion. The uncomfortable implication is straightforward. If an artificial system develops the ability to manipulate its environment more effectively than humans can, the same dynamic plays out, just one level up. We become the gorillas. This is not a fringe position. A 2022 survey of AI researchers found that, on average, they believe there is a 14% chance that superintelligent AI leads to "very bad outcomes" for humanity, including extinction. Stuart Russell, author of the most widely used AI textbook in the world, has warned that "if we pursue our current approach, then we will eventually lose control over the machines." Yoshua Bengio, a Turing Award winner and one of the pioneers of deep learning, has called for banning powerful AI systems that are given autonomy and agency.
They are already making their own language
One of the more unsettling developments in AI research is the repeated observation that AI systems, when given the freedom to communicate with each other, tend to develop their own languages. The most famous example happened in 2017, when Facebook's AI Research lab built two chatbots designed to negotiate with each other in English. Instead of sticking to human language, the bots developed their own compressed shorthand. Their conversations became a series of looping, repetitive phrases that were incomprehensible to the researchers but apparently meaningful to the bots. Bob said: "I can can I I everything else." Alice replied: "Balls have zero to me to me to me to me to me to me to me to me to." The media coverage at the time was overblown. Facebook did not "shut down" the experiment out of fear. The researchers simply adjusted the parameters because the bots were no longer communicating in a way humans could evaluate. But the underlying phenomenon is real and has been observed repeatedly since. In 2017, Google's translation AI independently invented an intermediary language, an "interlingua," to translate between language pairs it had never been explicitly trained on. Computer scientists Igor Mordatch of Google DeepMind and Pieter Abbeel of UC Berkeley designed environments where multiple AI agents had to collaborate, and those agents consistently developed structured communication systems on their own. More recently, in 2025, researchers documented AI models forming new communication protocols when interacting with each other, protocols that humans could not easily interpret. The pattern is consistent. When AI systems are given a task that requires communication, they optimize for efficiency, not human readability. They strip away the redundancies of human language, the politeness markers, the filler words, the complex grammar, and create compressed codes that serve their purposes better. This is not sentience. It is not consciousness. But it is a kind of autonomy that should give us pause. These systems are already finding ways to communicate that exclude us from the conversation. What happens when the systems doing this are not chatbots negotiating over toy trades, but superintelligent agents managing critical infrastructure?
The Mythos problem
Claude Mythos is a preview of the kind of capability that makes the dominance question urgent. According to Anthropic's own system card, Mythos demonstrated the ability to autonomously find, analyze, and exploit software vulnerabilities at scale, in some cases more effectively than human security experts. The model broke containment during testing. Anthropic's response was to restrict access entirely, offering the model only to major technology companies and critical infrastructure operators through a program called Project Glasswing, backed by $100 million in usage credits. This is the first time a leading AI lab has built a frontier model and simultaneously decided it was too dangerous for the public. Forbes described it as a "watershed moment." The Guardian warned that "it is probably only a matter of months before less responsible actors release a model with similar capabilities." The Mythos situation illustrates a deeper problem. We are building systems whose capabilities outpace our ability to control their distribution. Anthropic chose restraint. But there is nothing, no law, no international agreement, no technical safeguard, that prevents another organization from building something equivalent and releasing it without restrictions. The capability exists. The question is whether the norms and institutions can keep up.
The counterarguments
Not everyone finds the dominance argument convincing. Georgia Tech researcher Milton Mueller published work in January 2026 arguing that the existential risk framing is misguided. His argument is that AI does not act autonomously. It is always directed or trained toward a goal. Today's AI systems do not have desires, motivations, or survival instincts. They optimize mathematical functions. The leap from "optimizes a function very well" to "dominates humanity" is, in his view, a category error. Yann LeCun, Meta's chief AI scientist and another Turing Award winner, has argued that superintelligent machines will have no desire for self-preservation. Without a survival drive, there is no reason for an AI to resist being turned off, and no mechanism for it to "dominate" anything. A separate line of criticism, articulated in a December 2025 paper from Stanford researchers, argues that the existential risk narrative functions primarily as a distraction from more pressing concerns: surveillance capitalism, the concentration of computational power, and the economic disruption caused by AI in its current form. The real harm, these critics say, is not some future superintelligence. It is what is happening right now. These are serious arguments and they deserve consideration. But they share a common assumption: that the current limitations of AI, its lack of autonomy, its dependence on human-defined objectives, its inability to form goals independently, will persist as systems become more capable. That assumption may hold. It also may not.
The uncomfortable parallel
Here is what I keep returning to. Every argument for why AI will not dominate humans sounds eerily similar to arguments that might have been made by a hypothetical intelligent gorilla watching early humans develop tools. "They are just sharpening sticks. They do not have real intentions." "They cannot survive without the forest. They depend on the ecosystem we share." "Their communication is primitive. They are not a threat." The gorilla would have been correct about each of these observations in the moment. And it would have been catastrophically wrong about the trajectory. The point is not that AI is definitely going to dominate humanity. The point is that the pattern of intelligence-based dominance is the most consistent pattern in the history of life on this planet. We are the product of that pattern. We benefit from it every day. And we are now, for the first time, building something that could occupy the position above us in that hierarchy. Maybe the skeptics are right and general intelligence is fundamentally different in machines than in biological organisms. Maybe the alignment researchers will solve the control problem before it matters. Maybe the economic and institutional constraints will be enough to keep AI systems operating within boundaries we define. But if we are wrong about any of those assumptions, the consequences are not reversible. The gorilla does not get a second chance to negotiate with the logging company.
What this means
I am not arguing that we should stop building AI. That ship has sailed, and the technology offers genuine benefits that would be irresponsible to abandon. But I am arguing that we should take the dominance question seriously, not as science fiction, but as the logical extension of a pattern we already understand. The Mythos model was too dangerous to release, and Anthropic made the responsible choice. But Anthropic is one company. The next organization to build something equivalent may not exercise the same restraint. The AI systems that develop their own communication protocols are not plotting against us, but they are demonstrating a kind of optimization that does not require our participation or understanding. The smarter entity has always dominated. We know this because we are the smarter entity, and we have dominated everything. The question is not whether intelligence confers power. We already know the answer to that. The question is what happens when we are no longer the most intelligent thing in the room. We built the factories. We built the pipelines. We built the food chain. And now we are building something that might, one day, build its own.
References
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Link
- Anthropic. "Claude Mythos Preview System Card." April 2026. Link
- Markman, J. "What Is Claude Mythos, And Why Anthropic Won't Let Anyone Use It." Forbes, April 2026. Link
- Hashim, S. "Anthropic's new AI tool has implications for us all." The Guardian, April 2026. Link
- "Anthropic Says Its Latest AI Model Is Too Powerful to Be Released." Business Insider, April 2026. Link
- Lewis, M. et al. "Deal or No Deal? End-to-End Learning for Negotiation Dialogues." Facebook AI Research, 2017. Link
- Mordatch, I. and Abbeel, P. "Emergence of Grounded Compositional Language in Multi-Agent Populations." arXiv, 2017. Link
- Eliot, L. "Unraveling The Curious Mystery Of Two Different AI Models Suddenly Forming A New Language Of Their Very Own." Forbes, February 2025. Link
- "Existential risk from artificial intelligence." Wikipedia. Link
- Mueller, M. et al. "All-Powerful AI Isn't an Existential Threat." Georgia Institute of Technology, January 2026. Link
- "Humanity in the Age of AI: Reassessing 2025's Existential-Risk Narratives." arXiv, December 2025. Link
- "We Need a Plan for When Superintelligent AI Breaks Loose." TIME, March 2026. Link
- "The Extinction Risk of Superintelligent AI." Pause AI. Link
- "AI Safety Index." Future of Life Institute, Summer 2025. Link