The trial that decides what open means
On April 28, 2026, Elon Musk took the stand in a federal courthouse in Oakland, California, and told the jury, "It's not okay to steal a charity." Across the room sat Sam Altman, CEO of the company the two men co-founded a decade earlier. The lawsuit seeks $150 billion in damages. But the real stakes aren't financial. They're definitional.

This trial is nominally about whether OpenAI breached its founding mission when it pivoted from a nonprofit to a for-profit juggernaut. But underneath the legal arguments and personal grudges lies a much bigger question: what does "open" actually mean, and can it survive contact with the economics of building frontier AI?
The founding promise
When OpenAI launched in December 2015, its announcement was almost quaint in its idealism. "OpenAI is a non-profit artificial intelligence research company," the introduction read. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The founding team included Musk, Altman, Greg Brockman, and Ilya Sutskever, among others. Musk contributed around $38 million over several years. The vision was clear: AI should be "as broadly and evenly distributed as possible."

The "open" in OpenAI wasn't just a name. It was supposed to be a commitment. The charter, published in 2018, doubled down on this. OpenAI's mission was to "ensure that artificial general intelligence benefits all of humanity." It would prioritize safety over profit. It would share research broadly. The structure, a nonprofit, was chosen deliberately to keep financial incentives from corrupting the mission.
The pivot
Then reality showed up with a bill. In March 2019, OpenAI created a "capped-profit" subsidiary called OpenAI LP. Returns for investors would be capped at 100 times their investment. The operating agreement even warned that "the Company might never turn a profit" and encouraged potential investors to "think of investments in the spirit of donations." Microsoft invested $1 billion shortly after.

Then came GPT-3, then ChatGPT in late 2022, and suddenly the company that asked people to treat investments like donations was worth hundreds of billions. By October 2025, OpenAI had completed a full restructuring. The for-profit subsidiary became a public benefit corporation called OpenAI Group PBC. Microsoft took a 27% stake valued at approximately $135 billion. The nonprofit, now called the OpenAI Foundation, retained oversight and a $130 billion stake, but the profit cap was gone. The company was preparing for an IPO that could value it at $1 trillion.

The organization founded to be "unconstrained by a need to generate financial return" had become one of the most valuable companies in the world.
Two stories, both convenient
In the courtroom, the jury is hearing two competing narratives. Neither is entirely convincing.

Musk's legal team, led by attorney Steven Molo, frames the case as straightforward theft. "No one should be allowed to steal a charity," Molo told the jury. The argument is that Altman and Brockman exploited Musk's donations and the nonprofit structure to build something they then redirected toward personal enrichment. Musk wants Altman removed as CEO and from the nonprofit board, and wants the $150 billion in damages directed to OpenAI's charitable arm.

OpenAI's lead counsel, William Savitt, tells a different story. Musk left in 2018 after a power struggle. He later launched xAI, a direct competitor. The lawsuit, Savitt argues, is "sour grapes" from someone who "didn't get his way," not a principled stand for charity.

Both narratives have obvious holes. Musk's concern for openness is hard to reconcile with his own commercial AI venture. Altman's insistence that the mission required becoming a profit engine conveniently aligns with making himself spectacularly wealthy. The jury will decide the legal question, but neither side emerges as a selfless champion of the public good.
The real question the trial can't answer
The more interesting question isn't whether Altman breached a contract. It's whether the contract was ever realistic.

Building frontier AI models requires staggering amounts of compute. OpenAI's partnership with Microsoft alone involved $250 billion in Azure cloud commitments. The Stargate data center project with Oracle and SoftBank operates at a scale that would be absurd for a nonprofit. No foundation, no matter how well funded, can compete in a capital arms race measured in hundreds of billions.

So the uncomfortable truth is this: Altman might be right that you can't build frontier AI on donations. But Musk might also be right that the pivot betrayed the people who funded the original vision. Both things can be true at the same time.

This is the tension at the heart of so many "open" projects. The word carries an implicit promise: that something will remain accessible, shared, and unowned. But the economics of scale have a way of making that promise impossible to keep.
"Open" is a spectrum, and always has been
OpenAI's identity crisis isn't unique. The tech industry has been arguing about what "open" means for decades.

Android is technically open source through the Android Open Source Project (AOSP). Anyone can fork the code and build their own operating system. But the version of Android that runs on 3 billion devices is tightly controlled by Google through its proprietary Google Mobile Services, the Play Store, and a web of licensing agreements that effectively lock manufacturers into Google's ecosystem. In 2018, the EU fined Google €4.34 billion (about $5 billion) for exactly this kind of control. And Google has been tightening restrictions further, requiring developer registration and government ID for app distribution starting in 2026. Is Android open? Technically yes. Practically, it depends on who you ask.

The Open Source Initiative released its Open Source AI Definition (OSAID) 1.0 in October 2024, attempting to bring clarity. Under that definition, an open source AI system must allow users to use, study, modify, and share it freely. By this standard, most "open" AI models, including Meta's Llama, don't fully qualify. They release weights but not training data. They're open-weight, not open-source. The distinction matters, but the industry uses the terms interchangeably anyway.

Even Wikipedia, perhaps the purest example of an open knowledge project, operates under specific constraints. Its openness works because text is cheap to host and volunteers provide the labor. The model doesn't translate to domains where the marginal cost of participation is millions of dollars in GPU time.
The mythology of Silicon Valley altruism
What the trial reveals most clearly isn't a legal answer. It's a cultural one. Silicon Valley has a long tradition of wrapping commercial ambitions in the language of public good. "Connecting the world," "organizing the world's information," "ensuring AI benefits all of humanity." These mission statements serve a dual purpose: they attract idealistic talent and public goodwill, and they provide moral cover for the accumulation of enormous power and wealth.

The OpenAI story follows the pattern perfectly. Start with a mission that sounds like a public service. Attract funding and talent on those terms. Discover that the mission requires scale. Discover that scale requires capital. Discover that capital requires returns. Arrive at a for-profit structure that looks exactly like every other tech company, but with a nonprofit somewhere in the org chart as a fig leaf.

This isn't to say the people involved are cynical. Many of them probably believe in the mission. But the structural incentives always point in the same direction. When the choice is between staying small and open or becoming large and profitable, the money wins. Every time.
What the verdict won't settle
The jury's decision, expected by mid-May, will resolve the legal claims. Did OpenAI breach its charitable trust? Were Altman and Brockman unjustly enriched at Musk's expense? These are answerable questions with specific legal standards.

But the trial won't settle the deeper issue. It won't tell us whether "open" can mean anything durable in a field where the cost of participation is measured in billions. It won't resolve the tension between building powerful technology and keeping it accessible. And it won't answer the question that hangs over the entire AI industry: when a company says its mission is to benefit humanity, should we believe it? The answer, probably, is to watch what it does with the money.

The Musk v. Altman trial is a spectacle: two billionaires arguing over who cares more about the common good. But strip away the egos and the legal posturing, and what remains is a genuinely important question about how we govern the most powerful technology ever built. The word "open" was supposed to be part of the answer. Whether it still can be is what's really on trial.