Nobody trusts Sam Altman
The New Yorker just published a 10,000-word investigation into Sam Altman, the CEO steering the most powerful AI company on Earth. Written by Ronan Farrow and Andrew Marantz, based on 18 months of reporting, over 100 interviews, and never-before-disclosed internal documents, the piece asks a question that has migrated from Twitter threads to the pages of serious journalism: can the man building AGI be trusted? The fact that the question is being asked in the mainstream press is the story. Not because the answer is obvious, but because the pattern behind it is.
The credibility compound interest problem
Trust isn't binary. It compounds, positively or negatively, with every promise kept or broken. The New Yorker piece documents a pattern of grandiose claims from Altman that didn't materialize. Quantum computers from Rigetti. Fusion reactors on aggressive timelines. Promises of compute allocation for safety teams that never arrived. Each one, taken alone, might be chalked up to optimism or shifting priorities. Taken together, they describe something more structural: a leader whose public commitments routinely diverge from internal reality.

This is what you might call the credibility compound interest problem. Every unfulfilled promise doesn't just disappear. It erodes the credibility of the next one. And when the stakes are as high as building artificial general intelligence, that erosion matters enormously.

Ilya Sutskever, OpenAI's former chief scientist and co-founder, captured this in secret memos to the board before Altman's brief firing in November 2023. According to the investigation, Sutskever's documents included dozens of pages of Slack messages and internal testimony outlining what he described as a "consistent pattern" of misleading statements. The first item on one of his lists was simply: "Lying." Sutskever wrote that Altman shouldn't "have his finger on the button."
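To see why "compound" is the right word, make the toy math explicit: if each broken promise costs a fraction of whatever credibility remains, the damage is multiplicative rather than additive. Here is a minimal sketch in Python; the 15% penalty per broken promise is an invented number purely for illustration, not anything the reporting quantifies.

```python
# Toy model of credibility as compound interest: each broken promise
# multiplies remaining trust by (1 - penalty) instead of subtracting a
# fixed amount, so losses compound. The penalty rate is invented for
# illustration only; no source quantifies trust this way.

def credibility_after(broken_promises: int, penalty: float = 0.15,
                      starting_trust: float = 1.0) -> float:
    """Trust remaining after n broken promises under multiplicative decay."""
    return starting_trust * (1.0 - penalty) ** broken_promises

for n in range(0, 11, 2):
    print(f"{n:2d} broken promises -> {credibility_after(n):.2f} trust remaining")

# Output:
#  0 broken promises -> 1.00 trust remaining
#  2 broken promises -> 0.72 trust remaining
#  4 broken promises -> 0.52 trust remaining
#  6 broken promises -> 0.38 trust remaining
#  8 broken promises -> 0.27 trust remaining
# 10 broken promises -> 0.20 trust remaining
```

The structural point survives any particular choice of rate: under multiplicative erosion, the tenth promise lands on an audience holding a fifth of the trust the first one had, which is why each new commitment gets discounted more steeply than the last.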
The governance that was supposed to prevent this
OpenAI was founded in 2015 as a nonprofit with an explicit mission: ensure that artificial general intelligence benefits all of humanity. The structure was deliberately designed to prevent commercial incentives from overriding safety considerations. A board of directors held ultimate authority, unconstrained by shareholder pressure.

That structure has been systematically dismantled. In 2019, OpenAI created a "capped-profit" subsidiary, with investors told to treat their contributions "in the spirit of donations." Microsoft invested $1 billion. By October 2025, the company had completed a full conversion to a for-profit Public Benefit Corporation. The nonprofit, now called the OpenAI Foundation, retains a 26% equity stake and board appointment rights, but the practical constraints it was designed to impose have been loosened at every stage.

The board members who tried to enforce those constraints, the ones who fired Altman in November 2023, were themselves ousted within days. They were replaced by Altman allies, including economist Larry Summers and former Facebook CTO Bret Taylor. The outside firm hired to review the allegations against Altman, the same firm that investigated Enron and WorldCom, produced no written report; its findings were limited to oral briefings. The governance mechanism designed to prevent a single-point-of-failure leadership structure failed to prevent exactly that.
The safety promises that evaporated
In July 2023, OpenAI announced its Superalignment team with fanfare. The team would focus on controlling AI systems smarter than humans, and OpenAI pledged 20% of its computing power to the initiative over four years. People who worked on the team told reporters that the actual allocation was 1-2%, running on the oldest hardware.

By May 2024, both team leaders had departed. Co-lead Jan Leike wrote publicly that "safety culture and processes have taken a backseat to shiny products." The team was dissolved without completing its mission. When reporters asked to interview OpenAI researchers working on existential safety, a company representative replied: "What do you mean by 'existential safety'? That's not, like, a thing."

By February 2026, OpenAI had also disbanded its mission alignment team, the group specifically chartered to promote the company's stated mission that AGI should benefit all of humanity. Its leader was given the newly created title of "chief futurist," and the team's seven members were scattered across other divisions.

This is the pattern that matters: public commitments to safety, followed by quiet resource starvation, followed by dissolution, followed by rebranding.
The Pentagon deal and the speed of compromise
In late February 2026, the pattern played out in real time. When the Pentagon publicly reprimanded Anthropic for refusing to allow its AI to be used for mass surveillance and autonomous weapons, Altman initially voiced support for Anthropic's position. Days later, OpenAI struck its own deal with the Department of Defense to deploy AI in classified military settings. Altman himself described the negotiations as "definitely rushed."

He later told employees that OpenAI doesn't control how the Pentagon uses its products. "You do not get to make operational decisions," he said. "Maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."

OpenAI's official deal memo states that the Pentagon "agrees" with principles against mass domestic surveillance and autonomous weapons. But the safeguards rely on contractual language and self-enforcement, not technical constraints. As MIT Technology Review put it, OpenAI's "compromise" with the Pentagon was exactly what Anthropic had feared.

The speed of the reversal is what's instructive. It took less than a week to go from publicly supporting a rival's principled stand to undercutting it.
The founder mythology arc
There is a familiar trajectory in tech. A founder builds something transformative. The mythology grows. The founder becomes synonymous with the mission. And then, gradually or suddenly, scrutiny reveals a gap between the public narrative and the private reality.

Mark Zuckerberg promised to connect the world, then presided over platforms that amplified misinformation and eroded democratic processes; internal documents showed the company knew about harms it publicly denied. Elon Musk positioned himself as humanity's champion through Tesla and SpaceX, then used his platforms and wealth to insert himself into political power structures in ways that serve his business interests. Sam Bankman-Fried built an empire on the promise of effective altruism, then ran a fraud.

Altman's arc shares DNA with these stories, though it differs in important ways. He hasn't been accused of fraud. The technology OpenAI is building is genuinely powerful and useful. ChatGPT reaching 900 million weekly active users isn't hype; it's a product that works. The company's $730 billion valuation reflects real revenue growth, from $2 billion to $20 billion in annual recurring revenue between 2024 and 2026.

But the pattern of founder mythology collapsing under scrutiny isn't really about whether the product works. It's about whether the person making deployment decisions can be trusted to weigh societal consequences alongside commercial ones. And the evidence on that question, for Altman, keeps accumulating on the wrong side of the ledger.
Does it matter if we trust him?
There's an uncomfortable counterargument: does it matter if we trust Altman, as long as the technology works? Plenty of untrustworthy people have built useful things.

The answer is yes, and it's not even close. AI deployment decisions have societal consequences that most technology decisions don't. When OpenAI decides how to handle military contracts, what safety testing to require before releases, how to balance capability advancement against alignment research, or when to declare that AGI has been achieved, those decisions ripple outward in ways that affect everyone.

The for-profit conversion makes this more acute, not less. OpenAI is now on a path to an IPO, potentially as soon as late 2026. Its CFO, Sarah Friar, has reportedly raised concerns about whether the timeline is too aggressive. Once OpenAI is public, the pressure to prioritize shareholder returns over safety considerations will intensify. The nonprofit foundation's 26% stake is meant to be a counterweight, but critics have already pointed out that its board is effectively controlled by for-profit interests.

The original OpenAI charter was designed to be a structural answer to the question "what if you can't trust the person in charge?" That answer has been dismantled. What's left is the question itself, with no structural backstop.
The gap where trust dies
The technology is real. GPT-5 exists. Autonomous coding agents exist. The products genuinely improve people's work and lives. None of that is hype.

But the promises around safety, governance, and mission alignment have been systematically broken. The 20% compute pledge for safety research that became 1-2%. The nonprofit structure that became a for-profit. The board independence that became founder control. The military red lines that lasted less than a week. The gap between what's real and what's promised is where trust dies. And that gap, at OpenAI, is widening.

Multiple Biden administration officials came away from meetings with Altman feeling "nervous," and these were people who wanted to be supportive. When the people inclined to give you the benefit of the doubt start expressing concern, the credibility deficit has reached a critical threshold.

The question isn't whether Sam Altman is a bad person. It's whether a single individual, any individual, should hold this much influence over a technology this consequential, with this little structural accountability. The New Yorker didn't answer that question definitively. But 10,000 words of reporting, backed by internal documents and over 100 sources, make the case that the current arrangement isn't working. And if the governance designed to check that power has already failed, the uncomfortable follow-up is: what exactly is the plan now?
References
- Sam Altman May Control Our Future, Can He Be Trusted? (The New Yorker, April 2026)
- Sam Altman responds to 'incendiary' New Yorker article after attack on his home (TechCrunch, April 2026)
- OpenAI dissolves team focused on long-term AI risks (CNBC, May 2024)
- OpenAI's Long-Term AI Risk Team Has Disbanded (WIRED, May 2024)
- Exclusive: OpenAI disbanded its mission alignment team (Platformer, February 2026)
- Evolving OpenAI's structure (OpenAI, May 2025)
- OpenAI completes conversion to for-profit business (The Guardian, October 2025)
- Sam Altman admits OpenAI can't control Pentagon's use of AI (The Guardian, March 2026)
- OpenAI's 'compromise' with the Pentagon is what Anthropic feared (MIT Technology Review, March 2026)
- OpenAI CEO Sam Altman defends decision to strike Pentagon deal (Fortune, March 2026)
- OpenAI's ambitions are easy to see. So are the doubts about its CEO. (Business Insider, April 2026)
- Removal of Sam Altman from OpenAI (Wikipedia)