OpenAI wrote a policy paper
OpenAI's biggest announcement the week of April 6, 2026 wasn't a model. It wasn't a product. It was a 13-page policy paper titled Industrial Policy for the Intelligence Age: Ideas to Keep People First. When the company racing hardest toward superintelligence pivots to publishing manifestos about the social contract, something interesting is happening, and it's not thought leadership. It's damage control.
The paper
The document proposes what Sam Altman describes as a New Deal-scale reimagining of the social contract. Robot taxes. A public wealth fund. A four-day workweek. Subsidized AI access for schools and libraries. Post-deployment safety audits. Portable benefits for displaced workers. The scope is genuinely ambitious, covering everything from capital gains reform to model-containment playbooks. Written by OpenAI's global affairs team, the paper divides its agenda into two halves: building an "open economy" with broad participation, and building a "resilient society" with trust infrastructure and frontier-risk management. It invokes the Progressive Era and the New Deal as historical parallels. The message is clear: what's coming is so transformative that the entire relationship between government, labor, and capital needs to be renegotiated. That framing is worth taking seriously. It's also worth asking why OpenAI is the one writing it.
The timing is not a coincidence
The paper dropped the same week as a major investigative profile in The New Yorker by Ronan Farrow and Andrew Marantz, titled "Sam Altman May Control Our Future, Can He Be Trusted?" The piece, based on 18 months of reporting and over 100 interviews, alleges a pattern of deception by Altman toward board members and executives. It details internal memos from former chief scientist Ilya Sutskever that begin with the heading: "Sam exhibits a consistent pattern of..." The first item listed is "Lying." The profile also reveals that the superalignment team, publicly promised 20% of OpenAI's compute to work on existential safety, received just 1-2% on the company's oldest hardware before being dissolved entirely. When reporters asked to speak with OpenAI researchers working on existential safety, a company representative replied: "What do you mean by 'existential safety'? That's not, like, a thing." This is the context in which a 13-page document about keeping people first arrives.
The vibes are shifting
OpenAI isn't the only one feeling the pressure. Public sentiment toward AI has been souring for months. An NBC News poll from March 2026 found that just 26% of registered voters have positive feelings about AI, compared with 46% who hold negative views. AI's net favorability ranked lower than that of every other issue polled except the Democratic Party and Iran. A Quinnipiac University poll from the same period found that 76% of Americans think businesses aren't transparent enough about their AI use, and 74% think the government isn't doing enough to regulate it. Pew Research data paints a similar picture: only 17% of the general public thinks AI will have a positive impact over the next 20 years, compared with 56% of AI experts. The gap between the people building AI and the people living with it has never been wider. The industry knows the numbers. And it's responding not with better products or more transparency, but with paper.
The pattern is industry-wide
OpenAI isn't doing this alone. Anthropic launched the Anthropic Institute and expanded its policy apparatus in the same period. Google's 2026 AI Impact Summit leaned heavily into the language of infrastructure, public utility, and national capacity. Mustafa Suleyman at Microsoft has been pushing a "humanist superintelligence" frame that tracks almost exactly onto OpenAI's containment instincts. The vocabulary across frontier labs has converged: industrial policy, public benefit, mission governance, social insurance. Two years ago, the dominant language was "responsible AI" and "ethical guidelines." That has been quietly retired. What replaced it is more ambitious, more honest about the scale of disruption, and also more useful as a framework for maintaining the political legitimacy of the companies doing the building. This is what narrative management looks like at scale: not conspiracy, not coordination, but convergence driven by shared incentives. Every major AI lab is now funding think tanks, publishing policy papers, and staffing up government affairs teams. Forbes reported in February 2026 that OpenAI and Anthropic spent more on lobbying in 2025 than ever before. A Public Citizen report found that one in four federal lobbyists now works on AI. Eight of the largest tech and AI companies spent a combined $36 million on federal lobbying in the first half of 2025 alone. When did a company ever fix public trust by publishing a white paper? Trust comes from behavior, not PDFs.
The contradictions are the point
The most revealing thing about OpenAI's policy paper isn't what it proposes. It's the gap between what it proposes and what OpenAI has actually done. Consider: the paper calls for auditing regimes, incident reporting, and mechanisms for public input. California's SB 1047 proposed exactly those things, along with third-party audits of frontier models, whistleblower protections, and a public compute cluster for researchers. OpenAI lobbied against the bill. When it passed the state legislature, OpenAI lobbied Governor Newsom to veto it. He did. The paper proposes that "frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations." This is the same structure OpenAI adopted when it converted from a nonprofit to a for-profit entity, a conversion that remains under legal challenge. Another recommendation is to "create structured ways for public input so that alignment isn't defined only by engineers or executives behind closed doors." OpenAI could implement this tomorrow. Instead, it asks governments to compel the industry to do it. As Eryk Salvaggio wrote in TechPolicy.Press, the paper is best read as a "policymercial": marketing copy dressed up as policy proposals. OpenAI has co-opted the idealism of public infrastructure while actively undermining the concrete steps that would make it real.
Public costs, private gains
Look at the economic structure the paper actually proposes. The "Right to AI" section asks policymakers to treat access to large language models the way they treated access to electricity and the internet, including subsidizing access for schools and underserved communities. Noble enough. But the paper then pivots directly to calling for governments to "accelerate the expansion of energy infrastructure required to power AI," along with investment credits, subsidies, and lighter regulation for advanced semiconductors. The proposed Public Wealth Fund would, through vague mechanisms, give "every citizen a stake in AI-driven economic growth." In practice, this would enroll every citizen in a generative AI mutual fund, linking social programs to the industry's financial success. The safety net becomes dependent on the very companies it's supposed to regulate. Meanwhile, several of the paper's "resilient society" recommendations quietly offload what should be corporate responsibility onto the public. OpenAI assigns itself the upstream work of risk testing, red-teaming, and pre-deployment evaluation, then hands everything downstream to governments. One recommendation calls for governments to "research and develop tools that protect models, detect risks, and prevent misuse." This is, in effect, nationalizing the alignment team that OpenAI already disbanded internally.
The "social contract" framing is telling
The choice to frame this as a social contract is the most revealing move of all. OpenAI wants to set the terms before governments do. By publishing first, by defining the vocabulary, by funding fellowships, handing out API credits, and hosting a Washington workshop in May, OpenAI is building not just the technology but the epistemic infrastructure that will interpret it for governments, journalists, researchers, and civil society. As one analyst put it, these documents are sincere and strategic at the same time. OpenAI is worried about real disruption. But it's also trying to define the acceptable policy perimeter before the public does it without them. Narrow frontier regulation, public-benefit corporation governance, broad access to non-frontier systems: all of this describes a world in which labs remain central but politically tolerated. The paper even closes with ecosystem-building: feedback channels, fellowships, research grants of up to $100,000, a million dollars in API credits. It reads less like a policy proposal and more like a venture capital pitch for legitimacy.
Transparency means showing your work
The fundamental problem isn't that the paper's proposals are bad. Some of them are genuinely thoughtful. A public wealth fund is worth debating. Real-time labor market indicators that automatically trigger transition support are a smart idea. Treating AI access as infrastructure rather than a luxury product is the right instinct. The problem is who's writing the settlement. A social contract drafted by the party that benefits most from its terms deserves very careful reading. Not cynicism. Not dismissal. But the kind of scrutiny you bring to any document that arrives with a pastel cover, a Washington address, and a trillion-dollar thesis about why its authors should remain at the center of the story. Transparency doesn't mean publishing your PR strategy. It means showing your work. It means honoring the safety commitments you made publicly instead of dissolving the teams responsible for them. It means not lobbying to kill the bills that propose exactly what your policy paper recommends. It means building the trust infrastructure you describe instead of asking governments to build it for you. OpenAI's biggest announcement this week was a policy paper, not a product. That tells you everything you need to know about where the real battle is being fought. It's not about models anymore. It's about the story.
References
- OpenAI, "Industrial Policy for the Intelligence Age: Ideas to Keep People First" (April 6, 2026). openai.com
- Ronan Farrow and Andrew Marantz, "Sam Altman May Control Our Future, Can He Be Trusted?" The New Yorker (April 6, 2026). newyorker.com
- NBC News Poll, March 2026. Reported by NBC News and The Verge. nbcnews.com
- Quinnipiac University Poll, "The Age of Artificial Intelligence" (March 30, 2026). poll.qu.edu
- Pew Research Center, "Key findings about how Americans view artificial intelligence" (March 12, 2026). pewresearch.org
- Eryk Salvaggio, "OpenAI's New 'Industrial Policy for the Intelligence Age' is a Policymercial," TechPolicy.Press (April 8, 2026). techpolicy.press
- Carlo Iacono, "The Social Contract OpenAI Wrote Without You," Hybrid Horizons (April 7, 2026). hybridhorizons.substack.com
- Public Citizen, "One in Four Federal Lobbyists Now Work on AI" (2025). citizen.org
- Issue One, "As Washington Debates Major Tech and AI Policy Changes, Big Tech's Lobbying is Relentless" (2025). issueone.org
- The Hill, "OpenAI's Altman releases blueprint for taxing, regulating artificial intelligence" (April 6, 2026). thehill.com