OpenAI wants a safety net
On April 6, 2026, OpenAI released a 13-page document titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First." It's a sweeping set of proposals for how governments should respond to AI-driven economic disruption, covering everything from robot taxes to public wealth funds to four-day workweeks. The document is thoughtful. It's also deeply strange. The company spending billions to build systems that automate human work is now telling governments they need to prepare for mass displacement. It reads like the arsonist selling fire insurance.
What OpenAI actually proposed
The document covers a lot of ground, but a few proposals stand out.

First, tax reform. OpenAI argues that as AI boosts corporate profits and capital gains, it will simultaneously erode income and payroll taxes, the revenue streams that fund Social Security, Medicaid, SNAP benefits, and housing assistance. Their solution: shift the tax base toward corporate income and capital gains, and introduce taxes on automated labor, sometimes called "robot taxes." Bill Gates floated this same idea back in 2017.

Second, a public wealth fund. The government would create a nationally managed fund, seeded in part by contributions from AI companies, designed to distribute returns directly to citizens. Think of it as a sovereign wealth fund for the AI age, where the profits generated by automation flow back to the public rather than concentrating in a handful of firms.

Third, workforce transition programs. OpenAI wants employers to experiment with four-day workweeks with no loss in pay, funded by productivity gains from AI tools. They also propose "benefits bonuses" tied to those productivity gains, including increased retirement contributions, expanded healthcare coverage, and subsidized child and eldercare.

Finally, infrastructure. The document calls for accelerated expansion of the US electrical grid to support the massive energy demands of AI data centers, alongside broader access to AI compute for startups and researchers.

OpenAI frames all of this as a starting point for public debate, not a finished prescription. CEO Sam Altman published the document alongside an interview with Axios, positioning it as an invitation for conversation about what comes next.
The pattern we've seen before
There's a well-worn playbook here, and it predates AI by decades. The tobacco industry spent the second half of the 20th century funding research, sponsoring health campaigns, and proposing voluntary self-regulation, all while selling a product they knew was killing people. Internal documents later revealed that the Tobacco Industry Research Committee, established in 1954, was designed not to find truth but to manufacture doubt. The goal was to shape the narrative, control the terms of debate, and delay meaningful regulation. Fossil fuel companies followed a similar script. Fund climate research, acknowledge the problem in broad terms, propose industry-led solutions, and quietly lobby against binding legislation. OpenAI's document doesn't map perfectly onto these examples. They're not denying that AI causes disruption; they're explicitly acknowledging it. But the structural dynamic is the same: the entity creating the problem is positioning itself as the entity best suited to define the solution. When you write the rules, you get to make sure they work for you.
The SB 1047 problem
This is where the document gets hard to take at face value. In 2024, California introduced SB 1047, a bill that would have required third-party audits of frontier AI models, incident reporting, safety protocols before deployment, whistleblower protections, and a public compute cluster for researchers and startups. OpenAI lobbied against it. They argued it would hurt innovation, drive companies out of California, and that regulation should come from the federal level, not from states. The bill was ultimately vetoed by Governor Newsom.

Now look at what the "Industrial Policy for the Intelligence Age" proposes: auditing regimes, incident reporting, mechanisms for public input, and broader access to AI infrastructure. As Tech Policy Press pointed out, these are functionally the same concepts OpenAI helped kill when they were proposed by a state legislator. The difference? SB 1047 would have imposed binding requirements on OpenAI. The new document proposes voluntary frameworks that OpenAI gets to help design. One is regulation. The other is reputation management.
The strategic logic
To be fair, there's a rational argument for why OpenAI would do this. AI companies know regulation is coming. The question isn't whether, it's what kind. By publishing a detailed policy framework, OpenAI gets to set the agenda. They define the problem space, propose the categories of solutions, and establish themselves as a constructive partner in the conversation. If governments adopt even a fraction of these proposals, they'll be working within a framework that OpenAI designed. This isn't necessarily cynical. Companies that build transformative technology probably should contribute to the policy conversation. They have data about how the technology works, where it's headed, and what kinds of disruption it's likely to cause. Excluding them from the discussion would be its own kind of failure. But there's a difference between contributing to a conversation and controlling it. And there's a difference between proposing policies that constrain your own behavior and proposing policies that ask someone else to clean up after you.
The uncomfortable question
The document's own logic creates a tension it never resolves. OpenAI clearly believes AI will cause significant economic disruption. The entire document is premised on it. They project that superintelligence could reshape "how organizations run, how knowledge is created, and how people find meaning and opportunity." They acknowledge that "workers using AI might well agree that it's increasing their productivity without believing they're seeing the benefits." They warn that without intervention, AI could "widen inequality by compounding advantages for those already positioned to capture the upside." If you believe all of that, and OpenAI clearly does, then why are you racing to deploy it as fast as possible? The document never addresses this. It treats the arrival of superintelligence as inevitable, a force of nature that policy must adapt to rather than something being built by specific companies making specific choices about speed, safety, and deployment. OpenAI is simultaneously the architect of the disruption and the consultant hired to manage the fallout.
What to make of it
The honest answer is that both readings of this document are probably true at the same time. Yes, it's self-serving. OpenAI gets to shape the regulatory conversation, position itself as responsible, and propose frameworks that protect its business model. The timing, right as public anxiety about AI job losses is peaking, is not a coincidence. But it's also substantive. The proposals around tax reform, public wealth funds, and workforce transition aren't trivial. They draw on real economic thinking and address real problems. Someone has to start this conversation, and the alternative, waiting until the disruption arrives and then scrambling to respond, is worse. The danger isn't that OpenAI is having this conversation. It's that we might mistake their version of it for the whole conversation. Policy proposals from the company that profits most from AI adoption are a data point, not a compass. They should be read alongside proposals from labor economists, displaced workers, civil society organizations, and the legislators who tried to pass binding regulation before OpenAI decided voluntary frameworks were the better path. OpenAI wants a safety net. They just want to be the ones holding it.
References
- OpenAI, "Industrial Policy for the Intelligence Age: Ideas to Keep People First" (April 6, 2026). https://openai.com/index/industrial-policy-for-the-intelligence-age/
- TechCrunch, "OpenAI's vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek" (April 6, 2026). https://techcrunch.com/2026/04/06/openais-vision-for-the-ai-economy-public-wealth-funds-robot-taxes-and-a-four-day-work-week/
- Bloomberg, "OpenAI Advocates Electric Grid, Safety Net Spending for New AI Era" (April 6, 2026). https://www.bloomberg.com/news/articles/2026-04-06/openai-advocates-electric-grid-safety-net-spending-for-new-ai-era
- Wall Street Journal, "What to Know About OpenAI's Ideas for a World With Superintelligence" (2026). https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b
- Axios, "OpenAI proposes new AI doom scenario" (April 7, 2026). https://www.axios.com/2026/04/07/openai-economic-political-policy
- Quartz, "OpenAI on robot taxes, public wealth fund, AI jobs in policy" (2026). https://qz.com/openai-robot-taxes-public-wealth-fund-ai-jobs
- Tech Policy Press, "OpenAI's New 'Industrial Policy for the Intelligence Age' is a Policymercial" (2026). https://www.techpolicy.press/openais-new-industrial-policy-for-the-intelligence-age-is-a-policymercial
- The Hill, "OpenAI's Altman releases blueprint for taxing, regulating artificial intelligence" (2026). https://thehill.com/policy/technology/5817906-openai-ai-policy-recommendations/
- CFO Dive, "OpenAI urges tax policy rethink as AI heralds new economic era" (2026). https://www.cfodive.com/news/openai-urges-tax-policy-rethink-as-ai-reshapes-economy/816789/
- Senator Wiener, "Senator Wiener Responds to OpenAI Opposition to SB 1047" (August 21, 2024). https://sd11.senate.ca.gov/news/senator-wiener-responds-openai-opposition-sb-1047
- PMC, "Inventing Conflicts of Interest: A History of Tobacco Industry Tactics" (2012). https://pmc.ncbi.nlm.nih.gov/articles/PMC3490543/
- Bloomberg Technology, "OpenAI Pushes for Policies to Offset AI's Impact" (April 6, 2026). https://www.youtube.com/watch?v=1O5Qo3qi3iM