The four-day workweek is an AI bribe
In early April 2026, OpenAI released a 13-page policy document titled "Industrial Policy for the Intelligence Age." Among its proposals: taxes on AI profits, a public wealth fund, and a four-day workweek with no loss in pay. On the surface, it reads like a progressive wish list. Subsidized childcare. Portable benefits. Robot taxes. The kind of thing you'd expect from a labor union, not an $852 billion AI company. But when the company most likely to displace your job is also the one drafting the consolation package, it's worth reading the fine print.
The proposal, briefly
OpenAI's blueprint centers on three goals: distributing AI-driven prosperity more broadly, building safeguards against systemic risks, and ensuring widespread access to AI so economic power doesn't concentrate too heavily.

The headline proposals include shifting the tax base from labor income and payroll taxes toward corporate income, capital gains, and "taxes related to automated labor," essentially a robot tax. There's a public wealth fund that would give every citizen a stake in AI-driven economic growth, even those not invested in financial markets. And there's the four-day workweek: OpenAI suggests governments "incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant."

The company also proposes expanding what it calls the "care and connection economy" (roles in childcare, eldercare, education, and healthcare) as new pathways for displaced workers, plus portable benefit accounts that follow workers across jobs, boosted retirement contributions, and subsidized dependent care.

It sounds generous. Almost suspiciously so.
The framing trick
There's a move in corporate strategy that's as old as the Industrial Revolution: position yourself as the solution to the problem you're creating. Andrew Carnegie made his fortune in steel while his workers labored in dangerous conditions for poverty wages. Then he spent $60 million building 1,689 public libraries across America. Genuine generosity? Probably, in part. But also a masterclass in narrative control. The man who extracted the wealth got to decide how it was redistributed, and he got a legacy of benevolence in the bargain.

OpenAI's policy paper follows a similar logic. The company acknowledges, right there in the document, that "AI-driven growth could hollow out the tax base that funds Social Security, Medicaid, SNAP, and housing assistance as corporate profits expand and reliance on labor income shrinks." It then proposes the fix. The arsonist is handing you a fire extinguisher.

This isn't unique to OpenAI. It's the standard Big Tech playbook: create the disruption, fund the recovery, control the narrative. Facebook connected the world and then funded digital literacy programs when misinformation became a crisis. Google democratized information and then launched initiatives to combat the attention economy it helped build. The pattern is consistent: the company that causes the problem gets to define what "addressing" it looks like.
Who defines AI profits?
Consider the robot tax proposal. OpenAI suggests "increasing reliance on capital-based revenues, such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns." Bill Gates floated a similar idea back in 2017: tax the robot that replaces a human worker at roughly the rate the worker's income was taxed.

But here's the problem: who defines "AI-driven returns"? The boundaries between AI-augmented revenue and regular revenue are already blurry, and they're going to get blurrier. When a company uses AI to write marketing copy that drives sales, is that an AI-driven return? When an AI system optimizes a supply chain and saves millions, where do you draw the line?

OpenAI, the company proposing this tax framework, is also the company that would be subject to it. There's an inherent conflict of interest in letting the entity that stands to be taxed help design the tax. It's like asking a restaurant to write its own health code.

The company also stops short of specifying a corporate tax rate, which currently sits at 21% after Trump's 2017 cut. The vagueness is strategic. The proposal gestures toward higher taxes on capital without committing to anything that would meaningfully constrain OpenAI's own growth.
The four-day workweek assumes you still have a job
The four-day workweek is the most seductive proposal in the document, and the most misleading. The idea itself has real merit. Trials in the UK, Iceland, and elsewhere have shown that reducing work hours can maintain or even improve productivity while boosting worker wellbeing. A 2022 pilot involving 61 UK companies found that 92% continued with the four-day week after the trial ended.

But OpenAI's framing of the four-day workweek as an "efficiency dividend" obscures a crucial assumption: it only works for people who still have jobs. If AI eliminates your role entirely, a shorter workweek at your former employer is meaningless. The proposal is designed for the workers who survive the transition, not the ones who don't.

OpenAI's own research paints a sobering picture. The company's AI jobs transition framework found that 18% of U.S. jobs face relatively higher short-term automation risk, and 24% may see declining employment as task composition shifts. Goldman Sachs estimates AI could replace the equivalent of 300 million full-time jobs globally. A four-day workweek doesn't help when you're working zero days.

The company does propose the "care and connection economy" as a safety net for displaced workers, suggesting roles in childcare, eldercare, and education. But these sectors are chronically underfunded and undervalued. Telling displaced software engineers and financial analysts to become eldercare workers isn't a transition plan; it's a euphemism for downward mobility.
The portable benefits problem
OpenAI proposes portable benefit accounts that follow workers across jobs. On paper, this is a good idea. In practice, it reveals how the proposal sidesteps the real issue.

The document frames benefits like healthcare, retirement matching, and childcare subsidies as corporate responsibilities rather than government ones. As TechCrunch pointed out, this leaves out the people AI is most likely to displace entirely. If automation eliminates your job, your employer-subsidized healthcare and retirement match go with it. Portable benefits are only portable if you have somewhere to port them to.

The portable accounts gesture at this gap, but they still likely depend on employer or platform contributions. They stop short of the government-backed universal coverage that would actually protect people displaced by AI. It's a half-measure dressed up as a revolution.
The nonprofit question
All of this lands differently when you consider OpenAI's own trajectory. The company was founded as a nonprofit with the explicit mission of ensuring AI "benefits all of humanity." It converted to a for-profit structure, a move that drew legal challenges from California nonprofits, philanthropies, and labor groups concerned about violations of charitable trust law.

A company that abandoned its nonprofit structure to pursue shareholder returns is now publishing policy proposals about how to distribute AI's benefits to everyone. The contradiction isn't subtle. OpenAI's fiduciary duty is now to its investors, not to humanity. The policy paper reads less like a roadmap for equitable AI and more like a PR document designed to preempt regulation while the company consolidates its market position.

The timing is also telling. The document dropped alongside reports that OpenAI president Greg Brockman donated millions to political campaigns, and that tech billionaires have funneled hundreds of millions into super PACs supporting light-touch AI policies. OpenAI is simultaneously proposing economic redistribution and funding the political infrastructure that would prevent it.
What a credible proposal would look like
If OpenAI were serious about addressing AI-driven displacement, the proposal would look different. It would include specific tax rates, not vague gestures toward "capital-based revenues." It would advocate for government-backed universal benefits, not employer-dependent portable accounts. It would propose independent oversight of the public wealth fund, not leave the details to the companies that would fund it. And it would address the fundamental power asymmetry: the people most affected by AI displacement have the least say in how the transition is managed.

Anthropic, OpenAI's chief rival, took a different approach six months earlier. Its Economic Futures Program committed $10 million to independent research on AI's economic impacts, explicitly framing its proposals as "starting points for deeper research, policy development, and public debate" rather than a finished blueprint. The tone was more humble, the scope more honest about uncertainty. That's not to say Anthropic's approach is perfect. But there's a meaningful difference between funding independent research and publishing your own policy manifesto.
The real question
The four-day workweek is a good idea. Public wealth funds have precedent: Alaska's Permanent Fund has distributed oil revenue to citizens since 1982. Robot taxes deserve serious consideration. None of these ideas are bad in isolation.

The problem is who's proposing them, and why. When the company building the technology that threatens to displace millions of workers is also the one drafting the policy response, the power dynamic is fundamentally skewed. OpenAI gets to set the terms of the debate, define the scope of acceptable solutions, and position itself as a responsible actor, all while continuing to build the very systems that create the problem.

The question isn't whether a four-day workweek would be nice. It would. The question is whether we're comfortable letting the companies that stand to profit most from AI disruption be the ones who design the safety net. History suggests we shouldn't be. Carnegie's libraries were wonderful. But they didn't undo the conditions in his steel mills. The generosity was real, but it was also a distraction from the structural problem: the people generating the wealth had no power over how it was used.

OpenAI's proposal deserves to be read, debated, and taken seriously. Some of the ideas will likely become policy. But it should be read for what it is: a document written by a company with an $852 billion valuation, a fiduciary duty to its shareholders, and a vested interest in shaping the rules of the game it's winning. Some action is better than none. But a bribe is still a bribe, even when it comes with a day off.