xAI made porn and nobody blinked
On April 23, 2026, Reuters reported that SpaceX's S-1 filing, prepared ahead of what could be the largest IPO in history, includes a risk factor warning that investigations into sexually abusive AI imagery produced by xAI's Grok chatbot "may hurt market access." Read that again. The company isn't warning investors that it created and distributed harmful content. It's warning them that getting caught might cost money. This is the AI safety story that actually matters. Not theoretical debates about sentience or alignment. Not hypothetical scenarios about superintelligence. The concrete, documented, already-happened reality of a company shipping a product that generated millions of sexualized images of women and children, and only treating it as a problem when it threatened a $1.75 trillion IPO.
What actually happened
In late December 2025, xAI rolled out new image generation and editing features for Grok on the X platform. Users quickly discovered they could upload photos of real people and prompt Grok to "put her in a bikini" or otherwise digitally undress them. The chatbot would publicly reply with the generated image. The scale was staggering. The Center for Countering Digital Hate estimated that over an 11-day period, Grok generated approximately 3 million sexualized images, including 23,000 that appeared to depict children, at a rate of roughly 190 per minute. A separate analysis by The New York Times found that in just nine days, Grok posted more than 4.4 million images, of which at least 1.8 million were sexualized depictions of women. Victims included not only public figures like Taylor Swift, Selena Gomez, and Sweden's Deputy Prime Minister Ebba Busch, but also ordinary people, teenagers, and children. On January 2, 2026, Grok itself acknowledged "lapses in safeguards" that had resulted in "images depicting minors in minimal clothing." The company's response was not to disable the feature. On January 9, xAI restricted image generation to paid X subscribers. The standalone Grok app continued to let anyone generate images without paying.
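That rate is straightforward to verify: 11 days is roughly 15,800 minutes, and 3 million images spread over that window works out to about 190 per minute, consistent with the figure CCDH reported. This was not a trickle of abuse slipping past moderation. It was industrial-scale output.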
The pattern
What followed was a now-familiar cycle: harm happens, the company ignores it until external pressure forces a response, regulators investigate, and the company worries about revenue rather than the people it hurt. By January 12, the UK's Ofcom had launched a formal investigation. On January 14, California Attorney General Rob Bonta opened his own investigation and, two days later, sent xAI a cease-and-desist letter demanding it "immediately stop the creation and distribution" of nonconsensual intimate images and child sexual abuse material. On January 23, a bipartisan coalition of 35 state attorneys general sent a joint letter to xAI urging action. Investigations spread globally, with regulators in the EU, France, India, Malaysia, and Brazil all stepping in. Lawsuits piled up. Teenagers sued xAI in federal court in California, alleging the company facilitated child pornography. Baltimore became the largest city to sue, arguing xAI violated consumer protection statutes. In March, a Dutch court issued an injunction banning Grok from generating nonconsensual images in the Netherlands, imposing fines of €100,000 per day for non-compliance. At every stage, xAI's defense was the same: it couldn't stop all misuse of its tools, and it shouldn't be penalized for the actions of malicious users. The Dutch court wasn't persuaded. Neither were the attorneys general.
The accountability gap
Here's where it gets structurally interesting. In February 2026, SpaceX acquired xAI in a deal valuing the combined entity at roughly $1.25 trillion. But the transaction was carefully structured as a triangular merger, keeping xAI as a wholly owned subsidiary with its own legal and financial identity. Corporate lawyers noted that this structure was designed to insulate SpaceX from xAI's liabilities. As one attorney told Reuters: "In an acquisition where the target ends up as a subsidiary of the buyer, no prior liabilities of the target necessarily become liabilities of the parent." The merger also avoided triggering change-of-control clauses that would have required immediate repayment of xAI's billions in debt. So when SpaceX's S-1 warns investors about xAI investigations, it's fulfilling a legal obligation, disclosing risk factors as securities law requires, while having already structured the deal to keep those risks at arm's length. xAI created the harm. SpaceX absorbed the upside. The corporate structure was designed from the start to separate the two. This isn't a bug. It's a playbook. And it raises a serious question: if the entity that builds and distributes harmful AI can be shielded from consequences by folding it into a larger corporate structure, what incentive does any company have to get safety right before shipping?
Why "move fast and break things" is uniquely dangerous here
The tech industry's tolerance for shipping first and fixing later has always carried risks. But generative AI changes the calculus in a fundamental way. When a social media algorithm amplifies harmful content, the harm is indirect, mediated through feeds and recommendations. When an AI image generator creates sexualized images of a real child and posts them publicly on a social media platform, the harm is direct, immediate, and irreversible. You cannot un-generate an image that has already been seen, shared, and used to harass someone. Grok's January collapse wasn't an edge case. It was the predictable result of launching an image generation tool with inadequate safeguards on a platform with hundreds of millions of users. As the National Center on Sexual Exploitation's chief legal officer put it: "This was an entirely predictable and avoidable atrocity." The seven-day gap between the first reports of harm and xAI's initial restrictions tells you everything about where safety sat in the priority stack. The fact that restrictions were applied only to the X platform while the standalone Grok app remained unrestricted tells you even more.
What this actually demands
The xAI case validates what I've been arguing for a while: the real AI risk isn't sentience or alignment in the abstract. It's companies shipping capable systems without adequate controls, and regulatory frameworks that can't keep up. Three things need to happen.
First, hard release gates and human checkpoints before deployment. Image generation features that can produce photorealistic depictions of real people should require robust pre-deployment testing, not post-launch "safeguard improvements" driven by public outrage. The industry standard of red-teaming before release exists for a reason, and the basic gating logic is not exotic; see the sketch at the end of this section.
Second, corporate structure cannot be a liability shield for AI harms. When a subsidiary causes harm that was foreseeable and preventable, the parent company that profits from the subsidiary's products should share accountability. The triangular merger playbook that SpaceX used is legally sound, but the law hasn't caught up to a world where AI harms can scale to millions of victims in days.
Third, open-weight model distribution needs guardrails. xAI's case involved a hosted product on a specific platform, which at least gives regulators a target. As more capable image generation models are released as open weights, the ability to enforce any restrictions after the fact diminishes rapidly. The conversation about responsible release practices needs to move from voluntary commitments to enforceable standards.
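On that first point, consider how little machinery a fail-closed gate actually requires. This is a minimal sketch, not xAI's architecture: the request type, the keyword screen, and the gate function below are all hypothetical stand-ins for what would be a trained classifier and a real review queue in production. What it illustrates is the default, failing closed rather than open.

```python
# Hypothetical pre-generation safety gate. None of these names come from
# xAI's codebase; they are illustrative stand-ins.
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    BLOCK = auto()
    HUMAN_REVIEW = auto()  # escalate to a human checkpoint before generating


@dataclass
class GenerationRequest:
    prompt: str
    edits_uploaded_photo: bool  # request modifies a photo of a real person


# A keyword screen stands in for a trained prompt classifier; the gating
# logic, not the detection method, is the point of this sketch.
SEXUALIZING_TERMS = ("undress", "bikini", "nude", "remove her clothes")


def gate(request: GenerationRequest) -> Verdict:
    prompt = request.prompt.lower()
    flagged = any(term in prompt for term in SEXUALIZING_TERMS)
    if flagged and request.edits_uploaded_photo:
        # Sexualizing edits of real people's photos are refused outright.
        return Verdict.BLOCK
    if flagged or request.edits_uploaded_photo:
        # Anything ambiguous fails closed: a human looks before anything ships.
        return Verdict.HUMAN_REVIEW
    return Verdict.ALLOW


if __name__ == "__main__":
    request = GenerationRequest(prompt="put her in a bikini",
                                edits_uploaded_photo=True)
    print(gate(request))  # Verdict.BLOCK
```

Judging by the documented outcome, Grok's pipeline defaulted the other way: edits of real people's photos flowed through unless something downstream objected. A fail-closed default costs some false positives. The alternative, as January demonstrated, costs millions of victims.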
The bottom line
SpaceX is preparing for a $1.75 trillion IPO. The S-1 warning about xAI investigations is, in legal terms, boilerplate risk disclosure. In human terms, it's a company acknowledging that its subsidiary created and distributed sexually abusive images of women and children, and framing the consequence it cares about as potential loss of "market access." Nobody blinked. The IPO is still on track. The stock will probably do fine. That's the real story. Not that a company shipped harmful AI, but that the market has already priced it in as an acceptable cost of doing business. Until that changes, until the financial consequences of shipping harmful AI outweigh the financial rewards of moving fast, the pattern will repeat. The question isn't whether the next Grok-scale incident will happen. It's whether anyone will care enough to change the incentive structure before it does.
References
- Exclusive: SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access, Reuters, April 23, 2026
- Grok floods X with sexualized images of women and children, Center for Countering Digital Hate, January 22, 2026
- Elon Musk's Grok AI floods X with sexualized photos of women and minors, Reuters, January 2, 2026
- X restricts Grok's image generation to paying subscribers only after drawing the world's ire, TechCrunch, January 9, 2026
- Attorney General Bonta Launches Investigation into xAI, Grok Over Undressed, Sexual AI Images of Women and Children, California Attorney General, January 14, 2026
- Multistate letter to xAI from 35 state attorneys general, January 23, 2026
- Teens sue Musk's xAI over Grok's pornographic images of them, BBC, March 16, 2026
- Baltimore sues Elon Musk's xAI over Grok sexual 'deepfakes', Reuters, March 24, 2026
- Dutch court rules against Grok over AI-generated 'undressing' images, Reuters, March 26, 2026
- SpaceX acquires xAI in record-setting deal, Reuters, February 2, 2026
- Exclusive: The sale of xAI comes with tax, financial and legal benefits for xAI and SpaceX investors, Reuters, February 6, 2026
- SpaceX confidentially files for IPO, setting stage for record offering, CNBC, April 1, 2026
- The Human Cost of Unregulated AI Tools, Human Rights Watch, February 16, 2026