Nobody trusts AI except China
Stanford's 2026 AI Index Report dropped last week, and one number stopped me cold: 84% of Chinese citizens trust their government to regulate AI. In the United States, that figure is 31%, the lowest of any country surveyed. We spend a lot of time talking about the AI race in terms of benchmarks, funding rounds, and model parameters. But the most consequential gap between the US and China might not be technical at all. It might be trust.
The trust gap is real, and it's widening
The data comes from a global Ipsos survey included in the Stanford HAI report. Across 32 countries, respondents were asked whether they trust their own government to regulate AI effectively. Countries in Asia and South America consistently scored highest. The US came in dead last at 31%.

This is not just a quirk of survey methodology. It reflects something structural. In China, the government has positioned itself as both the promoter and the regulator of AI. Citizens see a coordinated national effort. In the US, AI governance is fragmented across agencies, stalled in Congress, and tangled in partisan politics. The result is a public that sees AI advancing rapidly with no one credibly in charge of steering it.

Globally, 59% of people now say AI products offer more benefits than drawbacks, up from 52% the year before. But optimism and anxiety are rising in tandem. The share of people who say AI makes them nervous also ticked up to 52%. Americans are among the most likely to expect AI to eliminate jobs rather than create them, and only 33% expect AI to make their jobs better, compared to a global average of 40%.
Follow the money, then follow the people
On paper, the US is winning the investment race by a landslide. US private AI investment hit $285.9 billion in 2025, more than 23 times China's $12.4 billion. The US produced 1,953 newly funded AI companies, more than ten times as many as the next closest country.

But those private investment numbers don't tell the whole story. China's government guidance funds deployed an estimated $184 billion into AI firms between 2000 and 2023. When you add state capital to the picture, the gap narrows dramatically. China is playing a different game, one where public investment and industrial policy do the heavy lifting that venture capital does in the US.

And then there's talent. The number of AI researchers and developers moving to the US has dropped 89% since 2017, with an 80% decline in the last year alone. That's not a trend line. That's a collapse. Some of this is driven by active recruitment from countries like China, Canada, and the UK, which are offering competitive salaries, generous research funding, and AI-focused visas. Some of it is driven by changes in US immigration policy that have deterred international students and pushed existing researchers to leave. Talent is voting with its feet, and it's walking away from the US.
The scoreboard is more complicated than it looks
The US still leads in producing notable AI models, 50 in 2025 compared to China's 30. American institutions remain at the frontier of capability. But China leads in AI publications, citations, and patent grants. On major benchmarks like MMLU and HumanEval, the quality gap has gone from double digits to near parity. China's open-source community has been a major driver of this convergence. Models like DeepSeek and Qwen have rapidly closed the performance gap, giving Chinese developers access to frontier-class capabilities without depending on proprietary US systems.

Meanwhile, the benchmarks themselves are breaking down. Humanity's Last Exam, a deliberately brutal evaluation designed by over 1,000 subject-matter experts to be the "last closed-ended academic test" for AI, launched in early 2025 with frontier models scoring around 8.8%. By April 2026, the top model hit 41.6%. The test was supposed to last years. It may not last through the end of this one.

Capability is not the bottleneck. Societal readiness is.
Trust as infrastructure
Here's the argument I keep coming back to: trust is infrastructure. Not the soft, feel-good kind. The hard, load-bearing kind, like roads or power grids or legal systems. It's the thing that determines whether a society can actually deploy the technology it builds.

China's 84% trust figure doesn't mean Chinese citizens are naive about AI risks. It means they operate in a system where the government has credibly committed to both promoting and governing the technology. Whether you agree with how that governance works is a separate question. The point is that high trust creates a permissive environment for adoption. Companies can deploy AI products with less friction. Citizens are more willing to use AI-powered services. The feedback loops between development and deployment spin faster.

The US has the opposite problem. Low trust creates drag at every level. Companies face regulatory uncertainty. Consumers are skeptical. Policymakers are paralyzed between competing fears of over-regulation and under-regulation. The result is that even with a massive lead in private investment and model capability, the US struggles to translate technological advantage into broad societal adoption.

This is the incumbency trap. The US has the technology, the capital, and the talent (for now). But incumbency only matters if you can actually deploy what you've built. And deployment requires trust.
The small-country advantage
There's a third path worth watching, and it doesn't belong to either superpower. Singapore, a city-state of six million people, has quietly become one of the most interesting case studies in AI governance. It combines high public trust in government with a pragmatic, framework-driven approach to AI regulation. Its National AI Strategy 2.0, launched in late 2023, explicitly aims to "harness AI for the public good." The government has invested in AI governance testing frameworks like AI Verify, launched support programs for companies adopting AI, and in its 2026 Budget committed to free premium AI tool access for citizens completing AI training courses.

Small countries with high institutional trust can move faster than superpowers on adoption. They don't have the fragmented regulatory landscape of the US or the political complexity of the EU. They can experiment, iterate, and course-correct at a pace that larger nations simply can't match. The AI race isn't just about who builds the best model. It's about who builds the best environment for that model to actually be used.
What this means going forward
The Stanford report paints a picture of a technology advancing faster than the institutions meant to govern it. AI experts and the public are living in different realities, with 73% of experts seeing positive job impacts versus just 23% of the public. That perception gap is itself a trust problem.

The countries that will lead in AI over the next decade won't necessarily be the ones with the most compute or the biggest funding rounds. They'll be the ones that figure out the trust equation: how to give citizens enough confidence in governance that adoption can accelerate, while maintaining enough oversight that the confidence is warranted. Right now, China has trust but faces questions about transparency and individual rights. The US has capability but faces a crisis of public confidence. And smaller nations like Singapore are demonstrating that you don't need to be a superpower to move fast; you just need a population that believes the people in charge know what they're doing.

The real AI race isn't being run in data centers. It's being run in the space between what technology can do and what society will allow.