The GPU landlord era
Everyone wants to rent GPUs now. Shoe companies, cloud providers, startups, sovereign wealth funds. Compute has become the new real estate, and as in real estate, the landlords are going to win more than the tenants. This isn't a metaphor anymore. It's the actual structure of the AI economy.
The new landlord class
The AI boom runs on GPUs. Every model trained, every chatbot query answered, every image generated requires GPU compute. And the companies that own those GPUs sit in a position that looks remarkably like property owners in a housing crisis: demand is through the roof, supply is constrained, and renters have no choice but to pay up.

The GPU-as-a-service market was valued at roughly $5.7 billion in 2025, and projections put it anywhere from $26 billion to $74 billion by the early 2030s, depending on who you ask. The growth rate tells the story more clearly than the exact numbers. SemiAnalysis reported that H100 one-year rental contract pricing shot up nearly 40% between October 2025 and March 2026, from $1.70/hr/GPU to $2.35/hr/GPU. On-demand capacity is sold out across all GPU types. Trying to find GPU compute in early 2026, as they put it, is like trying to book the last flight out.

At the top of the stack sit the actual GPU landlords. NVIDIA designs and manufactures the chips. The hyperscalers (AWS, Microsoft Azure, Google Cloud) build and operate the massive data centers. And increasingly, sovereign wealth funds are entering the picture not as tenants but as infrastructure owners. Everyone else is subletting.
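To put the rental repricing in perspective, here's a rough sketch of what that rate jump means at cluster scale. The rates are the SemiAnalysis figures above; the 10,000-GPU cluster size is an illustrative assumption.

```python
# What the H100 rental repricing means at cluster scale.
# Rates are the SemiAnalysis figures cited above; cluster size is illustrative.

OLD_RATE = 1.70          # $/hr/GPU, October 2025
NEW_RATE = 2.35          # $/hr/GPU, March 2026
HOURS_PER_YEAR = 24 * 365

def annual_rent(num_gpus: int, rate: float) -> float:
    """Annual rental bill in dollars for a cluster at a given hourly rate."""
    return num_gpus * rate * HOURS_PER_YEAR

gpus = 10_000
delta = annual_rent(gpus, NEW_RATE) - annual_rent(gpus, OLD_RATE)
print(f"rate increase: {NEW_RATE / OLD_RATE - 1:.1%}")        # ~38%
print(f"extra annual rent for {gpus:,} GPUs: ${delta / 1e6:.1f}M")
```

For a tenant locked into a 10,000-GPU lease, the repricing alone adds tens of millions of dollars a year, before any capacity growth.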
AWS proved the model, NVIDIA is perfecting it
The playbook isn't new. Amazon Web Services demonstrated two decades ago that owning the infrastructure layer is more durable than building applications on top of it. AWS didn't need to win at any particular software category; it just needed everyone else to need servers. The margin on being the landlord turned out to be enormous.

NVIDIA is running the same play one layer deeper. In fiscal year 2026, NVIDIA's data center business generated $194 billion in revenue, a 68% year-over-year increase, and its networking revenue hit $31.4 billion, up 142%. NVIDIA isn't just selling chips anymore. It's selling the foundation layer of the entire AI economy, and it has supply visibility extending into 2027.

The comparison to real estate isn't just rhetorical. Like prime real estate, GPU infrastructure requires massive upfront capital, benefits from scarcity, and generates recurring rental income. And like real estate, the best time to have bought in was yesterday.
CoreWeave and the rise of the GPU subletter
CoreWeave is the most visible example of what happens when you bet early on GPU infrastructure. Founded in 2017 as a cryptocurrency mining operation, the company pivoted to AI compute when crypto crashed. That pivot put it years ahead of competitors in understanding how to operate large GPU clusters at scale.

CoreWeave IPO'd in March 2025 at $40 per share, raising $1.5 billion. By early 2026, the company was projecting a $2.6 billion revenue run rate, and Meta recently expanded its cloud capacity agreement with CoreWeave to $21 billion through 2032. Even companies that own their own data centers, like Meta, need more infrastructure than they can build alone.

Other GPU-as-a-service providers are following a similar trajectory. Applied Digital, Nebius Group, and IREN Limited all benefit from the same dynamic: demand for GPU compute far outstrips what the hyperscalers can supply on their own. The subletters are thriving because the housing shortage is real.
The Allbirds signal
If you needed proof that GPU rental has become the default business model for companies with nowhere else to go, look no further than Allbirds. In April 2026, the once-$4-billion shoe company sold off its entire brand and assets for $39 million. Rather than delisting, it rebranded as NewBird AI and announced plans to use $50 million in financing to buy GPUs and lease them out. The stock surged 600% in a single afternoon.

The move is absurd on its face. $50 million is a rounding error in a market where tech giants are spending nearly $700 billion on AI infrastructure this year alone; as William Blair analysts noted, it's "a drop in the bucket." But the market reaction tells you something important: investors believe GPU rental is such a sure bet that even a shoe company pivoting into it gets rewarded. Allbirds isn't going to become a real GPU landlord. But the fact that the market briefly pretended it could says everything about where perceived value sits in the AI economy.
Sovereign wealth enters the landlord stack
The most consequential new entrants to the GPU landlord class aren't startups. They're sovereign wealth funds.

Saudi Arabia's Public Investment Fund, managing over $900 billion in assets, launched Humain as a full-stack AI company covering data centers, cloud capabilities, and large language models. The kingdom has earmarked more than $40 billion for AI-related investments, and Google Cloud and PIF are jointly investing $10 billion to build an AI hub in Saudi Arabia. Abu Dhabi's MGX fund launched with a $100 billion mandate focused on AI infrastructure and semiconductors. The Qatar Investment Authority signed a $20 billion deal with Brookfield for AI infrastructure investment. These aren't speculative bets. These are nations converting energy wealth into compute wealth.

The strategic logic is compelling. Sovereign wealth funds can deploy capital at scales and timelines that private investors cannot match. They don't need to fundraise. They don't answer to quarterly earnings calls. And they're sitting on top of the one resource that GPU data centers consume most: energy.
The energy constraint nobody talks about enough
You can buy GPUs, but can you power them? Global AI infrastructure capital expenditure surpassed $400 billion in 2025, with 2026 projections exceeding $600 billion. But the bottleneck is increasingly not chips but electricity.

Data center electricity consumption reached roughly 415 terawatt-hours in 2024, about 1.5% of global electricity consumption, and it has been growing at 12% per year. AI is accelerating that growth dramatically. GPUs account for about 40% of IT power usage in AI data centers, and facility overhead, including cooling, networking, and power conversion losses, multiplies the total draw by roughly 1.4x. A modern AI GPU draws between 700 and 1,200 watts, compared with 150-200 watts for a CPU. The energy math gets brutal at scale.

This is why the Middle Eastern sovereign funds have a structural advantage. They're not just investing in AI infrastructure; they're co-located with energy production. The Gulf states sit between Europe, Africa, and Asia, with subsea cable connections to all three continents. They have the power, the capital, and the geographic position. Some 74% of data center capacity currently under construction is already pre-leased. The landlords who locked in power and permits early are the ones who will profit. Everyone else is competing for what's left.
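The energy math is worth making concrete. Here's a back-of-the-envelope sketch using the figures above; the per-GPU wattage and the 100,000-GPU cluster size are illustrative assumptions within the cited ranges.

```python
# Back-of-the-envelope facility power for a GPU cluster.
# All figures are illustrative, drawn from the ranges cited above.

GPU_WATTS = 1_000     # a modern AI GPU draws roughly 700-1,200 W
GPU_SHARE = 0.40      # GPUs are ~40% of IT power in an AI data center
OVERHEAD = 1.4        # cooling, networking, power conversion losses (~1.4x)

def facility_megawatts(num_gpus: int) -> float:
    """Estimate total facility power draw in megawatts for a GPU count."""
    gpu_watts = num_gpus * GPU_WATTS
    it_watts = gpu_watts / GPU_SHARE   # add CPUs, storage, and the rest of IT
    return it_watts * OVERHEAD / 1e6   # apply facility overhead, W -> MW

# A 100,000-GPU training cluster:
print(f"{facility_megawatts(100_000):.0f} MW")
```

Under these assumptions a single 100,000-GPU cluster draws on the order of 350 MW, roughly the output of a mid-sized power plant, which is why power and permits, not chips, are becoming the binding constraint.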
The Singapore angle
Southeast Asia attracted more than $55 billion in AI infrastructure commitments in 2025. Singapore, with a data center vacancy rate of just 1.4%, the lowest in Asia-Pacific, is at the center of it. The Singapore government allocated $740 million to build national AI capabilities, with strict requirements for local data residency and processing. Bridge Data Centres announced plans to invest up to S$5 billion in Singapore's AI infrastructure. The Infocomm Media Development Authority awarded 80 MW of new capacity to Equinix, GDS Holdings, Microsoft, and others.

But Singapore's position in the GPU landlord stack is complicated. It has the regulatory credibility, the financial infrastructure, and the geographic position. It doesn't have abundant cheap energy or vast land. Malaysia's Johor, just across the causeway, is developing 4.5 times its current operational data center capacity, partly because it has what Singapore lacks: space and power.

U.S. chip export restrictions have also turned Singapore into an interesting intermediary. Chinese firms seeking overseas computing power have increasingly looked to Singapore and Malaysia as regional AI data center hubs, creating a dynamic where geography and geopolitics intersect with infrastructure economics. Singapore's role may end up being more like a financial district than a factory floor: the place where GPU compute is brokered, governed, and regulated, even if the physical machines increasingly sit next door.
The demand squeeze
Here's where the analogy to real estate gets uncomfortable for tenants. AI training costs remain enormous. Frontier models cost hundreds of millions of dollars to train, and those costs are rising as models get larger. But the bigger economic shift is in inference, the ongoing cost of actually running models in production. Inference is projected to account for 65% of AI compute by 2029 and 80-90% of lifetime AI system costs.

The per-unit cost of inference is falling fast. Stanford's 2025 AI Index showed inference costs dropping from $20 to $0.07 per million tokens. But usage is growing even faster. Reasoning models like DeepSeek R1 consume 150 times more compute than traditional inference, and agentic AI systems generate 5 to 30 times more tokens per query than simple chatbots. OpenAI's Sora reportedly burned through $15 million per day in inference costs while generating only $2.1 million in lifetime revenue before shutting down in early 2026. That's not a pricing problem. That's a structural mismatch between what AI costs to run and what users will pay for it.

For GPU renters, this means a margin squeeze from both directions: the cost of renting compute stays high because demand is insatiable, while revenue from AI services faces constant downward pressure from competition and open-source alternatives. The landlords collect rent either way.
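The tension between falling per-token prices and exploding per-query usage can be sketched with the numbers above. The per-query token count for a simple chatbot exchange is an illustrative assumption; the price and the 30x agentic multiplier come from the figures cited in this section.

```python
# Per-query inference cost: falling token prices vs. growing token usage.
# Price and multiplier are the figures cited above; the chatbot token
# count per query is an illustrative assumption.

PRICE_PER_M_TOKENS = 0.07   # $/1M tokens (Stanford AI Index low end)
CHATBOT_TOKENS = 1_000      # assumed tokens for a simple chatbot exchange
AGENTIC_MULTIPLIER = 30     # agentic systems: 5-30x more tokens per query

def query_cost(tokens: int, price_per_m: float = PRICE_PER_M_TOKENS) -> float:
    """Dollar cost of a single query at a given per-million-token price."""
    return tokens / 1e6 * price_per_m

chatbot = query_cost(CHATBOT_TOKENS)
agentic = query_cost(CHATBOT_TOKENS * AGENTIC_MULTIPLIER)
print(f"chatbot: ${chatbot:.6f}/query, agentic: ${agentic:.6f}/query")
```

Each unit price looks negligible, but at hundreds of millions of queries per day the agentic bill is a fixed 30x multiple of the chatbot bill: usage growth swamps the falling per-token price, which is exactly the squeeze the tenants feel.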
The counter-argument: on-device and edge AI
It would be dishonest to ignore the forces pushing in the other direction. On-device inference is getting dramatically better. Quantized models now achieve 95-99% of cloud model accuracy while running on local hardware. Edge AI, processing data on devices near the source rather than in centralized data centers, is gaining traction for latency-sensitive applications like video analytics, predictive maintenance, and autonomous systems. If enough AI workloads move to the edge, demand for centralized GPU rental could plateau or even decline for certain use cases. Apple, Google, and Qualcomm are all investing heavily in making their consumer and enterprise chips capable of running AI models locally.

But here's the thing: edge AI solves a latency and privacy problem, not a scale problem. Training frontier models, running massive inference pipelines, and serving millions of concurrent users still requires centralized GPU infrastructure. The edge chips handle the small stuff; the landlords still own the heavy lifting. For the foreseeable future, the trend is clear: the tenants need the landlords more than the landlords need any individual tenant.
What this means
The GPU landlord era follows a pattern we've seen before. In every gold rush, the people selling picks and shovels do better than most of the miners. In every real estate boom, the developers and landowners capture more long-term value than the tenants. The AI gold rush is no different.

NVIDIA owns the picks. The hyperscalers own the mines. Sovereign wealth funds are buying up the land. CoreWeave and its peers are the property managers, taking a cut for making the landlords' assets accessible. And the rest of the industry, the model builders, the app developers, the startups, are tenants, paying rent that rises with demand.

The question isn't whether this structure is fair. It's whether it's durable. And if the history of infrastructure economics is any guide, the answer is yes. The landlords tend to win.
References
- AI GPU Rental Market Trends (April 2026), Thunder Compute
- The Great GPU Shortage, Rental Capacity, SemiAnalysis
- GPU as a Service (GPUaaS) Market and Competition Analysis, Yahoo Finance / Research and Markets
- Why CoreWeave Is Growing So Fast in 2026, NeuraPulse
- Is CoreWeave a Buy 1 Year After Its IPO?, The Motley Fool
- Saudi Arabia is making a massive bet on becoming a global AI powerhouse, Yahoo Finance / CNN
- The Middle East's AI Spending Is Becoming a Two-Horse Race, WIRED Middle East
- Energy demand from AI, International Energy Agency
- Inference demand drives continued AI buildout, J.P. Morgan
- SG60: Singapore's role in Southeast Asia's AI future, Singapore EDB