🔮 Sunday edition #538: Watts, not weights. The capex clock. Liberalism & human nature. Robots, walking & defense …
- Azeem Azhar, Exponential View <exponentialview@substack.com>
Hi all,

Welcome to our Sunday edition, where we explore the latest developments, ideas, and questions shaping the exponential economy. Enjoy the weekend reading!

Azeem

The burn rate of a bubble

The generative AI investment boom is not immune to exuberance. It’s notoriously hard to get infrastructure spending right, especially during the installation phase of a new technology: over-capacity tangos with under-supply. During this boom, though, there seems to be no end in sight for ever-larger commitments to ever-larger datacenters. But might capex growth outpace realised revenues? Back in February, I highlighted that quandary:
Turning to the question of depreciation this week: Praetorian Capital reckons that covering depreciation on this year’s GPU cohort¹ requires about $40 billion in profits. Depending on profit margins, that could translate into a tripling or quadrupling of revenues. Last year, hyperscalers booked just $45 billion in generative AI revenue.

Forecasts put AI revenue above $1 trillion by 2028. About $400 billion would come from enterprise productivity spending, while $680 billion would come from consumer wallets (across e-commerce spending, advertising, new apps and content). But that is the far future, at least outside the lifespan of the chips being installed today. The math is hard to reconcile: these GPUs are consumables. They aren’t highways or cable ducts that can sweat for decades. They sit on balance sheets, on what I believe are improbable six-year depreciation schedules. Feels a bit bubbly.

What changes the picture is that sober-minded infrastructure capital is starting to play in this build-out. Meta, for example, tapped PIMCO and Blue Owl for a $29 billion financing. These aren’t frivolous decision-makers. So the bet must be that the borrower, Big Tech, is good for the money.

I suspect the Big Tech firms are motivated by the following logic: there is more than a possibility that demand will run hotter than even eager forecasts suggest, and that these massive datacenters will increasingly convert to revenue as more consumers turn to paying apps, as genAI lifts ad revenues through new inventory or better ROI, and as business demand continues to grow. Finally, the industry can see a huge and growing market out there. Under-provisioning capacity means failing to serve customers at the quality they want, while competitors wait eagerly. That’s likely the rationale. We could all, of course, be wrong.
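The link between the $40 billion of profit needed to cover depreciation and a "tripling or quadrupling" of revenue can be sanity-checked with a few lines of arithmetic. This is a minimal sketch, not Praetorian Capital's model: the margin values are my own illustrative assumptions, and the only source figures are the $40 billion profit requirement and last year's $45 billion of genAI revenue.

```python
# Back-of-envelope check of the depreciation arithmetic above.
# Source figures: ~$40B of profit needed to cover this year's GPU
# depreciation; ~$45B of hyperscaler genAI revenue last year.
# The margins below are illustrative assumptions, not from the source.
profit_needed = 40e9       # profit required to cover GPU depreciation
last_year_revenue = 45e9   # hyperscalers' genAI revenue last year

for margin in (0.20, 0.25, 0.33):
    revenue_needed = profit_needed / margin
    multiple = revenue_needed / last_year_revenue
    print(f"margin {margin:.0%}: need ${revenue_needed / 1e9:.0f}B revenue "
          f"(~{multiple:.1f}x last year's genAI revenue)")
```

At a 20–33% margin, the required revenue works out to roughly $120–200 billion, i.e. about 2.7x to 4.4x last year's figure, which is consistent with the "triple or quadruple" framing at the lower end of that margin range.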
The watts doctrine

A growing strand of Chinese AI strategy reframes the ambition as a physics problem: how efficiently can you convert energy into useful intelligence? Ryan Cunningham calls it the energy-compute theory – the idea that watts, not weights, will determine who dominates AI and the economy built around it. Professor Yu at Tsinghua University talks about moving from a “race to AGI” to “ubiquitous edge intelligence”:
Improvements in energy efficiency multiply: better chips, smarter models and new architectures create polynomial advantages. China is using robotics and heterogeneous chip stacks as forcing functions to unlock these compound gains.

To contextualise, let’s recognise that American genAI firms also care about efficiency. Computation uses energy; energy costs money. AI companies have every incentive to reduce their costs: viz., OpenAI, whose GPT-5 has a router that steers easy tasks to cheaper models. Google hasn’t stood still either. In the past year, it reduced the energy consumption of its LLMs by a factor of 33. The average Gemini prompt now consumes 0.24 Wh. To put this in context, watching an hour of Netflix on a big TV uses as much energy as roughly 400 typical Gemini queries. And the startup world is full of firms going all-in to cut the energy cost of intelligence, including some of the companies I’ve invested in: Vaire Computing, Fractile and Trainloop. All of these will deliver increasingly heavy, high-quality AI workloads with greater efficiency. The intelligence/energy exchange rate is far from fixed.

Liberalism and human nature

Modern strands of liberal thought often treat autonomy as the highest good: the promise of maximum choice, unbounded freedom and the ability to shape ourselves however we wish. Francis Fukuyama argues in a new essay that we’ve misunderstood liberalism’s core strength. It was never about boundless freedom but about its recognition of limits – that our dignity and equality arise from a shared, enduring human nature.
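As an aside on the watts-doctrine numbers above: the Netflix comparison can be sanity-checked with simple arithmetic. The 0.24 Wh per Gemini prompt comes from the text; the ~100 W draw for a big TV is my own assumption, which the newsletter does not state.

```python
# Sanity check of the "hour of Netflix ≈ 400 Gemini queries" claim.
tv_power_w = 100            # assumed power draw of a big TV (my assumption)
hours = 1.0
tv_energy_wh = tv_power_w * hours   # ~100 Wh for an hour of viewing
wh_per_prompt = 0.24                # average Gemini prompt (from the text)

queries = tv_energy_wh / wh_per_prompt
print(f"one hour of TV ≈ {queries:.0f} Gemini queries")
```

Under that assumed wattage, the ratio comes out to roughly 417 queries, close to the ~400 quoted.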
This perspective becomes urgent across today’s debates, from biotechnology to AI. Consider genetic enhancement: altering the traits of future generations involves irreversible choices imposed on those who cannot consent. Parents may exercise exactly the kind of arbitrary power over others that liberalism was meant to restrain. As Jürgen Habermas warned in The Future of Human Nature, we risk creating new forms of domination disguised as liberation.

Elsewhere

In AI, tech & science:
And very fun to watch:
Markets and strategy:
In society & culture:
Thanks for reading!

Azeem

1 Google, Amazon, Microsoft and Meta will spend $400 billion on datacenters in 2026 alone.