AI infrastructure spending has now exceeded what was spent on telecom and internet infrastructure during the dot-com boom, and it is still growing. This massive AI buildout is currently propping up the US economy: AI-related capital expenditures contributed more to US economic growth in the past two quarters than all of consumer spending.

In effect, the economy is resting on the backs of the big tech AI hyperscalers (Amazon, Google, Meta, Microsoft, and Oracle), and credit agencies are closely watching how they finance this buildout. Traditionally, these companies funded infrastructure from their own balance sheets, but they are now turning to outside sources of capital. Given the enormous demand for AI, tech companies and their bankers have been developing increasingly complicated financing strategies, including mortgage-backed and asset-backed securities.

Experts are concerned about a “self-reinforcing leverage loop” underpinning much of the AI infrastructure buildout. VC-backed startups are pledging their GPU stockpiles as collateral to finance data center construction. This is the strategy used by CoreWeave, Crusoe, Fluidstack, and Lambda. (CoreWeave went public earlier this year, and Lambda is reportedly seeking to do so too.) Further closing the “loop,” CoreWeave is starting its own VC fund to invest in AI startups using a “compute-for-equity” model.

The complexity of these financial arrangements makes it difficult to estimate the potential fallout, but systemic risk is undoubtedly building. Private equity firms are issuing GPU-backed loans through private credit vehicles, and questions are mounting about how those firms and other investors plan to exit. Further complicating this byzantine equation are outstanding questions about the longevity of AI infrastructure. Some argue that because data centers need considerable upkeep and chips continue to evolve, the facilities will depreciate faster than they can generate revenue.
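The depreciation argument can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only: all figures (capex, useful life, revenue, upkeep) are hypothetical assumptions, not reported numbers for any company.

```python
# Illustrative only: every figure here is a hypothetical assumption,
# not a reported number for any real data center.
def payback_gap(capex, useful_life_years, annual_revenue, annual_upkeep):
    """Compare straight-line depreciation against the years of net
    revenue needed to recover the initial capital expenditure."""
    annual_depreciation = capex / useful_life_years  # straight-line write-down
    net_annual = annual_revenue - annual_upkeep      # revenue after upkeep
    payback_years = capex / net_annual               # years to recover capex
    return annual_depreciation, payback_years

# Suppose a $10B data center whose GPUs stay competitive for ~5 years,
# earning $1.5B/year against $0.3B/year in upkeep:
dep, payback = payback_gap(10e9, 5, 1.5e9, 0.3e9)
print(f"Annual depreciation: ${dep / 1e9:.1f}B")  # $2.0B written down per year
print(f"Payback period: {payback:.1f} years")     # ~8.3 years, beyond the 5-year life
```

Under these assumed numbers, the asset is fully depreciated years before it has paid for itself, which is the shape of the concern skeptics are raising.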

There’s also reason to be concerned about the demand side of data center economics. Per Sequoia Capital, there is currently a $600 billion gap between what has been invested in generative AI and the revenue it has generated. Companies developing generative AI models are currently running up “cloud debt”: they offer their products to end-users at prices that do not cover their computing costs, which means they must keep raising funds from investors to stay afloat. This is troubling because, if VC investment persists at its current pace, the industry may run out of money in six quarters. So far, the only reliable liquidity mechanism privately held AI companies have is selling their talent to Big Tech through acqui-hires. All of this raises questions about how model providers will consume all of the computing power currently being brought to market.
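The “cloud debt” dynamic reduces to simple burn-rate arithmetic. The sketch below uses hypothetical figures (cash raised, quarterly revenue, quarterly compute cost) chosen only to illustrate how underpricing compute translates into a finite runway; none are reported numbers for any company.

```python
# Hypothetical figures for illustration; none are reported numbers.
def runway_quarters(cash_on_hand, quarterly_revenue, quarterly_compute_cost):
    """Quarters until cash runs out when revenue doesn't cover compute."""
    quarterly_burn = quarterly_compute_cost - quarterly_revenue
    if quarterly_burn <= 0:
        return float("inf")  # revenue covers compute: no burn, no deadline
    return cash_on_hand / quarterly_burn

# A model provider with $6B raised, $1B/quarter in revenue,
# and $2B/quarter in compute costs:
print(runway_quarters(6e9, 1e9, 2e9))  # 6.0 quarters of runway
```

Each new funding round extends `cash_on_hand`, but unless pricing eventually covers compute, the clock only resets rather than stops.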

Is this Silicon Valley’s subprime mortgage crisis moment? 

VCs have an enormous amount of money invested in currently unprofitable startups whose valuations are so high that they are difficult to sell. Case in point: OpenAI is currently floating a $500 billion valuation, and it can’t even go public because of its contested nonprofit structure. The stock prices of the hyperscalers have been high, but that is because investors have been pricing them as if their asset-heavy AI businesses will be as profitable as their asset-light ones, when in reality much of Big Tech’s latest profits come from their established products.

The concentration of data center leases in a handful of large, creditworthy companies raises concerns about what would happen if any one of them reduced spending or took a credit hit. In July, Oracle received warnings from Moody’s and S&P that it needed to reduce its debt ratios or face a downgrade. Oracle has committed billions of dollars to develop “unprecedentedly large” data centers, and it already has deals with xAI, NVIDIA, and OpenAI to use them. The company’s stock price soared after the announcement, but questions remain about how it will fund this development: Oracle doesn’t currently have the chips to fulfill the contract or the cash to buy them. (The company recently recorded a negative annual cash flow for the first time since 1990.) It’s debatable whether OpenAI, Oracle’s largest customer, will even be able to fulfill its end of the $300 billion bargain; by the company’s own estimations, it won’t turn a profit until 2030.

Investors are increasingly looking to the performance of data centers as an early indicator of where the AI market is headed. And while the potential shock to the economy from this seeming house of cards is cause enough for concern, there is additional collateral damage. Local governments have been making huge investments in energy infrastructure to support the surge of data centers, and residents have been subjected to distorted power readings, strains on their water supply, and increased pollution. All of this suggests that if an AI crash does materialize, local communities will be left with the financial, environmental, and social fallout.

Questions to consider

  • How are companies building data centers and other AI infrastructure estimating demand and profit projections? How are they estimating the depreciation of the infrastructure components?

  • How are companies building data centers planning to mitigate costs being incurred by local governments and communities? Are they assessing the risk of litigation?
