AI boom: going around in circles


Christiaan Bothma

Investment Analyst

The AI boom is driving one of the largest investment cycles in corporate history, with tech giants pouring trillions into new data centres. What’s new is the circularity of these flows – suppliers are funding customers, who then reinvest in their suppliers. This interdependence boosts growth but also raises risks if demand cools. While we do feel the risk of a data centre overbuild has risen, near-term indicators of AI adoption and early productivity gains still support a positive outlook, and we remain selectively invested.

The rise of AI has sparked a new kind of industrial revolution – powered not by steam or oil, but by semiconductors, electricity and vast amounts of capital. Beneath the headlines about chatbots and AI agents lies a more concrete story: the rapid expansion of data centres, the physical infrastructure that underpins AI’s digital promise.

The great data centre expansion

Hyperscalers – companies like Microsoft, Amazon, Google, Meta and Oracle – are spending at a pace last seen during the late-1990s dot-com boom. Morgan Stanley estimates that global data centre capital expenditure from 2025 to 2028 could exceed US$2.9 trillion – an amount roughly equivalent to France’s gross domestic product.

Only about half of this is expected to be funded through the hyperscalers’ own cash flow. The remaining US$1.5 trillion gap is likely to be filled by credit markets, private equity and vendor financing. It marks one of the largest coordinated investment cycles in corporate history – and it’s still gathering momentum.
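The funding arithmetic above is simple enough to check on the back of an envelope. The sketch below uses the figures cited in this section (the Morgan Stanley capex estimate and the rough 50% self-funded share); both inputs are estimates, not reported financials:

```python
# Back-of-envelope funding arithmetic for the 2025-2028 data centre buildout.
# Inputs are the article's cited estimates, expressed in US$ billions.
total_capex_bn = 2_900                  # ~US$2.9 trillion of estimated global capex
self_funded_bn = total_capex_bn // 2    # roughly half funded from hyperscalers' own cash flow
external_gap_bn = total_capex_bn - self_funded_bn

print(f"External funding gap: ~US${external_gap_bn / 1000:.2f} trillion")
# → External funding gap: ~US$1.45 trillion
```

That ~US$1.45 trillion gap – rounded to US$1.5 trillion in the text – is the portion that would fall to credit markets, private equity and vendor financing.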

Circularity: when suppliers become financiers

What’s new – and potentially risky – is the circular funding loop emerging between AI’s major players.

Take Nvidia: it’s selling chips to firms like OpenAI, Oracle and CoreWeave, while also investing billions in their infrastructure projects – effectively recycling its own profits to create future demand for its chips. Nvidia has even agreed to buy back up to US$6 billion in unused capacity from CoreWeave – a kind of insurance policy that blurs the line between genuine demand and vendor-supported growth.

Then there’s the more unconventional AMD-OpenAI deal. Announced in October 2025, OpenAI agreed to purchase 6 gigawatts of AMD graphics cards – roughly two million units – over several years. In return, AMD issued OpenAI warrants for up to 10% of its equity, valued at around US$34 billion at the time. In effect, AMD is paying OpenAI in stock to choose its chips over Nvidia’s.
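The deal’s headline numbers imply some rough magnitudes worth noting. The sketch below divides the article’s cited figures directly; the per-unit power figure is an all-in approximation (it ignores how the 6 gigawatts is measured), and the implied equity value ignores dilution and warrant strike terms:

```python
# Rough magnitudes implied by the AMD-OpenAI deal figures cited above.
# All inputs come from the article; outputs are sanity checks, not deal terms.
commitment_gw = 6            # gigawatts of AMD graphics cards committed
units = 2_000_000            # "roughly two million units" per the article
warrant_value_usd = 34e9     # warrants valued around US$34 billion at the time
warrant_share = 0.10         # warrants for up to 10% of AMD's equity

kw_per_unit = commitment_gw * 1e6 / units          # 6 GW = 6,000,000 kW
implied_amd_equity = warrant_value_usd / warrant_share

print(f"~{kw_per_unit:.0f} kW per graphics card (all-in, rough)")
print(f"Implied AMD equity value: ~US${implied_amd_equity / 1e9:.0f} billion")
```

The ~3 kW-per-card figure is consistent with modern accelerators once cooling and supporting infrastructure are included, and the ~US$340 billion implied equity value simply restates the warrant valuation at full exercise.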

These are just two examples of the increasingly complex web of relationships across the AI value chain – where suppliers are financing customers, customers are investing in suppliers, and even competitors are cross-licensing or sharing revenues. It’s an ecosystem defined by rapidly deepening interdependence.

Potential implications

While we believe AI demand is real and adoption is accelerating, history suggests that major technology buildouts often overshoot in their early phases before settling into equilibrium. The growing interdependence between suppliers and customers increases the system’s vulnerability if near-term demand falls short of high expectations. Should a key player stumble – for example, if OpenAI’s commercial revenues disappoint – the shock could ripple across its entire network of counterparties.

This interconnectedness doesn’t undermine the AI investment case, but it does raise the systemic risk profile if demand slows or data centre supply runs ahead of sustainable growth.

We remain constructive – for now

Despite the clear risks, there are strong reasons to remain constructive on AI. The technology is already driving measurable productivity gains – particularly in software engineering, design and data-heavy industries. Early adopters are embedding AI deeply into workflows, creating sticky, recurring demand for compute and cloud services. Each wave of efficiency improvement compounds: model costs decline, new use cases emerge, and adoption continues to broaden.

It’s also worth noting that today’s AI boom differs from the dot-com era in one key respect. Back then, much of the capital came from unprofitable startups and debt-fuelled telecoms chasing growth ahead of cash flow. In contrast, today’s core investors – Microsoft, Alphabet, Amazon, Meta, Nvidia and Oracle – have resilient revenue streams and robust balance sheets. This doesn’t eliminate the risk of overinvestment or capital misallocation, but it does mean the current cycle is built on a stronger foundation.

How we’re positioned

We maintain exposure across the AI value chain – from the ‘picks and shovels’ of chipmaking to the hyperscalers driving end-user adoption. Applied Materials, for example, supplies the precision tools used by TSMC and Samsung to manufacture the advanced chips that power high-performance graphics processing units.

Our holdings in Microsoft, Amazon, Alphabet and Tencent benefit on multiple fronts – as infrastructure builders expanding data centre capacity, and as platform leaders integrating AI to cut costs, boost productivity and unlock new revenue streams.

That said, we remain disciplined in our position sizing and mindful of valuation risk. Our exposures are calibrated to our assessment of intrinsic value, based on long-term cash flow potential rather than short-term sentiment.

A final word

The expansion of AI data centres marks one of the largest capital deployments in corporate history. The question is no longer whether the money will be spent, but whether the returns will justify the scale.

The circular flow of capital between suppliers and customers has created a powerful engine of growth – but also a potential feedback loop of dependency. Investors should approach this cycle with a blend of optimism and healthy scepticism. The profit potential may be extraordinary, but the path to realising it is unlikely to be linear.
