By Alison Savas, Investment Director, Antipodes
Nvidia continues to defy gravity.
Despite being one of 2023’s best performing global equities, up almost 240%, Nvidia’s share price has risen a further 130%[i] this calendar year at the time of writing. With a market cap of $2.8tr, it’s the third largest company in the MSCI ACWI behind Microsoft and Apple. Nvidia has had a phenomenal run thanks to its near monopoly over AI chips and the pricing power that comes with it.
The commercial implementation of large language models had been percolating for a number of years, but the release of ChatGPT in November 2022 catalysed an unprecedented acceleration in investment. Nvidia has been the primary beneficiary of this investment cycle, as building and operating AI models is both power and hardware intensive. The market has blessed the stock as the Ultimate AI Winner.
But we know that with any non-linear change, the landscape will shift over time.
We are currently in an arms race to build more capacity and train increasingly sophisticated models. But how sustainable is the current level of spending? AMD estimates that spending will continue to grow at 70% p.a., from $45b in 2023 to more than $400b by 2027. To support this level of AI hardware, the surrounding infrastructure also needs to be upgraded; we estimate an additional $350b will need to be invested alongside the $400b, taking total data centre investment to $750b by 2027. These numbers are staggering, and to justify this level of spend companies will need to find ways to monetise their models.
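As a rough sanity check on the arithmetic, the figures above hang together: compounding the quoted $45b 2023 base at roughly 70% p.a. over four years lands near AMD’s $400b 2027 estimate, and adding the estimated $350b of supporting infrastructure gives the $750b total. A minimal sketch, using only the article’s own numbers as assumptions:

```python
# Sanity check of the AI spend projections quoted above.
# Assumptions (taken from the article, not independent data):
#   - 2023 AI hardware spend: $45b, growing ~70% p.a. through 2027
#   - additional supporting infrastructure investment by 2027: $350b

base_2023 = 45            # $b, AI hardware spend in 2023
growth = 0.70             # ~70% per annum
years = 4                 # 2023 -> 2027

# Compound the hardware spend forward four years
hardware_2027 = base_2023 * (1 + growth) ** years

infra_2027 = 350          # $b, estimated surrounding infrastructure
total_2027 = hardware_2027 + infra_2027

print(f"Implied 2027 AI hardware spend: ${hardware_2027:.0f}b")
print(f"Implied total data centre investment: ${total_2027:.0f}b")
```

Compounding $45b at 70% p.a. gives roughly $376b by 2027, consistent with AMD’s “more than $400b” framing, and the combined figure sits near the $750b total cited above.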
As companies look to scale their AI models the spotlight is squarely shifting to reducing the total cost of compute.
Competition is building from Nvidia’s traditional semiconductor rivals, as well as from the cloud giants that are developing in-house accelerator chips (an alternative to GPUs, or graphics processing units) to support both training and inference workloads (the workloads from deploying, or using, an AI model). Further, researchers are working on ways to increase the algorithmic efficiency of AI models to get better use out of existing chips; startups are contemplating using alternative GPUs to run AI models once they are deployed into the real world; and smaller models are being deployed to run locally on devices, without the need for GPUs in data centres.
All these methods aim to reduce the cost of compute.
The point is that the phenomenal growth Nvidia has experienced is not guaranteed to continue.
The capability of these large language models is transformational, and there will be more than one winner from this cycle of innovation despite the way the market is behaving today.
At Antipodes, we’re looking for Pragmatic Value exposure to AI: stocks that can benefit from the AI investment cycle but are mispriced relative to their business resilience and growth profile. Two such ideas are Taiwan Semiconductor Manufacturing (TSMC) and Qualcomm (QCOM).
TSMC is the picks-and-shovels play on AI given its critical role in the supply chain. It is the largest and most sophisticated foundry in the world, with a near monopoly over the manufacture of the most advanced semiconductor chips. The GPU and accelerator chips currently deployed in data centres were more than likely manufactured by TSMC.
TSMC’s competitive strength is evidenced by Intel’s challenges scaling its foundry business, and Samsung Electronics’ inability to mass produce leading-edge chips at the same volume, quality and cost as TSMC.
The company has made the investments required to participate in this cycle of innovation, including building leading-edge fabrication plants in the US and Japan. Explosive demand for AI chips places TSMC in pole position to harvest those investments for growth and profitability. We see the company growing earnings 15-20% p.a., yet it’s priced at only 14x our 2026 earnings forecasts. Geopolitical risks do exist, but given TSMC’s critical role, both superpowers remain very dependent on the company.
Beyond first-order beneficiaries, we are also thinking about edge applications. Qualcomm is a global leader in low power compute and connectivity chips. Qualcomm’s expertise allows it to flex its creative muscle by designing AI chips for devices like phones and laptops. For example, some of Microsoft’s new Surface tablets and laptops will be able to run certain AI tasks locally on the device, powered by chips from Qualcomm[ii]. Running AI models locally results in lower cost (no data centre required), better security (sensitive information is not being sent to the cloud) and a better user experience from lower latency (avoids internet lag).
The company is also gaining share in new markets like smart glasses and connected cars.
Nvidia is today’s undisputed AI leader, but as with previous episodes of innovation, the landscape will shift.
With TSMC and Qualcomm we’re able to take exposure to AI at mid to high teens multiples versus 30x for the broader semiconductor complex. This is Pragmatic Value exposure to AI.
[i] Source: FactSet
[ii] Source: https://www.cnbc.com/2024/05/20/microsoft-qualcomm-ai-pcs-snapdragon-arm-processors.html