Current Landscape and Investment Perspective
Nvidia Corporation (NASDAQ:NVDA) (NEOE:NVDA:CA) was one of the most contentious and electrifying equities of 2023, and the debate shows no sign of abating in 2024. The juggernaut of Nvidia’s artificial intelligence (AI) dominance is pitted against a growing set of threats to its preeminent position.
In this analysis, we appraise the interplay of these forces, offering quantitative insights and up-to-date developments. Crucially, we set forth several valuation scenarios to help investors assess the risk/reward profile of an investment in Nvidia shares over a three-year horizon.
Our prognosis is that contenders will find it formidably difficult to dislodge Nvidia over the next 1-2 years, cementing the company’s role as the leading provider in the rapidly expanding accelerated computing sector. We believe this is not fully reflected in the current share price, paving the way for further significant appreciation in 2024 as analysts recalibrate their earnings estimates upward. Nevertheless, distinctive risk factors merit vigilant monitoring, such as a potential Chinese military intervention in Taiwan or renewed U.S. restrictions on chip exports to China.
Persistent Competitive Dynamics
Nvidia laid the groundwork for the era of accelerated computing well over a decade ago. The thrust of the company’s 2010 GTC (GPU Technology Conference) was the deployment of GPUs for general-purpose computing, with a particular emphasis on supercomputers. A slide from the 2010 presentation by Ian Buck, then Senior Director of GPU Computing Software and today General Manager of Nvidia’s Hyperscale and HPC Computing Business, offers a compelling artifact of this vision.
Even then, Nvidia envisaged the future of computing pivoting around GPUs, not CPUs, in response to the burgeoning demand for accelerated computing. Four years later, at the 2014 GTC, the spotlight was on big data analytics and machine learning, a theme accentuated by CEO Jensen Huang in his keynote address.
Building on these early insights, Nvidia has methodically broadened its GPU portfolio for accelerated computing, attaining a substantial first-mover edge. This strategic thrust culminated in the Ampere and Hopper GPU microarchitectures, officially unveiled in May 2020 and March 2022, respectively. The A100, H100, and H200 GPUs predicated on these architectures dominated the burgeoning data center GPU market in 2023, propelled by a wave of nascent AI and ML initiatives, and carried Nvidia to a stratospheric market share of roughly 90% for the year. Buoyed by the triumph of its GPUs, Nvidia also ventured into the multibillion-dollar networking business in 2023, a topic we return to later.
In addition to its state-of-the-art GPUs and networking solutions (the hardware layer), which deliver superlative performance for large language model training and inference, Nvidia enjoys another pivotal competitive edge in CUDA (Compute Unified Device Architecture), its proprietary programming model for harnessing its GPUs (the software layer).
Efficiently exploiting the parallel processing capabilities of Nvidia GPUs requires accessing them through a GPU programming platform. Unlike the more arduous, developer-intensive route of general, open models like OpenCL, CUDA grants low-level hardware access while hiding intricate details from developers behind direct APIs. This simplification, supplemented by task-specific CUDA libraries (such as cuDNN for deep learning), has been the focus of substantial investment by Nvidia.
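To make the programming model concrete, the sketch below writes a trivial GPU kernel through Numba’s Python bindings for CUDA; production kernels are more commonly written in CUDA C/C++, and the array sizes and launch parameters here are illustrative assumptions rather than Nvidia-recommended values. Each thread handles one array element, with CUDA’s grid/block abstraction hiding the hardware scheduling details.

```python
# A minimal sketch of the CUDA programming model, using Numba's CUDA
# bindings for brevity (real-world kernels are typically CUDA C/C++).
# Assumes a CUDA-capable GPU plus the numba and numpy packages.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # Each GPU thread computes one element; cuda.grid(1) returns the
    # thread's global index across the whole launch grid.
    i = cuda.grid(1)
    if i < out.size:  # guard threads that fall beyond the array bounds
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

# Launch configuration: CUDA exposes parallelism as a grid of thread
# blocks; the developer never touches the GPU scheduler directly.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)
```

The same task in raw OpenCL would additionally require explicit platform, device, context, and command-queue management, which is precisely the developer overhead described above.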
Introduced 16 years ago, CUDA now occupies the commanding heights of the AI software ecosystem, much as the A100, H100, and H200 GPUs reign over the hardware ecosystem. Academic studies in AI overwhelmingly employ CUDA acceleration on Nvidia GPUs, and corporations developing AI-powered solutions predominantly build on CUDA as well. Even if rivals devised viable GPU alternatives, replicating a software ecosystem like CUDA would entail a protracted incubation period. For CFOs and CTOs weighing investments in AI infrastructure, the relevant cost is not solely the purchase price of Nvidia GPUs but also the attendant developer costs and the level of support for the chosen hardware and software stack. Herein lies Nvidia’s competitive moat: while the initial outlay for Nvidia GPUs may be steep, the surrounding ecosystem yields substantial cost efficiencies, making for a compelling sales proposition.
Turning to emerging competition, the preeminent independent challenger to Nvidia in data center GPUs is Advanced Micro Devices, Inc. (AMD), whose MI300 product family began shipping in Q4 2023. The standalone MI300X accelerator and the MI300A accelerated processing unit represent the most direct challenge yet to Nvidia’s AI hegemony.
This hardware ensemble is bolstered by AMD’s open-source ROCm software stack (the CUDA equivalent), officially launched in 2016. In recent years, ROCm has made significant inroads with popular deep learning frameworks like PyTorch and TensorFlow, surmounting a pivotal impediment to AMD GPUs gaining substantial market traction. In 2021, PyTorch added native AMD GPU support, meaning code written against its CUDA device API can be executed on AMD hardware.
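The practical upshot is that framework-level code is largely hardware-agnostic. The sketch below, an illustrative example rather than AMD or PyTorch sample code, shows how a ROCm build of PyTorch exposes AMD GPUs through the same torch.cuda namespace (via HIP), so the identical script runs on either vendor’s hardware.

```python
# A brief sketch of device-agnostic PyTorch code. On a ROCm build,
# AMD GPUs surface through the torch.cuda namespace (via HIP), so
# this script runs unchanged on Nvidia or AMD hardware. Layer sizes
# are illustrative; assumes the torch package is installed.
import torch

# "cuda" resolves to an Nvidia GPU on a CUDA build and to an AMD GPU
# on a ROCm build; otherwise the script falls back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(32, 1024, device=device)
y = model(x)  # the matmul dispatches to the vendor's GPU BLAS library
print(y.shape, device)
```

This portability is exactly why framework support matters so much for AMD: once PyTorch or TensorFlow runs natively on ROCm, the switching cost for a large share of AI workloads drops from a code rewrite to a hardware swap.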