Nvidia's (NVDA -1.55%) lead in artificial intelligence (AI) chips has fueled eye-popping financial growth this year and supercharged the chipmaker's stock, with shares rising more than 230% in 2023 as of this writing. Better still, the company is taking steps to ensure it remains the dominant force in this niche next year as well.
The company's H100 data center processor has become the go-to chip for organizations and governments looking to train AI models. The waiting period for this $40,000 chip reportedly runs into months, which is one reason why Nvidia controls a whopping 80% of the AI chip market. Nvidia is now looking to build on the success of the H100 with an updated H200 processor, which is set to ship to customers in the second quarter of 2024.
Let’s see how this updated processor could help Nvidia maintain its hegemony in AI chips and give the stock a nice boost next year.
Nvidia’s new chip is significantly faster
The H200 is based on the same Hopper architecture that powers the flagship H100 processor. However, Nvidia says that this is the first AI graphics processing unit (GPU) that comes equipped with HBM3e, a high-capacity, high-bandwidth memory (HBM) that’s purpose-built for accelerating AI workloads.
More specifically, the H200 is powered by 141 GB (gigabytes) of HBM3e memory. That’s a significant upgrade over the 80 GB of HBM3 available on the H100. This new generation of HBM enables the H200 processor to deliver 1.4 times higher memory bandwidth than the H100 at 4.8 terabytes per second, along with 1.8 times more memory capacity.
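A quick sanity check of the quoted memory figures, using only the numbers cited above (the H100's own bandwidth isn't stated in the article, so it is derived here from the 1.4x claim):

```python
# Check the H200 vs. H100 memory figures quoted above.
h200_memory_gb = 141        # HBM3e capacity on the H200
h100_memory_gb = 80         # HBM3 capacity on the H100
h200_bandwidth_tbps = 4.8   # H200 memory bandwidth, terabytes per second

capacity_ratio = h200_memory_gb / h100_memory_gb
print(f"Memory capacity ratio: {capacity_ratio:.2f}x")  # ~1.76x, matching the quoted 1.8x

# Working backwards from the quoted 1.4x bandwidth gain gives the
# implied H100 memory bandwidth:
implied_h100_bandwidth = h200_bandwidth_tbps / 1.4
print(f"Implied H100 bandwidth: {implied_h100_bandwidth:.2f} TB/s")  # ~3.43 TB/s
```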
Nvidia pointed out in a press release that the faster and larger memory will "fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads." What's more, the H200 can also be paired with Nvidia's GH200 Grace Hopper Superchip, which combines a central processing unit (CPU) and a GPU on a single platform. Additionally, the H200 is compatible with the server systems that currently run the H100, which means customers won't have to alter their existing server setups; they can simply plug the new chip into them.
It is worth noting that the GH200 is already equipped with 282 GB of HBM3e, so customers who pair it with the H200 GPU can get access to massive computational power to train large language models (LLMs). Even better, Nvidia points out that the H200 has significantly higher AI inference capabilities than the H100. The company claims that the new chip is "nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100."
A faster inference speed means that LLMs will be able to generate answers to queries faster. Not surprisingly, Nvidia has already received orders for the H200 from multiple cloud service providers (CSPs) that are looking to stay ahead in the AI game. Amazon Web Services, Alphabet's Google Cloud, Oracle Cloud Infrastructure, and Microsoft Azure will start deploying cloud instances based on the H200 next year.
Given that Nvidia's existing H100 processor reportedly costs between $25,000 and $40,000 depending on the configuration, the H200 is likely to be priced higher thanks to its improved specs. That should further strengthen Nvidia's already solid pricing power in the market for AI chips.
What about the supply?
We have already seen that the current-generation H100 processors are supply constrained, with customers reportedly waiting a long time to get their hands on these chips. However, Nvidia is reportedly working to significantly increase supply over the course of 2024. According to the Financial Times, Nvidia is aiming to increase output of the H100 to as much as 2 million units, up from this year's estimated production of half a million units.
Additionally, Nvidia spokesperson Kristin Uchiyama told technology news website The Verge that production of the H200 isn't going to affect the output of the H100. Uchiyama added that Nvidia will continue to add supply through 2024 while also securing long-term supply for its chips. All this indicates that Nvidia's data center revenue could continue to increase at a tremendous pace in 2024.
Nvidia could set new records in its data center business
The company's data center revenue was up a whopping 171% year over year in the second quarter of fiscal 2024 (for the three months ended July 30, 2023) to a record $10.3 billion. Nvidia's overall revenue was up 101% year over year to $13.5 billion. The company expects overall revenue to jump about 171% year over year to $16 billion in the fiscal third quarter, an even faster pace than in Q2, and the data center business is likely to grow faster still.
The data center business produced 76% of Nvidia's revenue in fiscal Q2. If that percentage holds in Q3, Nvidia's revenue forecast of $16 billion implies just over $12 billion from this segment. Analysts expect the company to post $17.7 billion in revenue for the fourth quarter of the fiscal year, a 193% year-over-year jump, so the data center business could produce about $13.5 billion in that quarter if it continues to account for roughly three-fourths of Nvidia's top line.
Adding the company's data center revenue from the first six months of the fiscal year to the projections for the final two quarters suggests Nvidia could generate roughly $40 billion from this segment in fiscal 2024. That would be a massive increase of 2.7 times over the $15 billion in data center revenue Nvidia reported for fiscal 2023.
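The projection above can be reproduced with simple arithmetic. Note one assumption: the fiscal Q1 data center figure (about $4.28 billion) comes from Nvidia's reported results and isn't quoted in this article; everything else uses the numbers cited here.

```python
# Back-of-the-envelope fiscal 2024 data center revenue projection.
q1_dc = 4.28              # reported Q1 FY2024 data center revenue ($B); assumption, not quoted above
q2_dc = 10.3              # reported Q2 FY2024 data center revenue ($B)
q3_total_guidance = 16.0  # Nvidia's Q3 FY2024 total revenue guidance ($B)
q4_analyst_estimate = 17.7  # analyst consensus for Q4 FY2024 total revenue ($B)
dc_share = 0.76           # data center's share of revenue in Q2

q3_dc = q3_total_guidance * dc_share    # ~$12.2B
q4_dc = q4_analyst_estimate * dc_share  # ~$13.5B
fy2024_dc = q1_dc + q2_dc + q3_dc + q4_dc

print(f"Projected FY2024 data center revenue: ${fy2024_dc:.1f}B")  # ~$40B
print(f"Growth vs. FY2023's $15B: {fy2024_dc / 15.0:.1f}x")        # ~2.7x
```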
Nvidia is working with its suppliers to substantially increase output of its AI chips in 2024, and the more powerful H200 that goes on sale next year could command a higher price. Given both factors, it won't be surprising to see the data center business multiply once again in the next fiscal year.
For instance, a 2.5x jump in Nvidia’s data center business in fiscal 2025 (which will begin in January next year), thanks to a combination of higher shipments and improved pricing, would send its revenue from this segment to $100 billion. That would enable Nvidia to crush analysts’ estimates of $82.7 billion in total revenue for fiscal 2025 and could help this hot AI stock maintain its outstanding run on the market in the New Year.
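The hypothetical scenario above works out as follows, using the article's roughly $40 billion fiscal 2024 data center projection as the base:

```python
# Hypothetical FY2025 scenario: a 2.5x jump on the ~$40B FY2024 base.
fy2024_dc = 40.0             # projected FY2024 data center revenue ($B)
growth_multiple = 2.5        # hypothetical FY2025 growth multiple
analyst_fy2025_total = 82.7  # analyst estimate for FY2025 TOTAL revenue ($B)

fy2025_dc = fy2024_dc * growth_multiple
print(f"FY2025 data center revenue in this scenario: ${fy2025_dc:.0f}B")  # $100B
# Data center alone would exceed the analyst estimate for all of Nvidia's revenue:
print(f"Excess over total-revenue estimate: ${fy2025_dc - analyst_fy2025_total:.1f}B")  # $17.3B
```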
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Microsoft, Nvidia, and Oracle. The Motley Fool has a disclosure policy.