Bulletproof Nvidia showed an unusual bout of weakness this past month following a report from The Information that Nvidia’s new AI chips are delayed. The report asserts that Nvidia’s upcoming artificial intelligence chips will be “delayed by three months or more due to design flaws,” resulting in a final flush of selling that left the stock down (-15%) in seven days.

According to the report, which was based on two anonymous sources, “if the upcoming AI chips, known as the B100, B200 and GB200, are delayed three months or more, it may prevent some customers from operating large clusters of the chips in their data centers in the first quarter of 2025, as they had planned.” This statement sent the market into a panic, as it implies all three Blackwell SKUs will be delayed into the June quarter, given that a three-month delay may prevent large clusters of Blackwell from being operable in the first quarter.

It’s strange then, to say the least, that according to two of Nvidia’s closest supply partners, there is evidence the GB200s will initially ship in Q4, with production volume expected to increase in Q1.

A third supplier provides a read-through that the fab producing the chips is not seeing any material impact. This is important, as The Information also asserts the machines fabricating Blackwell GPUs are sitting idle. Per the report: “it is highly unusual to uncover significant design flaws right before mass production. Chip designers typically work with chip makers like TSMC to conduct multiple production test runs and simulations to ensure the viability of the product and a smooth manufacturing process before taking large orders from customers. It’s also uncommon for TSMC, the world’s largest chipmaker, to halt its production lines and go back to the drawing board with a high-profile product that’s so close to mass production, according to two TSMC employees. TSMC has freed up machine capacity in anticipation of the mass production of GB200s but will have to let its machinery sit idle until the snags are fixed.”

The quote above implies the issues were entirely unforeseen, which may not be the case. My firm has covered Nvidia management’s statements quite closely since I first wrote on the AI thesis in 2018, and management has been quite clear that CoWoS-L packaging for Blackwell would require more time for testing than previous generations. I’ve dug up some of this commentary for you below.

Nvidia is delivering history’s most aggressive product road map on new fab processes. This is a “move fast, break things” problem which, in contrast to a strict design flaw, does not mean the architecture inherently has issues. Rather, the progression of this generation is testing the upper limits of manufacturing complexity. Blackwell with CoWoS-L packaging seeks to increase yields by circumventing a monolithic silicon interposer, instead using a higher-yielding interposer to package the processing and memory components seamlessly together. The result will be to break ground on unprecedented performance gains for memory-intensive tasks.

These nuances matter for tech investors. Around this time, on August 2nd, my firm took the opportunity to buy our last Nvidia tranche at $105.73 in an effort to catch what we believe will be about 25% to 50% upside before the price tops out.

We also look more closely at supply chain commentary, as there is one supply chain partner in particular that has reported a mysteriously high level of growth in a segment that is tied to Blackwell. We covered this for our premium members the evening of the supplier’s earnings report on August 6th when Nvidia stock was bottoming at $105.

As a reminder, we don’t make calls on earnings, as many factors can affect the stock price. Instead, we present quality research so that investors are fully informed to make their own decisions. From there, we take this a step further and publish every single trade we make on our research site. In finance, full transparency is rare, yet through never-ending tenacity, my firm has offered up to 3,900% gains on Nvidia alone.

We continue this long-standing dedication to our readers in the analysis below.

TSMC Reports 23.6% MoM Growth in July, Highest in 2024

TSMC releases monthly revenue numbers, which would quickly reflect a highly anticipated product leaving machines idle. Instead, July revenue showed a sharp acceleration from declines in May and June to MoM growth of 23.6%, reaching NT$256.95 billion.

On a MoM/YoY basis, July reported the second largest growth this year:

TSMC’s MoM growth can be lumpy, yet July’s 38.3% YoY growth points to a positive start to the September quarter. The company guided for revenue of $22.4 billion to $23.2 billion, representing YoY growth of 31.9% at the midpoint.

Analyst consensus estimates are trending higher; typically, you would see estimates decline on news of a material delay. Analysts expect Q3 revenue to grow 38.1% YoY to $23.32 billion, up from the 32.5% growth expected in mid-June and the 32.1% growth expected in mid-May.
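For readers who want to verify the guidance math, below is a minimal sketch; the year-ago base is derived from the company’s stated 31.9% midpoint growth rather than a reported figure, so treat it as an approximation.

```python
# Back-of-the-envelope check on TSMC's Q3 guidance math.
# Guidance range and the stated midpoint growth are cited in the article;
# the year-ago base is derived from them, not a reported figure.

guide_low, guide_high = 22.4, 23.2     # Q3 revenue guidance, $B
midpoint = (guide_low + guide_high) / 2
stated_midpoint_growth = 0.319         # 31.9% YoY at the midpoint

implied_year_ago = midpoint / (1 + stated_midpoint_growth)
print(f"Guidance midpoint: ${midpoint:.1f}B")                    # $22.8B
print(f"Implied year-ago Q3 revenue: ~${implied_year_ago:.2f}B")

# Consensus cited above (differs slightly from IR figures on currency conversion)
consensus = 23.32                      # $B
print(f"Consensus vs. guidance midpoint: {consensus / midpoint - 1:+.1%}")  # ~+2.3%
```

The directional takeaway: consensus sits at the top end of the guided range rather than drifting lower, the opposite of what a material delay would produce.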

Note: The analyst estimates below differ slightly from the figures reported in the company’s IR due to currency conversion. However, we use the estimates below to understand the expected growth-rate trend.

TSMC offered positive commentary on its business and raised its outlook when it reported Q2 results last month. The company’s revenue grew 32.8% YoY to $20.82 billion, beating the midpoint guide of 27.6% growth, helped by strong AI demand.

On a QoQ basis, the chipmaker’s high-performance computing (HPC) revenue rose 28% QoQ to $10.8 billion and accounted for 52% of Q2 revenue, up from 46% in Q1. This is the first time HPC has crossed the 50% mark.

C.C. Wei, Chairman and CEO of the company, said in the Q2 earnings call, “Our business in the second quarter was supported by strong demand for our industry-leading 3-nanometer and 5-nanometer technologies, partially offset by continued smartphone seasonality.” There was a similar trend in Q1, when revenues were impacted by smartphone seasonality and offset by HPC revenue.

Wei also said that “over the past three months, we have observed strong AI and high-end smartphone related demand from our customers, as compared to three months ago, leading to increasing overall capacity utilization rate for our leading-edge 3-nanometer and 5-nanometer process technologies in the second half of 2024. Thus, we continue to expect 2024 to be a strong growth year for TSMC.”

Management raised the full-year guidance to “slightly above mid-20s percent in US dollar terms” from the earlier “increase by low to mid-20% in U.S. dollar terms.” Wei further added, “we have such high forecasted demand from AI related business.” Given TSMC has many high-profile customers, the HPC segment results alongside the CEO’s commentary help confirm the impact is coming from AI rather than mobile.

The I/O Fund built a leading AI portfolio beginning with Nvidia’s AI thesis in 2018, with up to 3,900% gains on Nvidia alone provided to our free readers. Premium members receive real-time trade alerts on NVDA and our entire portfolio, including two AI semiconductors we believe are poised for growth with allocations rivaling our NVDA holding.

TSMC’s Advanced Packaging

TSMC has limited CoWoS-L capacity to produce Blackwell chips. This is a problem all investors should get comfortable with as we head into 2025.

TSMC’s chip-on-wafer-on-substrate (CoWoS) architecture refers to the 3D stacking of memory and processor modules layer by layer to create chiplets. The architecture leverages through-silicon vias (TSVs) and micro-bumps for shorter interconnect lengths and reduced power consumption compared to 2D packaging.

There are three types of CoWoS architectures, which replaced multi-chip modules by scaling up the interposer area to fit multiple dies. Current CoWoS interposers reach up to 3.3X TSMC’s reticle limit, with the goal of building interposers that reach 8X the reticle limit by 2027. At the North American Technology Symposium earlier this year, TSMC stated it will reach a 5.5X reticle limit by 2025 for more than a 3.5X increase in compute power.

As transistor density increases, advanced packaging solutions help to alleviate bottlenecks by increasing interconnect density, which results in higher signal speed and processing power.

  • CoWoS-S: this is the most popular CoWoS architecture for GPUs already deployed, including Nvidia’s H100s, H200s and AMD’s MI300s. It uses silicon as the interposer material, at a higher cost than the RDL interposer used in CoWoS-R.
  • CoWoS-R: connects chips with redistribution layer (RDL) wiring as the interposer material and offers InFO technology as an upgrade for HBM memory and SoC integration.
  • InFO technology reduces the size of components in more powerful devices. By being Fan-Out (FO) instead of Fan-In, TSMC’s process can integrate multiple dies on top of each other with a common I/O connecting layer.
  • CoWoS-L: combines multiple local Si interconnects (LSI) into a reconstituted interposer (RI) that replaces the monolithic silicon interposer in CoWoS-S. By retaining the benefits of CoWoS-S, CoWoS-L offers strong system performance while avoiding the yield loss of a single large Si interposer.

Nvidia’s designs offer pure ingenuity; for example, the A100 offered sparsity and the H100 offered a Transformer Engine. We covered the importance of the Transformer Engine on our premium site six months prior to Hopper shipping, which led to entries as low as $10.85 when factoring in the stock split. Ultimately, Nvidia’s design ingenuity combined with TSMC’s process improvements defies Moore’s Law.

Because TSMC’s CoWoS-L requires more complexity and precision, it was already expected that the validation and testing process would be time-consuming. We had stated in the analysis Nvidia Q1 Earnings Preview: Blackwell and The $200B Data Center that “the advanced CoWoS packaging that is needed to combine logic system-on-chip (SoC) with high bandwidth will take longer, and thus, it’s expected that Blackwell will be able to fully ship by Q4 this year or Q1 next year. How management guides for this will be up to them, but commentary should be fairly informative by Q3 time frame.”

Per another source, TrendForce last April: “Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.”

From the horse’s mouth, Nvidia’s own management team stated during the GTC Financial Analyst Day in March that the very first systems will ship in Q4, but to expect constraints. In a roundabout way, the CEO told investors what to expect should this happen: customers will continue to build with H100s, H200s and any other supply they can get their hands on.

Atif Malik, Citigroup:

Hi. I am Atif Malik from Citigroup. I have a question for Colette. Colette in your slides, you talked about availability for the Blackwell platform later this year. Can you be more specific? Is that the October quarter or the January quarter? And then on the supply chain, readiness for the new products is the packaging, particularly on the B200 CoWoS-L and how you are getting your supply chain ready for the new products?

Colette Kress:

Yeah, so let me let me start with your second part of the question, talking about the supply-chain readiness. That’s something that we’ve been working well over a year getting ready for these new products coming to market. We feel so privileged to have the partners that work with us in developing out our supply chain. We’ve continued to work on resiliency and redundancy. But also, you’re right, moving into new areas, new areas of CoWoS, new areas of memory, and just a sheer volume of components and complexity of what we’re building. So that’s well on its way and will be here for when we are ready to launch our products. So there is also a part of our supply chain as we talked earlier today, talking about the partners that will help us with the liquid cooling and the additional partners that will be ready in terms of building out the full of the data center. So this work is a very important part to ease the planning and the processing to put in all of our Blackwell different configurations. Going back to your first part of the question, which is when do we think we’re going to come to market? Later this year, late this year, you will start to see our products come to market. Many of our customers that we have already spoken with talked about the designs, talked about the specs, have provided us their demand desires. And that has been very helpful for us to begin our supply chain work, to begin our volumes and what we’re going to do. It’s very true though that on the onset of the very first one coming to market, there might be constraints until we can meet some of the demand that’s put in front of us. Hope that answers the question.

Jensen Huang:

Yeah, That’s right. And just remember that Hopper and Blackwell, they’re used for people’s operations and people need to operate today. And the demand is so great for Hoppers. They — most of our customers have known about Blackwell now for some time, just so you know. Okay, so they’ve known about Blackwell. They’ve known about the schedule. They’ve known about the capabilities for some time. As soon as possible, we try to let people know so they can plan their data centers and notice the Hopper demand doesn’t change. And the reason for that is they have an operations they have to serve. They have customers today and they have to run the business today, not next year.

—End Quote

Recently, Nvidia VP Ian Buck stated at the BofA Global Technology Conference in June 2024: “So we stated recently in our earnings that Blackwell has now entered into production builds. We started our production.

The samples are now going — will go out this quarter, and we’re ramping for production outs later this year. And then everything — that always looks like a hockey stick, you start small and you go pretty quick to the right. And the challenge, of course, is with every new technology transition comes — the value is so high, there’s always a mix of a challenge of supply and demand. We experienced that certainly with Hopper. And there’ll be similar kinds of supply/demand constraints in the on-ramp of Blackwell certainly at the end of this year and going into next year.”

Taking this full circle, let’s go back to what TSMC said in the most recent earnings call about CoWoS capacity:

Management stated in the earnings call Q&A that the supply is expected to continue to be tight next year, and they are working with OSAT (Outsourced Semiconductor Assembly and Test) partners to increase production capacity.

Gokul Hariharan:

“How do you think about supply demand balance for AI accelerator and CoWoS advanced packaging capacity? And I think in your symposium you talked about 60% CAGR, component growth for CoWoS capacity in the next four, five years. So, could you talk a little bit about how much capacity for CoWoS would you be planning to build next year as well?”

C. C. Wei:

“Gokul, I also try to reach the supply and demand balance, but I cannot today. The demand is so high, I have to work very hard to meet my customers’ demand. We continue to increase, I hope sometime in 2025 or 2026 I can reach the balance. You’re talking about the CAGR or those kind of increase of the CoWoS capacity. Now it’s out of my mind. We continue to increase whatever, wherever, whenever I can. Okay. The supply continues to be very tight, all the way through probably 2025 and I hope it can be eased in 2026. That’s today’s situation.”

Gokul Hariharan:

“Any thoughts on next year capacity? Are you going to double your capacity again next year for CoWoS?”

C. C. Wei:

“The last time I said that, this year I doubled it, right? More than double. Okay. So next year, if I say double, probably I will answer your question again next year and say more than double. We are working very hard, as I said. Wherever we can, whenever we can.”

—End Quote

My notes: There were many opportunities for TSMC to report a material impact from idle machines – the quarterly numbers ending in June, the July monthly numbers, and the CEO’s commentary during the earnings call. Instead, that commentary establishes the opposite: capacity is primarily the issue (rather than a dire flaw halting production), and the company is working hard to increase that capacity.

Earlier this month, TrendForce, citing MoneyDJ’s report, estimated that CoWoS capacity is in short supply at 35,000 to 40,000 wafers per month this year. With outsourced capacity, 2025 production could be over 65,000 wafers per month.

According to the report, TSMC will assign orders for the initial stage of CoWoS packaging, Chip-on-Wafer (CoW), to OSAT partner SPIL. This is the first time the company is outsourcing this process: demand is high, and previously the WoS (Wafer-on-Substrate) process was outsourced while the higher-margin CoW process was kept in-house.

According to DigiTimes, the company is expected to have CoWoS production of 60,000 wafers per month in 2025, with a further increase to 70,000 to 80,000 in 2026 after the company’s recent acquisition of an Innolux fab. The 2025 production capacity would represent a 300% increase from 15,000 wafers per month at the end of 2023.
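To put those capacity figures in one place, here is a quick sketch of the implied ramp (all figures as cited from TrendForce and DigiTimes above):

```python
# CoWoS monthly wafer capacity, as cited from TrendForce/DigiTimes reports.
end_2023 = 15_000
est_2025 = 60_000
est_2026_low, est_2026_high = 70_000, 80_000

print(f"2025 vs. end-2023: {est_2025 / end_2023 - 1:.0%} increase")    # 300%
print(f"2026 vs. end-2023: {est_2026_low / end_2023 - 1:.0%} to "
      f"{est_2026_high / end_2023 - 1:.0%} increase")                  # 367% to 433%
```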

Super Micro: The Near-Perfect Proxy for Nvidia

Super Micro stock surged alongside Nvidia over the past year and a half, with returns of 659% compared to Nvidia’s 787.8%. Super Micro is a leading partner in building AI systems with Hopper GPUs, leveraging air-cooled and liquid-cooled thermal designs for AI accelerators to grow upwards of 5X faster than the industry average for subsystems and server systems.

The Hopper generation is primarily air cooled. However, the percentage of air-cooled systems shipped versus liquid-cooled systems will change dramatically with Blackwell.

In June, we wrote an analysis, AI Power Consumption: Rapidly Becoming Mission Critical, which stated that as the industry progresses toward a million-GPU scale, more emphasis falls on future generations of AI accelerators to focus on power consumption and efficiency while delivering increasing levels of compute. Data centers are expected to adopt liquid cooling technologies to meet the cooling requirements of these increasingly large GPU clusters.

Specifically, it’s the Blackwell architecture that kicks off the need for liquid cooling. Most servers today are air-cooled, yet AI necessitates a shift to liquid cooling: H100 GPUs are already at 700W of power, and Blackwell GPUs will see an increase of roughly 43% to 1,000W or higher. The B200 doubles the transistor count compared to the H100 and provides 20 petaflops of AI performance versus the H100’s 4 petaflops. The resulting 3X leap in training performance and 15X leap in inference performance is shifting the focus to liquid cooling, as 1,000 watts is too hot to air cool.

The B200 systems and chipsets will be the first release to be primarily liquid cooled, according to Dell, which competes with Super Micro in building AI servers. Note that per Dell’s statement in March, the B200s are due in early 2025.

Tom’s Hardware has also stated that direct liquid cooling will start with Blackwell: “Even Nvidia’s high-end H100 and H200 graphics cards work well enough under air cooling, so the impetus to switch to liquid hasn’t been that great. However, as Nvidia’s upcoming Blackwell GPUs are said by Dell to consume up to 1,000 watts, liquid cooling may be required.”

VP Ian Buck, at the BofA Global Technology Conference in June 2024, also stated: “The opportunity here is to help [customers] get the maximum performance through a fixed megawatt data center and at the best possible cost and optimized for cost. By doing 72 GPUs in a single rack, we need to move to liquid cooling. We want to make sure we had the higher density, higher power rack, but the benefit is that we can do all 72 in one NVLink domain.”

Super Micro is a proxy for Nvidia, as its growth has been in lock-step with the AI GPU juggernaut since the launch of the H100 nearly two years ago. Most importantly, we have a key metric from Super Micro that is specifically tied to Nvidia’s Blackwell launch: the ramp of liquid cooling.

Liquid cooling has been around for 30 years, yet the H100s and H200s launched with air-cooled systems. Today, Super Micro builds HGX AI supercomputers with racks that support 64 H100s, H200s or B200s with direct liquid cooling (DLC), saving up to 40% of energy costs. Although H100s and H200s have the option for DLC, the CFO of Super Micro has stated that as GPUs and CPUs run over 1,000 watts, the benefits of liquid cooling are “going to start to become painfully obvious.”

Per the CEO in last month’s earnings call, it was the months of June and July specifically when DLC started to ramp: “I mean as you know liquid cooling have been in the market for 30 years and market share compared with overall datacenter size always small, less than 1% or close to 1%, I would have to say. But just June and July two months alone, we shipped more than 1,000 racks to the market. And if you calculate 1,000 racks, AI rack is about more than 15% on a global datacenter new deployment.”

Next quarter is guided to mark the highest quarterly growth in Super Micro’s history at 206.6%, an acceleration from the previous quarter’s growth of 144%. This is 590 bps higher than Super Micro’s previous record quarter of 200.7% growth.
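Since growth rates are being compared here, a one-liner clarifies the basis-point math (growth figures as reported above):

```python
# Super Micro YoY revenue growth rates from the article, in percentage points.
guided_next_q = 206.6
previous_record = 200.7
prior_quarter = 144.0

# 1 percentage point = 100 basis points
print(f"vs. record quarter: {(guided_next_q - previous_record) * 100:,.0f} bps")  # 590 bps
print(f"vs. prior quarter:  {(guided_next_q - prior_quarter) * 100:,.0f} bps")    # 6,260 bps
```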

Direct Liquid Cooling is Surging

Considering that Blackwell is a clear catalyst for direct liquid cooling, it is odd, to say the least, that Super Micro reported on August 6th that demand for direct liquid cooling is surging, a mere four days after The Information’s dire report.

According to Super Micro’s earnings report, the company’s direct liquid cooling capacity grew 50% month-over-month, from 1,000 racks per month to 1,500 racks per month. By year end, the company will grow to 3,000 racks per month, resulting in 200% growth in six months.
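The growth math is straightforward; a minimal sketch using the rack figures above:

```python
# Direct liquid cooling (DLC) rack capacity per month, per Super Micro's report.
june_capacity = 1_000
current_capacity = 1_500
year_end_capacity = 3_000

print(f"MoM growth: {current_capacity / june_capacity - 1:.0%}")           # 50%
print(f"Growth by year end: {year_end_capacity / june_capacity - 1:.0%}")  # 200%
```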

This represents an increase from Super Micro’s original estimate that the company would end the year at 1,500 racks. The CFO stated: “But even we were surprised by the acceleration that we saw in the liquid-cooled rack market.”

Super Micro offers liquid-cooled H200 HGX systems, yet the H200s run up to 700W, not the 1,000W that necessitates DLC. I have yet to see anywhere that the H200 was expected to drive overnight demand for DLC; rather, it’s been expected for some time that Blackwell would be the catalyst for the DLC market.

To put the sudden surge in context, Super Micro stated: “I believe for June and July in last next two months we may ship at least 70% to 80% of liquid cooling compared with all the liquid cooling in the world. So for liquid cooling, we have at least 70% to 80% market share” – the read-through is that the DLC market skyrocketed very suddenly in the last two months.

Super Micro’s report communicates that servers requiring direct liquid cooling soared suddenly as of June and July, from 1% of all new servers shipped to 15% at 1,000 racks. Management is also communicating that this is expected to continue soaring to 3,000 racks by the end of this year, reaching up to 30% of servers shipped.

Yet if Blackwell is materially delayed, how can liquid cooling be skyrocketing?

A Few Theories I’m Working With:

Theory #1: The Delay Was Accounted For

Per the GTC commentary from management, the very first GB200 systems will ship in Q4 and will ramp from there, with the understanding that Blackwell will be capacity constrained. Financial analysts knew CoWoS-L could present delays, and the April press release from TrendForce clearly describes this, stating CoWoS-L is “making the validation and testing process time-consuming.”

Nvidia reiterated the timeline after The Information’s report, with Nvidia spokesperson John Rizzo stating to The Verge that Nvidia expects production of the [B200] chip “to ramp in 2H,” adding, “Beyond that, we don’t comment on rumors.”

The delay may have already been accounted for; as discussed, it’s a new packaging process and a more complex chip, with many statements on record that it would require additional testing. This would help explain why TSMC and Super Micro are raising and beating estimates driven by their AI segments, as it implies their guidance already reflected the delay.

Theory #2: The GB200 NVL36 and NVL72 are Hogging CoWoS-L Capacity

My firm has been reporting on X for months that GB200 demand is surging. For example, UBS said that it believes “demand momentum for $NVDA Blackwell rack-scale systems remains exceedingly robust” and that the “order pipeline for (Nvidia’s) NVL72/36 systems is materially larger than just two months ago.”

Source: Beth Kindig’s X Account

According to reports from Wccftech: “Team Green is expected to ship 60,000 to 70,000 units of NVIDIA’s GB200 AI servers, and given that one server is reported to cost around $2 million to $3 million per unit, this means that Team Green will bag in around a whopping $210 billion from just Blackwell servers alone, that too in a year.”

The weight of that report cannot be overstated, as it implies 26% upside to next fiscal year’s estimates based on one SKU alone. In fact, this one SKU is expected to drive 9% more revenue than analysts currently estimate two years out for FY2027.
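Here is a back-of-the-envelope sketch of that claim. The $167 billion consensus for next fiscal year is cited in the conclusion below; the FY2027 figure is derived from the 9% statement above, not a reported estimate:

```python
# GB200 server revenue implied by the Wccftech report, vs. analyst consensus.
units_low, units_high = 60_000, 70_000   # GB200 AI servers expected to ship
asp_low, asp_high = 2e6, 3e6             # reported cost per server, $

rev_low = units_low * asp_low / 1e9      # $120B
rev_high = units_high * asp_high / 1e9   # $210B
print(f"Implied GB200 revenue: ${rev_low:.0f}B to ${rev_high:.0f}B")

next_fy_consensus = 167.0                # $B, next fiscal year (see conclusion)
print(f"Upside at the high end: {rev_high / next_fy_consensus - 1:.0%}")   # ~26%

# The 9% statement implies a FY2027 consensus of roughly $210B / 1.09.
implied_fy2027 = rev_high / 1.09
print(f"Implied FY2027 consensus: ~${implied_fy2027:.0f}B")                # ~$193B
```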

Theoretically, if the GB200 systems are seeing enough demand to exceed FY2027 estimates (per the preliminary data), Nvidia would be wise to cancel the B100s and B200s built on CoWoS-L capacity entirely and switch these SKUs back to CoWoS-S. There’s a write-up from SemiAnalysis on new SKUs based on CoWoS-S capacity and air cooling here.

Here’s why the GB200 can drive this kind of revenue so quickly:

  • Nvidia’s GB200, featuring one Grace CPU and two B200 GPUs, is estimated to sell for ~$60,000 to $70,000.
  • In the NVL36 configuration, featuring 18 GB200s (18 Grace CPUs and 36 B200s), each GB200 would be selling for $100,000 at the current estimated ASP of $1.8 million.
  • In the NVL72 configuration, featuring 36 GB200s (36 Grace CPUs and 72 B200s), each GB200 would be selling for ~$83,333 at the current estimated ASP of $3 million.

In this case, Nvidia would theoretically prioritize the GB200 NVL36 and NVL72, as the price points are quite high. The two rack configurations carry a ~27% to ~54% higher selling price per GB200 (sketched below), making it understandable why Nvidia would focus on the racks given production constraints on CoWoS capacity.
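A sketch of the per-GB200 price math from the list above; the premiums below use the midpoint of the ~$60,000 to $70,000 standalone estimate as the baseline, so the exact figures shift with whichever end of that range you assume:

```python
# Per-GB200 pricing across rack configurations, using the figures cited above.
standalone_mid = 65_000                    # midpoint of the $60k-$70k estimate

nvl36_asp, nvl36_count = 1.8e6, 18         # NVL36: 18 GB200s per rack
nvl72_asp, nvl72_count = 3.0e6, 36         # NVL72: 36 GB200s per rack

per_nvl36 = nvl36_asp / nvl36_count        # $100,000
per_nvl72 = nvl72_asp / nvl72_count        # ~$83,333

print(f"NVL36 per GB200: ${per_nvl36:,.0f} "
      f"({per_nvl36 / standalone_mid - 1:+.0%} vs. standalone)")   # +54%
print(f"NVL72 per GB200: ${per_nvl72:,.0f} "
      f"({per_nvl72 / standalone_mid - 1:+.0%} vs. standalone)")   # +28%
```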

Ultimately, reconfiguring lower-priced SKUs will not matter to Wall Street if it’s done on the back of outsized demand for GB200s. This theory hinges on Super Micro’s report, as it’s the sudden surge in direct liquid cooling sales that is truly mysterious. From my vantage point today, it feels nearly impossible for Super Micro to report this level of surge in direct liquid cooling (from 1% of systems in May, to 15% of systems today, to 30% of systems by the end of the year) while every Blackwell SKU suffers a material, unforeseen delay.

If the B100s and B200s are pushed out in favor of the GB200 NVLs, then next year will be game-on for Nvidia investors, as these systems sell at a high multiple. Keep an eye out for where bad news now (some GPUs are canceled) eventually becomes good news over the next four quarters (in favor of systems priced 36X to 72X higher).

Foxconn Earnings Call Commentary:

Briefly, I’d like to mention Foxconn has recently stated in an earnings call: “We are on track to develop and prepare the manufacturing of the new [Nvidia] AI server to start shipping in small volumes in the last quarter of 2024, and increase the production volume in the first quarter of next year.”

The company also indirectly debunked The Information’s assertion that “it is highly unusual to uncover significant design flaws right before mass production,” with Foxconn stating the opposite: “It is normal to dynamically adjust [shipment schedules] when the specs and technologies of a new product are largely upgraded. Whether the shipping schedule changes or not, Foxconn will be the first supplier to ship the first batch of GB200,” Wu said.

Note that Foxconn specifically calls out shipping the GB200 rather than the B100, which was due to ship first. Hopefully, by now it’s clear to our readers that should the B100 be bumped, this could have a bullish read-through if Nvidia re-allocates CoWoS-L capacity to the higher-priced GB200 systems.

The H200 is a Force of Its Own

To further the conversation on why a delay in the B100s and B200s can be absorbed, it’s worth taking a moment to discuss the H200.

The H200 is shipping now and is a force of its own with 141 GB of HBM3e memory, up from 80 GB of HBM3 memory in the H100. The GH200 superchip is also equipped with HBM3e and is shipping this quarter.

By significantly boosting memory per GPU – up ~75% from 80 GB of HBM3 in the H100s – the H200 allows Nvidia’s customers to address memory-constrained workloads, such as workloads requiring the largest LLMs, which were built and trained on the H100s. This will fill the gap between shipments of the H100 and Blackwell by easing one critical bottleneck to AI training – memory bandwidth.

The question I’ve seen raised time and again by investors is why Nvidia’s GPU demand is this durable in a typically cyclical industry. The answer lies within the H200 and Blackwell. As VP Ian Buck explained at the BofA Global Technology Conference in June, “From the end of ’22 to today, I think we’ve improved Hopper’s inference performance by 3x. So we’re continuously making the infrastructure more efficient, faster and more usable. And that gives the customers who have to now buy at a faster clip, confidence that the infrastructure that they’ve invested in is going to continue to return on value and does so.”

More importantly, Buck emphasized that hyperscalers “can retire their old legacy systems that maybe they’ve just left, not upgraded. They can accelerate the decommission of the older CPU infrastructure.” Essentially, Nvidia’s customers can free up megawatts of power and hundreds of racks (and save millions, with performance and efficiency gains providing lower TCO) by decommissioning prior generations of GPUs or CPU-based servers, and this goes for both the H200 and Blackwell. Customers can retire older GPU generations such as Volta and Ampere and refit those racks with H200s while waiting for Blackwell chips to build new infrastructure, allowing them to benefit from the memory upgrades mid-cycle ahead of the Blackwell upgrade.

On the HBM side, Micron, SK Hynix and Samsung are locked in deep competition for supply, with SK Hynix serving as the primary supplier for the H100 and Micron the first to announce itself as a supplier for the H200. Micron has said it is sold out of HBM3e supply through 2025, with preparations and discussions already underway for HBM4 and HBM4e in 2026 and beyond. SK Hynix also revealed in May of this year that it was nearly sold out of HBM through 2025. Samsung, on the other hand, has reportedly struggled for some time to validate its HBM3e chips with Nvidia due to power consumption and heat issues.

We’re still seeing no signs of slowing for H100 and H200 demand, with DigiTimes reporting last week that H100 and H200 production volumes have been “increasing monthly.” There are also signs in the broader DRAM market that point to HBM demand remaining robust, another signal pointing to lasting Hopper demand. DRAM revenue in the June quarter surged nearly 25% QoQ to $22.9 billion, driven primarily by HBM demand and rising prices due to “aggressive procurement strategies” from buyers.

Conclusion:

As of now, there’s a disconnect between next fiscal year’s revenue estimates of $167 billion and the $210 billion in GB200s alone expected to ship next year. Perhaps analysts are waiting for signals that the supply chain can produce these outsized orders. So far, so good with the signals we see from TSMC’s and SMCI’s most recent earnings reports. Foxconn’s commentary helps as well.

Where Nvidia investors run a risk is the valuation of 25X forward price-to-sales and a 45X PE ratio, the highest the stock has traded since the market began pricing in the AI accelerator boom. My firm believes in an active approach to managing risk. For example, if you had bought the 2022 top in Nvidia, you’d currently be up over 275%. If you had bought the October 2022 low, you’d be up over 1,100%. It is unlikely many bought the top and bottom in any stock (we actually did buy Nvidia at the very low on October 18th, 2022, but it’s rare). Yet being cognizant of the larger trend and pattern in play has allowed us to increase our return while decreasing the risk with Nvidia.

Point being, we actively seek to buy quality companies at lower prices. Let the market (with help from the media) doubt the AI juggernaut in its first inning, let them drag the price down, and then our plan is to pounce … because Blackwell is on its way, the GB200s are going to crush expectations in FY2026, we are getting the green light from suppliers that the delay is immaterial at this time, demand and big tech capex remain high, and, let’s be real, nothing can stop what’s coming.

Our premium members will receive our post-earnings analysis right after the report. If you own Nvidia stock, or are looking to own NVDA, we encourage you to attend our weekly premium webinars, held every Thursday at 4:30 pm EST. Next week, we will discuss our plan following NVDA’s earnings, as well as a handful of lesser-known AI plays for 2024 – what our targets are, where we plan to buy as well as take gains.
