MU: Get on board the XPU train
While the Street worries about stagnant momentum in the memory markets and expects a modest print and guide, we think the highlight of today's call could well be MU breaking out of NVDA's grip. Going into the 3FQ24 earnings event in June, we called a near-term top to MU's fundamentals (link) and cut our $150 price target shortly thereafter (link), before the mid-year sell-off in AI-related semis. Our reason was simple enough – with MU management claiming its HBM capacity was sold out through 2025, and with little help from the traditional markets, we felt FTM earnings estimates had peaked.
Six months later, and with the stock stuck in neutral, we now see a way out. We think the XPU opportunity is ahead of MU and is yet to be priced in. As the XPU market heats up, the Street has begun to pay attention and is casting about for AI silicon opportunities outside of the two GPU incumbents. Just as with GPU accelerators, XPU accelerators too need HBM memory.
We suspect US-based hyperscale players may lean towards partnering with MU, instead of the Korea-based memory suppliers, as the HBM supplier for their XPUs. Our checks show that MU may have won the HBM socket for AWS's Trainium3. Even though we expect this to be a 2026 opportunity, we think the Street has begun to pay attention to AI silicon beyond 2025, as seen in the market's strong response to AVGO's FY27 outlook. Furthermore, we expect incremental HBM manufacturing capacity at MU to come online in FY26, allowing MU to move beyond NVDA as its sole HBM customer and to diversify into new customers.
We are cognizant of a few additional positives: 1) we think Samsung may no longer be a credible threat to MU's HBM3e share, 2) the client market, while moribund at present, is showing modest improvement in visibility into 2025, and 3) geopolitical factors, a negative for most semis, net out as a positive for MU.
Going into the previous earnings (4FQ) call we set a tight PT of $110 based on FY25 EPS (link). Despite significant upside to the Nov guidance provided at that call, the stock has not been able to break out of the $110 ceiling. With XPUs potentially in the mix, we think the Street's attention now shifts to FY26. As the company diversifies its customer base and expands manufacturing capacity, the potential for new vectors of growth provides an avenue for earnings expansion. We are raising our PT to $125, based on a 12x multiple on our below-consensus FY26 earnings estimate. If the XPU opportunities at the various hyperscale players really are of 1mn-unit cluster size each, as AVGO management expects, MU could be looking at significant multi-year opportunities, not constrained by NVDA.
–xx–
The XPU wave is in its infancy: We expect MU management to ride the coattails of AVGO and blast its way beyond the gravitational pull of NVDA's GPU-centric ecosystem. That every one of the US-based hyperscale players has been working on internally designed AI accelerators is no secret. MSFT, GOOG and, most recently, AMZN/AWS have rolled out details and high-level roadmaps. The Street is also generally aware that META, and potentially AAPL too, are working on internal solutions. However, these efforts have not translated into stock valuations, we think, due to the lack of roadmap granularity and unclear production timing.
The AVGO earnings presentation last week took a stab at putting a timeline around the XPU opportunity. The surprisingly strong 3-year SAM that AVGO laid out translated into a significant expansion in its market cap. The earnings presentation from MRVL a week earlier likewise took investors by surprise.
In essence, we think there is great interest in the investment community in learning more about the dollar value and timing of AI opportunities outside of NVDA/AMD. Whereas GOOG's TPU6 has gone into production, delivering revenue to AVGO, the XPUs at the other hyperscale companies are further out in design, let alone production. But the key is this – the Street has begun to put a valuation on the silicon suppliers involved in XPUs. And MU's HBM is in the sweet spot.
XPUs at hyperscale companies: Just as ASIC suppliers such as AVGO/MRVL work closely with hyperscale customers during the design phase, we believe memory suppliers get involved early in an XPU design project. This provides HBM suppliers with visibility into the revenue opportunity multiple years away. There are several internal programs ongoing at hyperscale players. MSFT's 2nd-generation AI accelerator could go into pilot production in late 2025. META and AAPL have internal AI chips in the works, though the timing of production is unclear. AMZN/AWS recently unveiled the 3rd generation of its internal AI chip.
MU lands AWS’s Trainium3 HBM socket: AMZN/AWS recently announced its roadmap for the 3rd generation XPU chip. We believe Trainium3 is a year away from tape-out. And yet, we believe AWS has already chosen its HBM vendor, due to the need for close collaboration. We believe the HBM contract has been awarded to MU.
We believe that while the previous generation Trainium2/Inferentia2 was a limited release effort to test out the various LLMs AWS has been experimenting with, Trainium3 could go into a wider release. We will not hazard a guess as to the revenue opportunity this socket presents, but it could be meaningful.
For its 3rd generation, we believe AWS decided to roll out just one ASIC for both training and inferencing purposes. We believe AWS has settled on a limited number of LLMs it plans to take into full production. Trainium3 could become the AI workhorse at AWS in a couple of years. If MU has landed this socket, as we think it has, we think MU could be looking at a large driver of growth.
Samsung’s HBM3e may no longer pose a threat to MU/Hynix: Stung by delays, Samsung may have decided to cut its losses and walk away from qualifying HBM3e. We think Samsung may have decided to re-allocate R&D resources elsewhere, such as in the development of HBM4. As such, we think Samsung’s HBM3e is no longer an overhang for MU.
Traditional markets could be stabilizing: The promise of AI smartphones and AI PCs showcased at MWC and Computex earlier this year failed to materialize. Memory and flash built up by suppliers earlier this year in anticipation, instead of selling through during the holiday season, ended up as channel inventory. Having said that, the supply chain is setting modest goals for the smartphone/PC markets and adjusting supply accordingly.
Targets for AI smartphones for 2025 have been dialed into the supply chain, setting the stage for improved visibility. Starting small, at fewer than 20mn Android AI smartphone units, this gives memory suppliers a jumping-off point. Minimum requirements appear to be 8GB DRAM and 256GB flash storage. Samsung's targets for overall smartphones next year too appear to be dialed in: slightly lower volume vs. 2024, offset by higher dollar content.
PC expectations have been dialed back to pre-pandemic levels. There is very little talk of AI PCs. The focus has returned to traditional metrics such as battery life. Intel is on record saying that its 3nm-based Lunar Lake uses MU DRAM. At a recent investor conference, Intel's CFO complained of the high price of MU's product. We believe Lunar Lake is sole-sourced to MU's LPDDR5X module.
Geo-political factors net out in MU’s favor:
- BIS regulations targeting China data centers – no impact to MU: The Street expects another set of BIS regulations to be released before year-end, placing restrictions on exports to China of, among other types of semis, HBM modules. We think MU is insulated from further downside, given the restrictions already imposed by the Chinese side on the import of MU's memory products.
- CHIPS Act monies – a positive: MU is the beneficiary of a substantial grant from the US government for use in MU's planned investment in fabs and packaging facilities in the US.
- Political crisis in Korea is driving up the costs of MU's competitors: The Korean Won has dropped ~2.5% against the USD and the Chinese Yuan in the two weeks since the recent political crisis. Korean credit default spreads widened more than those of nearly every other sovereign last month, second only to Brazil. As we wrote in a recent note, the price of NAND raw dies jumped 2%-3% overnight following the political event earlier this month (link). MU could be an accidental beneficiary of this unexpected uptick in the price of Korean memory output.
Financials and TP: We model FY25 at $38.1bn/$8.20 with a gross margin of 41%, vs. the consensus estimate of $38.3bn/$8.96 with a gross margin of 43.4%. We believe the Street is modeling gross margin too aggressively in the back half of FY25. We model FY26 at $45.7bn/$10.47, gross margin of 42%, vs. the consensus estimate of $47.4bn/$13.25, gross margin of 48.8%. Again, we believe the Street is too aggressive on margin assumptions. Based on a 12x multiple on our FY26 EPS estimate, we derive a price target of $125. Our previous PT of $110 was based on our FY25 estimate.
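The price-target arithmetic above can be sketched as a simple P/E-based calculation. This is an illustrative sketch only: the `price_target` helper is our own naming, and the inputs are the estimates stated in this note (FY26 EPS of $10.47, a 12x multiple).

```python
# Illustrative sketch of the P/E-based price-target math described above.
# Inputs are this note's estimates; the function name is ours, not a standard.

def price_target(eps: float, multiple: float) -> float:
    """P/E-based price target: forward EPS times the assigned earnings multiple."""
    return eps * multiple

fy26_eps_estimate = 10.47   # our FY26 EPS estimate (below consensus of $13.25)
assigned_multiple = 12.0    # multiple applied to FY26 earnings

pt = price_target(fy26_eps_estimate, assigned_multiple)
print(f"Implied PT: ${pt:.2f}")  # ~$125.64, rounded to the $125 target in the note
```

The same mechanics explain the prior target's anchor shifting from the FY25 to the FY26 estimate as the Street's attention moves out a year.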
Net/Net: We think the XPU opportunity is ahead of MU and is not priced into the stock. We believe MU management has an opportunity to break out of its dependence on NVDA and draw investor attention to new customers for its HBM products. With the Street concerned about near-term dynamics in the memory market, MU management has an opportunity to highlight HBM opportunities over a 2- to 3-year period in the XPU space. We believe there is upside to the stock. We are raising our PT from $110 to $125. KC Rajkumar