Jahanara Nissar

NVDA: Optimists vs. Skeptics

NVDA investors wade today into one of the more confusing earnings events in recent memory. During NVDA’s ‘quiet period’, investors have been pummeled with a profusion of lightly sourced information regarding product delays, alternate product SKUs, detailed failure analyses and so on. The supporting cast of analysts, channel partners and Asia media sources appears confident that none of these concerning details should put a dent in NVDA’s revenue ramp profile, such is the strength of end demand. As their confidence can only come from NVDA itself, we suspect the company has messaged the gist of what will be said on the call today through channel partners.
As such, we expect the message on the call today to be aligned with that from the supporting cast – a minor delay in the revenue ramp of Blackwell to be offset by higher-than-expected demand for Hopper SKUs. With the stock back to its previous high, we can safely assume this view is mostly priced in. What could nudge the stock into a higher orbit? Management would need to communicate that demand for Hopper more than offsets the revenue delay from the Blackwell ramp. In one form or another, we expect management to communicate just that. The first reaction to the earnings call may well be to the upside. But that could be a head fake.
–xx–
NVDA has become a ‘battleground’ stock: The stock came under severe pressure in mid-July as the Skeptics questioned the wisdom of the vast amounts of capex spent by hyperscale players chasing what the Skeptics claimed to be less-than-optimal ROI. A few weeks later in early August, the hefty revenue outlook from SMCI encouraged the Optimists to return, driving the stock back up. The resolution of the battle, we suspect, will emerge only in the days and weeks following the earnings event today, as investors mull over the details of the call and as management resumes meeting with investors.
We are in the Skeptics camp: We do not support the idea that increased shipment of H100/H200 could make up for the delay in Blackwell shipment. All things being equal, we think it appropriate to adjust NVDA’s data center revenue ramp downward vs. consensus estimates until Blackwell gets into volume shipment, which could be well into NVDA’s FY26.
Our view – 1) the constrained supply of H100 earlier in the year may well have turned into excess supply as demand stabilizes, as seen in ease of availability, falling rental/token pricing and an emerging secondary market, 2) H200 may have good demand, but could suffer from low yield and heat-related failures (link) and 3) there is no way to assess with any precision the actual delay in volume shipment of Blackwell; it could be more than just a few months. We also think heat-related failures could be endemic to GPU/interposer modules, and that they get worse with higher HBM content. Blackwell has higher HBM content than Hopper.
A caveat: Despite potential issues with the H200, we suspect that ODM/OEM partners may absorb sales from NVDA and place the volume in their inventory. NVDA management may claim sales of H200 GPUs have picked up, but we are not yet seeing signs of H200 racks being absorbed into data centers in a big way. A rather curious article in Digitimes last week related to Foxconn caught our attention. The article talks about ‘complex AI server trading models’ and a move away from the simple ‘buy and sell’ model to more complex models, which we suspect involve leasing GPUs to end customers rather than outright sales. Such trading models usually result in pulling in future ‘real’ demand and in a build-up of channel inventory.
H100 shortage may have turned into surplus: H100 lead times have come in. The question is whether end demand is keeping pace with the increased supply of H100. We do not think so; we think supply may be outstripping end demand. We are seeing several signals that lead us to this conclusion.
Three months ago, we noted that HGX H100 server rental, as reported by GPU-specialized data centers, had dropped to ~$2.25 per hour vs. ~$4.75 earlier in the year (link). Our latest checks show that pricing has dropped further, now running below $2, at levels where smaller DCs may well be pricing server rental below operating cost.
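For a sense of the break-even math, here is a minimal sketch, reading the quoted rates as per GPU-hour; all inputs (server cost, power draw, utilization) are our illustrative assumptions, not reported figures:

    # Back-of-envelope break-even rental rate for an 8-GPU HGX H100 server.
    # All inputs are illustrative assumptions, not reported figures.
    CAPEX_PER_SERVER = 250_000    # $, assumed server acquisition cost
    DEPRECIATION_YEARS = 4        # assumed straight-line useful life
    POWER_KW = 10.0               # assumed wall power, fully loaded
    POWER_COST_PER_KWH = 0.10     # $, assumed industrial electricity rate
    PUE = 1.5                     # assumed data center power overhead
    UTILIZATION = 0.7             # assumed fraction of hours actually rented

    HOURS_PER_YEAR = 24 * 365
    depreciation_per_hr = CAPEX_PER_SERVER / (DEPRECIATION_YEARS * HOURS_PER_YEAR)
    power_per_hr = POWER_KW * PUE * POWER_COST_PER_KWH
    # Break-even $/GPU-hr (8 GPUs per server), before staffing/network/margin
    break_even = (depreciation_per_hr + power_per_hr) / UTILIZATION / 8
    print(f"break-even: ${break_even:.2f}/GPU-hr")   # ~$1.54 on these inputs

On these assumptions, sub-$2 per GPU-hour leaves little room for staffing, networking and margin, consistent with our view that some smaller DCs may be renting below full cost.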
We understand that VC platforms in Silicon Valley, which earlier this year bought GPU rental time contracts at wholesale prices for use by their AI startup clients, have begun to offload time slots due to reduced demand from those client companies.
Blocks of H100 GPUs have begun to appear on the secondary market in Silicon Valley as tactical buyers of the highly valued chips now face reduced demand and a falling value of the chips. We are aware of a secondary market for H100 in Hong Kong, which is now reporting declining prices.
As for the hyperscale players, there are no longer constraints on the availability of MSFT’s Copilot; users can gain access at will. We are also hearing scattered reports of some hyperscale CSPs slowing down or even pausing purchases of H100.
H100 – why the surplus? In short, because hyperscale CSPs may have, to paraphrase an expression from the Google earnings call, converged on a set of base capabilities, i.e. trained models, sooner than NVDA had anticipated. This releases a large amount of installed H100 capacity, formerly allocated to training, for inferencing.
Two quarters ago, NVDA management commented that inferencing was running at ~40% of the workload on NVDA GPUs. The implication was that training was still the majority of the workload. Training workloads required a huge commitment of GPU cluster size and continuous usage time, thereby sopping up a large amount of GPU (mostly H100) capacity.
Today, we think training workloads may take up no more than 20-30% of H100 capacity, releasing 70%-80% of the installed base for inferencing.
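The arithmetic, using the figures above:

    # H100 workload mix shift (our estimates)
    inference_share_then = 0.40        # per NVDA commentary two quarters ago
    training_share_now = 0.25          # midpoint of our 20-30% estimate
    inference_share_now = 1 - training_share_now
    print(f"{inference_share_then:.0%} -> {inference_share_now:.0%}")  # 40% -> 75%
    # i.e., ~35 points of installed H100 capacity newly freed for inferencing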
The peak in training workloads is behind us: Six months ago, umpteen LLMs were in the process of being trained. In recent months, though, we think CSPs have settled on a handful of fully trained LLMs that they plan to take into production, i.e. to run inference workloads at scale.
So, for instance, Google’s Gemini 1.5 is frozen, and so is MSFT’s internal LLM used for Copilot. At META, with its recent launch of the Llama3 405bn model (which took 3 months to train), we think a production-worthy family of models is in place. At AMZN, which had experimented with a menagerie of LLMs, we think AWS has converged on a handful, including Llama3. At all these major players, we think the training days are over. All the publicly available data sets have been consumed. What little training remains is focused on a declining supply of fresh data sources.
Independent LLM vendors too may be largely done with modeling, and even if they are not, they may have run out of resources. Stability AI is on the way out due to solvency issues, as reported in the media. Anthropic may be largely done with modeling with the release of Claude3 a few months ago. Even OpenAI may be finding it difficult to persist with training ever larger models given its relatively small revenue base.
If the peak in training workloads on NVDA GPUs is behind us, we think it releases a large installed base of GPUs for inference workloads, thereby increasing H100 availability and driving down server rental pricing, price per token and the price of the GPU itself.
If H100 is in surplus, why are hyperscale CSPs raising capex? Are they raising capex due to unmet infrastructure capacity for current demand? We do not think so. We think they are raising capex 1) to meet future demand for AI services they anticipate will emerge, 2) to spend on shell construction and power/water utilities, 3) to buy inference-optimized GPUs and 4) to build out mid-sized data centers distributed geographically worldwide.
So, for instance, of the $19bn capex MSFT announced for the June quarter, we think the bulk of the cash spending allocated for the near future will go towards land/shell acquisition worldwide. The monies allocated for infrastructure, we think, will be spent much further down the road, depending on actual demand.
But the key is this – we think future infrastructure spending will be on inference-optimized GPUs. In the case of MSFT, we think they will hold out for a GPU with a higher density of HBM than that on the H100, i.e. for the B200 or denser versions.
We do not think MSFT is likely to populate new shells with H100/H200, only to rip out the racks in less than a year and install Blackwell racks. That would simply not be a capital-efficient approach.
While MSFT waits for Blackwell, we think it is likely to aggregate existing H100 capacity at third-party data centers.
Blackwell could be a winner, but when? We think hyperscale CSPs are likely to stick to the plan of allocating fresh infrastructure spending to inference-optimized, higher-memory-density GPUs such as Blackwell. We do believe there will be good demand for Blackwell as and when supply becomes available.
What is a realistic timeline for Blackwell to ramp to production quantity? Before taking delivery at production scale, we would expect hyperscale players to take delivery of a limited volume of test servers and run field testing for several months. If initial shipment is delayed by, say, one quarter, as media reports seem to suggest, we would expect another 1-2 quarters before Blackwell is shipping in volume. This may push the revenue ramp deep into NVDA’s FY26.
H200/B200A – Could they ship in the interim? With roughly equal HBM density (144GB), these GPUs could be better suited for inference workloads than the lower-density H100.
For nearly two months now, there have been reports of heat-related failures with the H200. Our checks show that there have been instances of customers rejecting test racks due to GPU failures in the field.
It is no secret that these GPUs release intense heat due to the ~1000W of power consumption per chip. At the root of the problem, we suspect NVDA has not standardized the exact solution for heat extraction, leaving the problem to the ODMs/OEMs to solve. There could be important differences in the cooling systems adopted by the various vendors, resulting in failures at some of them.
The upshot is that hyperscale players are unlikely to take delivery unless they are convinced that NVDA and its channel partners have settled upon a common solution. Until then, we expect shipment of H200/B200A to be constrained.
Net/Net: We do not support the idea that increased shipment of H100/H200 could make up for the delay in Blackwell shipment. All things being equal, we think it appropriate to adjust NVDA’s data center revenue ramp downward vs. consensus estimates until Blackwell gets into volume shipment, which could be well into NVDA’s FY26.
But this may not be how it turns out. NVDA may still indicate higher-than-expected shipment of H100/H200 in the medium term. Were this to happen, we think investors would need to worry about a build-up in channel inventory, a negative situation.
We think the first reaction to the earnings call could be to the upside. But that may be a head fake. In the days and weeks ahead, as investors mull over the details of the NVDA call and as earnings reports from hardware/software vendors such as DELL/HPE and CRM/MDB emerge, we would expect NVDA investors to step aside and look for better points of entry.

AAPL: Is the Mate60 a paper tiger?

After an initial surge in the China domestic market in September/October, the availability of Huawei’s highly publicized Mate60 premium smartphone seems to have largely melted away. Why? We believe Huawei may not have been able to sustain supply of its Kirin9000 modem chip. And why is that? We believe it is due to inadequate yield of the Kirin chip at the SMIC foundry.

Ever since the Mate60 launched to much fanfare in China, AAPL investors have been mindful of the challenge from a newly resurgent Huawei. Though AAPL stock has perked up of late on macro tailwinds, on a 6-month basis the stock has underperformed the Qs, as the damage done to the stock by China fears persists. Market research reports and consumer surveys have left investors fearing share loss of the iPhone15 to the Mate60 in the China market.

At a House committee hearing two days ago, a senior member of the Commerce Secretary’s staff testified that ‘neither the performance nor yield’ of the silicon in the Mate60 ‘may match the market of the device’. We wish to highlight the statement. The finding presented at the testimony appears to be the first public disclosure of the US government’s investigation into the Kirin chip. We believe the statement is significant and has positive implications for AAPL.

Going into the 4FQ earnings event, the stock hit the $170 PT we had set three months before (link). The gain in AAPL stock after the earnings event has been largely due to macro tailwinds. Of the big tech peers, we think investor expectations are the most muted for AAPL. With the Fed Chair yesterday markedly shifting towards a neutral policy stance, we think AAPL has more to gain than its big tech peers.

Going forward, whatever headwinds the iPhone15 faces, we have come to the view that the Mate60 is not one of them. If investors layer diminishing idiosyncratic risk from China/Huawei on top of the macro story, we think AAPL could see a nice run-up into year-end.

We are turning modestly positive on the stock and raising our PT to $220 as we nudge our Dec quarter iPhone expectations slightly ahead of consensus. Our FY24 revenue growth estimate creeps into positive territory, while still below consensus. We will, however, temper our enthusiasm by noting that an earlier-than-usual seasonal shutdown of a Foxconn factory may not bode well for the March quarter.

–xx–

AAPL – muted investor sentiment: In a note titled ‘Investor expectations need resetting’, published more than a year ago, we called for FY23 revenue down 1.5% vs. consensus up 4.7% at the time (link). A year later the company printed FY23 revenue down 2.3%, slightly worse than our estimate and significantly worse than consensus. After four quarters of negative revenue growth in FY23, investors have reconciled themselves to expecting little to no growth. The guidance provided for the December (1FQ24) quarter did little to enthuse investors. Investor expectations have indeed reset from the go-go days of the iPhone12/13. While consensus calls for FY24 revenue up ~4%, we wonder if there is much conviction in the estimate. The China/Huawei risk adds to the gloom.

In our view, whatever headwinds the iPhone15 faces going forward, the risk of share loss to the Mate60 is not one of them, as there isn’t adequate supply of its Kirin9000 modem chip. In the context of muted investor expectations, reduced idiosyncratic risk out of China/Huawei could send AAPL stock on a nice run-up into year-end.

Why would Mate60 modem chip have yield and performance issues?

  • Yield: We believe the potential for yield issues stems from the Kirin9000’s large die size and complications arising from printing 7nm features using older-generation 28nm fab equipment. The quadruple exposure on the stepper tools needed to reduce the native 28nm feature size to 7nm results in a die size larger than if the chip were processed on native 7nm technology. A larger die leads to lower yields (see the sketch after this list).
  • Performance: Why would the performance be lower than traditional 7nm modems out of, say, TSMC? We think the Kirin9000 transistor is based on a planar architecture vs. the FinFET architecture typically used in a traditional 7nm process. This difference could result in higher leakage currents, a critical metric in benchmarking transistor technologies.
  • Indirect evidence of low output of the Kirin9000 chip out of SMIC: Given the large die size of the Kirin chip, Huawei orders to SMIC should have translated into high sequential revenue growth at SMIC. But apparently they haven’t. SMIC printed Q3 below expectations and provided disappointing Q4 guidance.
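
As a rough illustration of the die-size/yield relationship behind the first bullet, consider the classic Poisson die-yield model, Y = exp(-A x D0); the die areas and defect density below are purely hypothetical, for illustration only:

    import math

    def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
        """Classic Poisson die-yield model: Y = exp(-A * D0)."""
        return math.exp(-die_area_cm2 * defects_per_cm2)

    # Hypothetical inputs -- not disclosed SMIC/Kirin figures.
    native_die = 1.0        # cm^2, design on a native 7nm process
    patterned_die = 1.4     # cm^2, same design bloated by multi-patterning
    d0 = 0.5                # defects/cm^2, plausible for an immature process

    print(f"native 7nm:      {poisson_yield(native_die, d0):.0%}")     # ~61%
    print(f"multi-patterned: {poisson_yield(patterned_die, d0):.0%}")  # ~50%

The point is directional: the ~40% die-area penalty assumed here costs roughly ten points of yield at the same defect density, and the gap widens as the defect density rises.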

Thoughts on the Mate60: We believe Huawei’s Mate60 is intended to demonstrate China’s ability to produce 7nm chips using older-generation equipment. This ability may have military applications, thus triggering a US government investigation. However, for consumer products, which require high volume at high yields, the SMIC process may not be capable of producing silicon at the scale required to move the needle against AAPL iPhones. Insofar as Apple’s iPhone15 is concerned, we think the Mate60 is a paper tiger.

Bookending the iPhone upside – Foxconn action raises a red flag: Our checks show that the Shenzhen iPhone factory may have furloughed its workers as early as mid-November, which is unusually early for its seasonal shutdown ahead of the Chinese New Year. Part of the reason could be to decommission and move manufacturing equipment out to new geographies, such as Vietnam and India. And part of the reason could be a lack of iPhone order visibility into the 1st half of CY24, potentially a red flag.

And yet, on the positive side:

  • The China market could be seeing a modest resurgence in high-end smartphone models as mobile gaming makes a comeback. The iPhone15 and the Xiaomi 14 could benefit from the trend.
  • US telco carriers are left with little option on the premium end as Samsung has not refreshed its Galaxy flagship in ~2 years. Therefore, carriers are having to spend marketing dollars promoting the iPhone15.
  • Even pre-paid vendors in the US, such as Cricket, are promoting Apple’s flagship this year vs. their usual preference for refurbished models. We suspect Apple cleared out a lot of iPhone11 inventory at the end of the June quarter, resulting in reduced availability of refurbished models. This is a positive for the iPhone’s overall ASP.

Raising AAPL PT to $220 from the $170 PT we set back in Aug’23 ahead of the June quarter earnings (link). Heading into the Sept quarter earnings, the stock hit our previous $170 PT. It has since run up largely on macro tailwinds. However, the risk of share loss to the Mate60 has remained a concern for investors. Were the concern to go away, we would expect AAPL’s Dec consensus estimates to go up.

For the December quarter we model iPhone and overall revenue ahead of consensus. We are tweaking up our Dec iPhone revenue growth from up 2% to up 7.5% vs. consensus of up 5% (Exhibit 1). We model overall Dec revenue up 2.8% vs. consensus up 1.7%. For fiscal year 2024, though, we model iPhone and overall revenue below consensus (Exhibit 2). We model iPhone revenue down 0.5% vs. consensus up 3%. We model FY24 overall at $391bn/$6.4 with revenue up 1.9% vs. consensus of $397bn/$6.56 with revenue up 3.7%. Our $220 PT is based on 34x our FY24 eps.
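
The PT math behind the multiple:

    # AAPL PT check: 34x our FY24 eps estimate
    fy24_eps = 6.40
    print(34 * fy24_eps)   # 217.6, which we round up to our $220 PT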

MU: Peak earnings + nascent worries = downside risk

A quarter ago, when we raised our FY25 eps estimate to ~$12 (link) with consensus estimates at ~$7, we felt it appropriate to value the stock at $150 by applying a 12x multiple, in the upper half of the historical range. With the stock having hit our PT, and with Street eps estimates now likely at peak earnings, we think it reasonable to consider applying a lower multiple to the stock.

MU management reiterated its outlook that HBM capacity is sold out through CY25. With HBM pricing through CY25 having been contracted out, and with little intention by management to increase DRAM bit supply during the FY25 capex cycle, we do not see further upside to FY25 DRAM consensus estimates. Management’s outlook for HBM revenue in the ‘multiple $billions’ was modeled into estimates ahead of the earnings call. Management’s outlook for record overall revenue in FY25 was already dialed in a quarter ago, when the consensus revenue estimate was at $32.6bn vs. the previous peak of $30bn. FY25 revenue estimates now run more than $10bn above the previous high of $30bn set in FY18. At the recent earnings call, management did little to coax up FY25 estimates. Given the visibility management provided, we think Street expectations could be closing in on peak earnings.

Meanwhile, nascent worries have descended on investors’ minds. 1) August guidance came in under buyside expectations, with no clear reason from the Street for the shortfall. We provide a possible reason for the guidance miss – share loss in mobile DRAM to Samsung. 2) More worrisome are the emerging concerns on HBM yield, which management did not quite snuff out during the call, in our view. Going into the call we had warned of a ‘wrinkle in the AI narrative’ and that management might not provide adequate clarity. Unlike many on the Street, going into the call we pointedly did NOT raise our PT. We fear the issues could be related to NVDA’s H200 ramp, which MU may have little control over.

We believe the Street has pulled forward the ramp in revenue/margins expected for the next 12-18 months. As such, we think the stock should be trading at a multiple associated with peak eps (10x or lower), not trough eps (12x or higher). Based on peak earnings and nascent yield worries, we cut our eps multiple to 10x. At our FY25 estimates of $39.5bn/$11.8, we cut our PT to $120 from $150.
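
The sensitivity of the PT to the multiple, at our FY25 eps estimate:

    # MU PT sensitivity to the eps multiple
    fy25_eps = 11.8
    for multiple in (10, 12):   # peak-eps vs. trough-eps multiples
        print(f"{multiple}x -> ${multiple * fy25_eps:.0f}")
    # 10x -> $118, rounded to our $120 PT; 12x -> $142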

We note that in the FY18 cycle, with eps peaking at $12, the stock peaked at ~$60. In this cycle, we think eps potential needs to be substantially higher than $12-$14 for the stock to trade at more than 2x the stock peak set during the previous DRAM surge.

–xx–

Odd statements leave investors guessing: At the top of the Q&A session, in response to a question, the CEO stated ‘we are very much focused on continuing to ramp our [HBM] production and also to improve our yields’. This may sound like a banal enough statement, except that the question was not about HBM yields.

Rather, the question was about qualifying HBM3e for a broader base of customers beyond NVDA. Instead of providing an answer to that question, the CEO’s statement on yields sounded more like a disclosure, perhaps lending credence to yield-related chatter that had been circulating on the Street before the earnings call.

Then there is the matter of the HBM ‘trade ratio’. When management says it takes 3x the DRAM wafers to get to a given target of HBM bit output versus the wafers required for traditional D5 bits, does this ‘trade ratio’ refer to the loss of bits MU incurs before shipping HBM3e product to NVDA? Or does it refer to the loss of HBM bits at packaging and testing of the H200 module at NVDA? Who bears the cost of the lost DRAM bits, MU or NVDA?

Our view on yield issues: While yield loss at CoWoS packaging has been known for a while, we are coming around to the view that there could be additional yield loss in NVDA’s module (GPU and HBM) during the testing of H200 modules. The H200 module perhaps has some of the highest current densities ever recorded in electronic devices, higher, we believe, than those in the H100. Even though the NVDA GPU module is cooled during operation, challenges in heat dissipation could be greater than previously anticipated, leading to heat-related failures.

During the initial stages of the product ramp, it is perhaps a matter of testing a large enough number of H200 modules to screen for working devices. But as unit volume ramps, the logistics of testing could compound into a serious problem, slowing output and the pace of the revenue ramp. There was little on the MU call that gave us confidence that yield issues do not exist. We think management’s odd statement at the top of the Q&A session is cautionary.

If the HBM trade ratio were to decrease as yields improve, wouldn’t that increase bit supply and reduce MU’s profitability? If the trade ratio were to increase above 3x, wouldn’t the additional yield loss raise concerns with NVDA’s data center customers about the reliability of the H200?
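
To make the trade-ratio mechanics concrete, a minimal sketch in arbitrary units (the 3x ratio is management’s figure; everything else is illustrative):

    # HBM 'trade ratio': DRAM wafer input consumed per unit of sellable HBM bits
    d5_bits_per_wafer = 100.0            # arbitrary units of sellable D5 bits
    for trade_ratio in (3.0, 2.5, 3.5):  # stated ratio, then improvement/deterioration
        hbm_bits = d5_bits_per_wafer / trade_ratio
        print(f"{trade_ratio}x -> {hbm_bits:.1f} units of HBM bits per wafer")
    # 3.0x -> 33.3; 2.5x -> 40.0 (more bit supply, pricing risk);
    # 3.5x -> 28.6 (more yield loss, reliability questions)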

Is there upside to HBM revenue expectations? We do not think so. 1) Management stated that HBM is sold out for CY24 and CY25, 2) further pricing increases are unlikely as HBM pricing has been contracted out through CY25, 3) further increases in HBM output are unlikely as little of the FY25 capex has been allocated to bit supply growth. Even if NVDA were to increase its demand forecast for HBM, we do not see how MU would be able to increase output or pricing. 4) Samsung’s potential entry into the HBM market at some point in the next 6-12 months could dilute MU’s share, if not at the H200 then possibly at the B100. Recall that the B100 is expected to start ramping in CQ4/CQ1.

Management reiterated the FY25 guidance for HBM at ‘several $billions’. Heading into earnings, we think Street models had already dialed in HBM revenue north of $5bn, and perhaps quite a bit higher. In this context, reiteration of ‘several $billions’ at the earnings event does little to move the needle.

Management spoke of overall revenue setting a ‘substantial revenue record’ in FY25. But with sell-side models already at or slightly above ~$40bn vs. the previous revenue record of $30bn in FY18, management’s outlook does not move the needle.

Why did Aug guidance miss elevated expectations? We think this is in part due to weakness in MU’s mobile business arising from China’s import policy. We believe the surprising strength in Samsung’s reported Q2 revenue/profitability comes not from the AI end market, but from share gains in China’s mobile DRAM market.

Due to restrictions imposed by the China government, we think there is excess inventory of MU’s lpDDR5 in the disti channel. So much so that, we believe, MU’s lpDDR5 disti pricing is lower than its lpDDR4 pricing. Under normal channel inventory conditions, lpD5 should be priced higher than lpD4.

Due to China’s import policy restrictions on MU, we think China mobile handset vendors prefer Samsung’s lpDDR5 over MU’s, even for handsets allocated for export. Oppo/Vivo/Xiaomi account for ~30% of global smartphone units. China’s policy largely locks MU out of this market. Due to weak sell-through, we think lpDDR5 inventory at China distis has been building up for a while. We think MU decided to cut sell-in in the May and Aug quarters to normalize disti inventory.

We think lpDDR5 channel inventory resulted in MU’s Mobile segment revenue printing down 1% q/q in the May quarter, while each of the other 3 segments reported revenue up for the quarter. We think MU’s overall DRAM bits printing down ~7% q/q is largely due to a decline in lpD5 bits sold into China’s mobile market. We do not expect lpDDR5 demand at China distis to normalize until China’s import policy changes.

The next positive catalyst for the stock is the upcoming TSM earnings event. We expect TSM to sound enthusiastically positive on AI revenue potential for the rest of the year, which will be seen as a positive read-through for MU’s HBM. While the stock may make another attempt to push above its previous high, we expect the drive to stall as worries regarding the revenue ramp at HBM/H200 persist.

The arc of our PT changes: We started raising our MU PT right after NVDA’s April’23 earnings on the basis of HBM potential (link), when there was still considerable inventory in the DRAM channel. Into MU’s Nov’23 earnings, we spoke of a sudden ‘urgency’ in the DRAM channel as DRAM customers began to sense a tightening of supply, and called for upside to our $80 PT (link). Three months later, into the Feb’24 earnings and with the stock in the mid-$90s, we raised our PT sharply to $150 (link) on our expectation of MU becoming essentially joined at the hip to the 2nd phase of NVDA’s AI GPU ramp. Three short months later, into the Apr’24 earnings, with the stock having hit our PT, we decided not to raise it further (link).

Given the velocity at which consensus earnings expectations have risen, we think expectations may have become over-extended. We might even say HBM expectations have entered bubble territory. With little upside to FY25 estimates and with nascent yield worries, we think it appropriate now to cut our PT for the first time this cycle.

Net/Net: The months of May/June witnessed a series of positive catalysts from the AI ecosystem, all through which MU stock ran up, finally stalling at our previous PT of $150. At the earnings event two weeks ago, MU management did not provide incremental guidance that would take up the Street’s FY25 revenue estimates. Additionally, investors were left with nascent worries about HBM yield and qualification.

We expect the stock to remain volatile as it ricochets between 1) positive commentary from MU/TSM/NVDA management regarding demand pull from hyperscale customers and 2) investor worries about the pace of HBM/H200 ramp.

We believe the Street has pulled forward the ramp in revenue/margins expected for the next 12-18 months. Based on peak earnings, nascent worries on HBM yield and the potential for a slower ramp of HBM revenue, we cut our eps multiple to 10x. At our FY25 estimates of $39.5bn/$11.8, we cut our PT to $120 from our previous PT of $150.

AAPL: Stock re-rates, more to go

Even after the sharp move in the stock post-WWDC, we do not believe many investors appreciate the real value in the stock. The value lies not in gigantic LLMs or snazzy demos. Rather, AAPL’s value lies in a rich vein of gold that hyperscale peers do not have access to. And even if they did, they lack the means to mine it as efficiently as Apple can, thanks to low-power-consuming Apple Silicon. Even after the sharp move to the upside last week, the stock has merely caught up with the SPX on a ytd basis.

Apple’s value lies in its ability to personalize the AI experience for its 2+billion installed base of devices and billion+ paid subscribers, based on its access to private data locked up in Apple devices and guarded by Apple’s privacy rules. From a financial perspective, Apple’s value lies in its apparent ability to ramp up AI offerings without having to spend a king’s ransom in capex. Apple demonstrated this ability by announcing a major stock buyback at the previous earnings call, which would not have been possible if capex were to increase anywhere close to the spending at hyperscale peers.

The debate ought not to be merely about whether iPhone16 benefits from an upgrade cycle. We are confident it will, as iPhone16 offers a new class of capabilities. Gen AI joins a long line of capability upgrades over the years – touch-based apps, 5G connectivity, enhanced cameras – each of which triggered a multi-year device upgrade cycle. After two tepid years of iPhone growth, it is only reasonable to expect pent-up demand from the early adopters.

In keeping with historical precedent, we expect the iPhone16 to carry mostly in-house developed Gen AI applications and to garner demand from early adopters. We expect Gen AI apps developed by 3rd party ISVs to start showing up next year in the iPhone17, which should encourage another round of upgrades, this time from mainstream consumers and enterprise customers.

But it is more than just iPhones. We expect Apple to integrate Gen AI functionality into its Macs, iPads and even Watches. We expect functionality to steadily improve over the years, which then incentivizes more users to upgrade the gamut of Apple devices. We are not looking for heroics. We are modeling back-to-back years (FY25/26) of overall revenue growth slightly above mid-single digits and eps growth in the high single digits (Exhibit 1). We model FY25 at $415bn/$7.15 and FY26 at $443bn/$7.73; revenue in both years up ~7% and eps up ~8%.

The two-day surge following the WWDC event took the stock up intra-day to our current PT of $220. Based on 34x our FY26 eps estimate, discounted back to this year at 5%, we derive a new PT of $240.
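
The PT math, assuming the 5% discount is applied over the two fiscal years separating FY26 eps from the current year (our reading, for illustration):

    # AAPL PT: 34x our FY26 eps estimate, discounted back two years at 5%
    fy26_eps = 7.73
    undiscounted = 34 * fy26_eps        # ~$263
    pt = undiscounted / 1.05 ** 2       # ~$238, rounded to $240
    print(round(undiscounted), round(pt))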

–xx–

The next leg in the AI revolution: With AI models in the cloud having reached a certain level of maturity, the next leg in the AI revolution is the distribution of distilled versions of LLMs to hundreds of millions of client/edge devices. This step needs to be taken with great care, as novel forms of attacks and malware running on powerful client SoCs could spell disaster on an unimaginable scale. To preserve its brand as a trusted personal device, we think Apple necessarily needs to stay deliberate in launching AI apps and in allowing 3rd party ISVs into its walled garden of devices. In our view, Apple is not being slow in its AI roll-out; Apple is merely being cautious in its approach to AI. And there is value in that.

Personal data – the real store of value: Having exhausted publicly available data for training giant LLMs, AI vendors thrash about looking for new sources of data to train their models (to further reduce hallucinations). However, there are just two giant untapped data sources, both in private domains, inaccessible to internet-facing Gen AI players such as OpenAI. One such domain is the vast amount of enterprise data locked up in secure corporate data centers, the kind of data the likes of CRM are trying to monetize. The other is the vast amount of personal data locked up in secure personal devices such as the iPhone, Mac, iPad and Apple Watch.

Have LLMs become commoditized? The one idea we can take away from WWDC – AAPL plans to provide a personalized AI experience to its billion-plus paying customers and 2+billion Apple devices, based on ultra-small models trained on data locked up in Apple devices and Apple Cloud. And if there is a corollary, it is this – LLMs have become commoditized. Why else would OpenAI give away its prized models, on which it may have spent $billions, to Apple for free?

The value going forward lies not in LLMs by themselves, but in access to proprietary data for training the LLMs further and for developing useful apps. Personalized Gen AI apps so built and distributed to billions of users worldwide are a work in progress, in our view. Whereas cloud-based Gen AI applications such as ChatGPT and Copilot have reached a level of maturity, client-device-based Gen AI applications are very much in their infancy.

Apple’s AI apps to launch in waves: We think the first generation of AI-centric devices from AAPL, such as the iPhone16, are likely to carry mostly internally developed Apple apps. To recall, the iPhone2, the model that first launched the apps revolution, carried mostly home-built apps. Apps from ISVs took some time to make their way into Apple’s roster of iPhone apps. We expect a similar cadence for Gen AI apps. We expect Gen AI apps from ISVs to appear on the second generation of Apple devices, such as the iPhone17. This second wave of AI apps is an FY26 opportunity.

Apple has a moat others lack: Even after the sharp move to the upside after WWDC, the stock has merely caught up with the SPX on a year-to-date basis. Its under-performance all year, we believe, was due to a series of investor misperceptions that got corrected at Apple events, the most notable being 1) that iPhones were in secular decline due to share loss and 2) that Apple did not have an AI strategy. The first was addressed at Apple’s earnings event, the second at the WWDC event.

However, investor skepticism lingers. Apple is seen as having been late to the AI party. But is it, though? Just as NVDA is seen as having a moat due to CUDA, Apple too has a moat due to its control over billions of personal devices, every one of them a carrier of future AI apps. Google may have had a jumpstart over Apple in the development of in-house AI infrastructure. But Google does not have the kind of control over Android that Apple has over iPhones. And Google does not have Apple’s ecosystem of non-phone personal devices.

Apple is a one-stop shop for ISV access to Apple’s client devices. ISVs could build AI apps based on a range of inputs – touch, pencil, visual inputs and voice. That is Apple’s moat. And that is why we think OpenAI is willing to give away its prized LLMs to Apple.

Apple Silicon in the cloud: Apple has another weapon its hyperscale peers, outside of Google, lack – Apple Silicon capable of running complex LLM workloads in the cloud. While peers such as MSFT, AMZN and META struggle to develop internal AI silicon just so they can escape the onerous pricing and power consumption of AI GPUs, Apple announced at the WWDC event that it plans to use its internally developed, fully mature, low-power-consuming Apple Silicon for cloud AI workloads.

Will Apple servers be made available in the merchant server market? Down the road, after Apple has established an AI beachhead, we think merchant Apple servers are a possibility. As META’s Llama LLMs are compatible with Apple Silicon, we can see META becoming a potential future customer. But we will leave this possibility out of our financial models.

Net/net: We are raising our PT to $240 from $220. We expect AI adoption by Apple customers to come in waves, with the 1st generation of Apple devices carrying internally developed apps to launch in FY25 and the 2nd generation introducing apps from ISVs to launch in FY26. We see at the very least two years of ~7% revenue growth and ~8% eps growth. After two years of stagnation, we expect pent-up demand from early adopters this year to extend to mainstream adopters next year and beyond. Furthermore, thanks to its internally developed Apple Silicon, muted capex needs may allow Apple to exhibit superior ROI and free cash flow compared to its peers.

AAPL: What Is The Cloud Strategy?

If 2023 was the year of text-based interaction with AI models, 2024 appears to be the year of interaction via human voice, ‘chatting’ in the real sense of the word. We were positive on AAPL through the lean months earlier this year, and we remain positive on the expectation of an AI-triggered refresh cycle. While investor focus today will be on dev tools made available to AI developers, our core interest in the name remains unchanged – we believe Apple Silicon offers the most efficient performance-to-power profile for inferencing on-device and in the cloud. But it is up to AAPL to convince investors.

Expectations for the event today – besides mundane text summaries and generative image capabilities, AAPL needs to provide assurance that its Gen AI-related audio-interactivity and multi-modality capabilities are comparable to peers’ and offer clear value over existing Apple devices. A second topic of interest is AAPL’s business model in its engagement with major 3rd party partners. Investors will need to assess whether the engagements could trigger business conflicts and anti-trust issues.

In recent weeks consensus has drifted to the notion that AAPL could form a strategic alliance with OpenAI. In hindsight, perhaps this idea should not have been a surprise. We wonder how many took note that the mobile device on which OpenAI demoed its real-time conversational capability at the GPT-4o event on May 13th was an iPhone15 Pro, and not an Android handset or, for that matter, a Microsoft AI PC.

AAPL is hardly going to preview future Apple devices at the event today. Demos, if any, will necessarily be on existing devices, i.e. the iPhone15 Pro and iPad Pro. Given that the iPad Pro has the more advanced Apple Silicon, we are hoping to see more of this device in demos today. But client device demos hardly answer a key question on investors’ minds. Investors need to know AAPL’s cloud strategy. AAPL needs to inform developers and investors how it will offload larger tasks to the cloud and, more importantly, what sort of silicon it would use in the cloud.

The stock has run up into the event today. A bout of profit-taking is to be expected. We are buyers on weakness. But our opinion is conditional on tight LLM integration with AAPL’s o/s and native applications. And just as important, we’d like to hear from AAPL whether Apple Silicon has a future in the cloud (link). We maintain our $220 PT.

–xx–

Cloud latency forces AI workloads to the edge: Many of the capabilities OpenAI showcased at the GPT-4o event are now available on any smartphone via the ChatGPT app. So how would iPhones differentiate? We think the OpenAI demo of real-time conversational AI offers a possible hint. The iPhone15 Pro running the demo had to use wired ethernet connectivity vs. the more convenient WiFi. We infer from the choice of connectivity that the conversational AI workload was not running on-device. Rather, it was running in the cloud, presumably on NVDA infrastructure. For current-generation mobile devices, we are aware that latency is an issue for real-time conversations with an audio AI agent, not just when only a cellular network is available but even with typical enterprise WiFi networks.

Mobile devices can hardly be expected to be hooked up to wired ethernet all the time. One way to reduce latency is to perform AI tasks, including real-time audio AI, as much as possible on the local client device. This requires beefed-up neural engines on the device. We are sure to hear a lot more about the TOPS metric. AAPL disclosed at the iPad Pro launch event that the NPU on the M4 chip delivers 38 TOPS.

On-device Small Language Models: On-device workloads on mobile devices are necessarily based on SLMs. Even as it partners with 3rd party LLM vendors, we think it is only natural for the orchestration of on-device workloads to be controlled by AAPL’s proprietary Small Language Models. Such models then need to work with 3rd party models from OpenAI and elsewhere and, when required, help partition AI tasks to the cloud. We need to hear the state of AAPL’s SLMs. Google provided a whole bunch of detail at its recent Gemini event as to how it provides seamless hand-off from cloud-based tasks to remote client devices based on a spectrum of models – from the large Gemini Pro to the smaller Gemma and Nano models (link). We need to hear a similar narrative from AAPL.
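
A minimal sketch of the kind of on-device/cloud orchestration we have in mind; every name and threshold below is ours and purely hypothetical, as AAPL has disclosed no such API:

    # Hypothetical on-device vs. cloud task partitioning -- illustration only.
    from dataclasses import dataclass

    @dataclass
    class AITask:
        prompt: str
        needs_internet_data: bool   # e.g., web search, giant image/video corpora
        est_tokens: int             # rough size of the generation task

    ON_DEVICE_TOKEN_BUDGET = 512    # assumed comfort zone of an on-device SLM

    def route(task: AITask) -> str:
        """Decide where a task runs: on-device SLM, first-party cloud, or 3rd party LLM."""
        if task.needs_internet_data:
            return "third_party_llm"    # e.g., a partner model such as GPT-4o
        if task.est_tokens <= ON_DEVICE_TOKEN_BUDGET:
            return "on_device_slm"      # lowest latency; data stays on device
        return "first_party_cloud"      # larger model on Apple's own silicon

    print(route(AITask("summarize this email", False, 200)))   # on_device_slm

The Gemini Pro/Gemma/Nano hand-off Google described is, in effect, a tiered version of this routing decision; we need AAPL to describe its equivalent.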

Crucial questions for AAPL – Cloud-based AI workloads: Internet data as well as access to giant databases of images and videos necessarily require cloud data access and cloud-based LLMs. For cloud-based tasks, Google plans to use its Gemini Pro models running on its internal silicon.

What is the equivalent at AAPL? Will AAPL’s AI cloud workloads be using OpenAI LLMs running on NVDA servers hosted by Microsoft Azure? Alternatively, is there a future for Apple Silicon in the cloud? If AAPL plans to have a long-term relationship with OpenAI, would GPT-4o be ported to Apple Silicon? Where does MSFT feature in the mix of business relationships?

Net/Net: While the Street appears focused on client-device-based AI applications, we think the real question relates to AAPL’s cloud strategy. On this hinges company valuation. The stock has run up into the event today. A bout of profit-taking is to be expected. We are buyers on weakness. But our opinion is conditional on tight LLM integration with AAPL’s o/s and native applications. The company needs to overcome its historical coyness and become more forthcoming about its strategy for cloud-based Gen AI workloads.

NVDA: Has the H100 user wave crested?

NVDA bulls appear to have decided to sit out the immediate aftermath of the earnings call and prepare for a temporary pullback. They appear secure in their belief that the H-series continues to be supply-constrained, with the added insurance that the beefier B100 is just around the corner. So why worry? Let’s buy the dip, say the bulls. But what if the tip of the spear in mass-market AI development is no longer chasing beefier GPUs and ever larger GPT-type foundational models?

For mass-market AI app developers, the divergence between H100-GPT4 pricing and a path to profitability is too daunting, we believe. And so they must necessarily walk away from the H100 for now and seek cheaper solutions. If we are right, then the supply-demand imbalance the Street infers from upstream checks could be closer to an end. And that will not help the bull thesis.

Mass-market AI applications, such as tech support chatbots and email summaries, need to seek out an alternative path. We believe there is intense development activity ongoing to fine-tune GPT4-class LLMs in order to fit them, along with their vector databases, onto NVDA’s older-gen A100, which happily enough seems to be in surplus. Our checks show A100 servers running open source LLMs come in at less than a tenth of the cost of GPT4-H100 in terms of $/token. And that is irresistibly attractive to AI app developers. We believe the initial wave of H100 users has crested.
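
The rough unit economics behind that claim, with all inputs being our illustrative assumptions (rental rates and per-GPU throughputs are not measured figures):

    # Rough $/token comparison: open source LLM on A100 vs. GPT4-class on H100.
    def cost_per_mtok(rental_per_gpu_hr: float, tokens_per_sec: float) -> float:
        """Serving cost in $ per million tokens for one GPU."""
        return rental_per_gpu_hr / (tokens_per_sec * 3600) * 1e6

    a100_open_source = cost_per_mtok(rental_per_gpu_hr=0.80, tokens_per_sec=50)
    h100_gpt4_class = cost_per_mtok(rental_per_gpu_hr=2.50, tokens_per_sec=15)
    print(f"A100/open source: ${a100_open_source:.2f}/Mtok")  # ~$4.4
    print(f"H100/GPT4-class:  ${h100_gpt4_class:.2f}/Mtok")   # ~$46
    # A roughly 10x gap, in line with our 'less than a tenth' checks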

The print/guide provided today may not be quite as relevant as qualitative commentary on out-quarters. NVDA management is likely to provide the Street with just enough juice to model next year up y/y. Purchase commitments for FY26 are likely to go up vs. the ~$1bn commitment disclosed a quarter ago. The Street appears to be modeling NVDA’s FY26 Data Center revenue growth anywhere from up 20% to up 50% based on supply-chain checks. We think upstream supply-chain signals are not a reliable indicator of future growth when there are shifting trends downstream with AI developers.

We do not think NVDA is trading on fundamentals, as expectations for next year seem unmoored from business realities downstream. We take our previous PT of $425 (link) from 3 months ago and index it up by the SOX’s 17% advance to get to a new PT of $500. We will need to see NVDA stock give up all its ytd gains before we get interested on the long side, especially as, on a macro level, a whiff of inflation is making long-duration secular names such as NVDA incrementally less desirable to investors.
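
The indexing math:

    # NVDA PT: previous PT scaled by the SOX's move since it was set
    print(round(425 * 1.17))   # 497, rounded to our $500 PT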

–xx–

Development costs are simply too crushing for all but a handful of hyperscale AI-service providers, in our view. Even at hyperscale AI-service providers, we doubt their services/applications are close to profitability. Certainly not at Google GCP – external AI users have essentially been getting free access to Bard/Gemini. Microsoft’s Copilot, priced at $30/month, may not be margin accretive, our industry checks show. But these giants have the financial muscle to take losses while they seed a new market. Smaller 3rd party app developers do not have that luxury. So how are they going about it?

The more innovative app developers appear to be pivoting away from supply-constrained AI hardware. The tip of the spear appears to be pointing away from expensive H100 servers and towards the older-generation A100, for which we think there is surplus supply. We think there is now an active secondary market for A100 servers. This has encouraged a new crop of relatively unknown boutique CSPs to enter the AI fray and act as price-spoilers to the CSP incumbents.

We believe rental pricing of A100 servers has been dropping rapidly and is now a fraction of that of H100 servers. This acts as a powerful motivator for app developers, who were previously dabbling with the H100 but with limited success due to high pricing, to gravitate to the older-generation A100. A race to the bottom in pricing – this has been the ethos of tech innovation over the past 4-5 decades. This time is no different.

Who is the incremental buyer for the B100? When the beefier B100 hardware becomes available in a few months, if the H100 user base is already cresting, who will be the incremental commercial buyer for hardware that is more expensive than the H100? We find it hard to believe hyperscale CSPs could acquire the B-series with the same gusto they did the H-series servers.

There must be a reason Jensen has been, of late, cozying up to national leaders. Maybe it takes the backing of a national budget to fund the enormous outlays B100-based data centers may require. We note in passing that the estimated combined capex of the top 4 US-based CSPs sits just below the #2 national defense budget (China), and above the combined budgets of the #3 and #4 nations (Russia, India). At some point, commercial CSPs need to disclose profit metrics; they have been silent on that front so far. Loading up on more capex does not help. We think CY25 will be a year of capex digestion as the top 4 US players allow costs to run off via depreciation.

A new phase: We believe the initial spurt of activity in Gen AI is fast maturing and entering a new phase. The period of intense experimentation at all costs may be behind us. Our checks across the industry show mass-market application developers moving from experimentation to a new phase of discovering paths to profitability. This new phase, while just as exciting and innovative, may not be quite as fast-moving. And the path to profitability may not run through NVDA’s most advanced GPUs.

At the very high end of the spectrum of applications are the ones which hold the promise of dramatic productivity gains in the near term. These applications, such as GitHub Copilot, could well be profitable on the existing high-end solution, i.e. GPT4 running on NVDA’s H100.

The H100 user wave may already have crested: However, proliferation of Gen AI applications beyond just the most lucrative end, we think, necessarily requires innovation in lower-cost hardware and open source LLMs. We think developers of mass-market applications are moving away from NVDA’s and OpenAI’s cutting-edge solutions – they simply do not see a path to profitability. From our conversations with private developers, we think the wave of GPT4-H100 users may have crested. We think 3rd party app developers have been moving away from the GPT4-H100 combination over the past few months.

Signs of a cresting user base: 1) OpenAI is having to make drastic cuts to pricing every three months, 2) VC-funded startups on the Y-Combinator platform (Altman’s alma mater), we hear, are being encouraged to use OpenAI’s GPT models instead of cheaper open source models, 3) Microsoft Copilot, our checks show, is handing out free seats to app developers, 4) Google GCP AI appears to be in no hurry to move away from free access for external users.

In search of cheaper hardware: So where are mass-market AI app developers headed? To AMD? To proprietary solutions from hyperscale CSPs? No, we don’t think so. The non-NVDA solutions are far from the plug-and-play stage. These developers have no choice but to continue working within the NVDA family of products for now. However, we think they may have discovered that a potential path to profitability runs through NVDA’s older-gen product, the A100. And why is that?

A secondary market has emerged in the A100: We think the A100/80GB is already trading on the secondary market, and with it has come a drop in hardware cost and server rental pricing. Less than three years since launch, we think there is a surplus of A100 in the market. Hourly rates for A100 servers at data centers have come down over the past year; the A100 is offered at a fraction of the H100’s hourly rates. Our checks show A100 servers running open source LLMs come in at less than a tenth of the cost of GPT4-H100 in terms of $/token. And that is irresistibly attractive to AI app developers. And so many developers are taking their H100 models and trying to optimize them to fit on A100 servers.

Price spoilers enter the AI DC market: The clearest signal of surplus A100 is the recent emergence of price-spoilers entering the AI data center market. Boutique DCs with sub-$billion annual revenue and annual capex of only ~$100mn have begun to enter the AI data center market with the explicit goal of poaching users from incumbents. Some of these outfits are highly profitable and prefer to stay that way after entering the AI market. We think they are managing to procure A100 servers at super low prices, thus ensuring continued profitability as they scale up their AI customer base. This is very good news for 3rd party AI app developers.

The H100 too will eventually go into the secondary market, perhaps sooner in its life than the A100 did. As supply of H100 catches up with demand and as the initial wave of H100 users crests, we think the H100 too is likely to go into surplus supply. While it took ~2 years for the A100 to go into surplus, we think the H100 may get there sooner due to the steeper increase upstream in H100 supply capacity vs. the A100.

Net/Net:

  • While the H100 may seem supply-constrained to the Street going purely by upstream supply-chain checks, our downstream checks seem to indicate that the initial wave of H100 users may have crested, as the pricing is too rich for most mass-market Gen AI applications.
  • Mass-market AI app developers need to 1) find innovative ways to take their GPT4-H100 models and adapt them onto A100 servers running LLMs smaller than GPT4, or 2) suspend all development until H100 supply goes into surplus.
  • We expect H100 to go into surplus once hyperscale players such as Meta and Microsoft Azure run into excess capacity and start dumping servers into the secondary market. We think Microsoft Azure could be close to hitting excess capacity. Why else would they be signing up new users of Copilot for free?

We do not think NVDA is trading on fundamentals, as expectations for DC growth next year seem unmoored from business realities downstream. We take our previous PT of $425 set 3 months ago (link) and index it up by the SOX’s 17% advance to get to a new PT of $500. In other words, we will need to see NVDA stock give up all its ytd gains before we get interested on the long side.

AMD, SOX: Fed impact could magnify downside to AI names

Movement in Treasury rates following FOMC meetings typically has a material impact on tech stocks. The FOMC event today, we think, could be less about the timing of rate cuts and more about an acknowledgement by the Fed that the underlying economy is running stronger than expected and that the odds of a recession have come down. In our view, the market reaction to such a take-away would be for investors to shift allocation towards economically sensitive cyclical names, such as industrial and commodity names, and away from high-growth secular names, such as the AI-fueled semis and software names.

The AI-exposed names which reported yesterday, AMD and GOOG but perhaps not MSFT, may be just a tad off in their delivery of good news to investors. This slight roll-back in fundamentals could be compounded by the Fed event today and could result in a trimming of recent gains by AI-exposed names.

The significant run-up in these names since the Fed’s November meeting has been in no small part due to a significant drop in real rates since early Nov (Exhibit 1), in our view. Falling real rates provide tailwinds to the equity multiples of secular growth stocks. Could there be a further drop in real rates from current levels? We doubt it. If anything, real rates could move up if the Fed were to lower the odds of a recession. Rising real rates could provide headwinds to multiples, just as they did in the Aug-Oct period of rising real rates. Even if real rates move sideways, as they have been doing of late, the loss of downward momentum in rates could translate into a loss of upward momentum in equity multiples.

We would look to trim positions in high-growth AI-exposed names and add to cyclical names in the memory/storage and Industrials/Consumer sectors. We would trim our position in AMD and look for a 10%-15% pullback from yesterday’s close. SWKS would be a good cyclical name to get behind.

Thoughts on AMD: Despite a slightly disappointing AMD report, the stock could have held up well on investor anticipation of upside to the MI300X outlook provided yesterday. However, we think the Fed’s stance today may compound the modest earnings disappointment. We would be wary of adding to positions at current levels; we would rather wait for a 10%-15% pullback.

We note that excluding the MI300X product, AMD’s 2024 annual outlook implied by management’s qualitative comments points to revenue flat to down slightly for the year, not exactly a ringing endorsement. The company appears to be muddling along entirely on the strength of the MI300X product line.

Macro discussion into the Fed event today: There has been a lot of discussion on the Street with regards to the Fed being under pressure to cut nominal rates just so real rates do not spike to the upside in response to falling inflation. We think this is something of a spurious argument. We think the Fed has the luxury of stringing the Street along without having to provide the timing of cuts. However, it needs to give a reason as to why it is in no hurry.

We think the critical signal coming out of this meeting is not the timing of rate cuts, but rather an acknowledgement from the Fed that the surprising strength in the economy, despite high interest rates, reduces the odds of a recession in the medium term. Hourly wage growth, which had been declining most of last year, seems to have stabilized in recent months (Exhibit 2). Consumer confidence metrics show unusual strength. If so, where is the urgency to cut rates?

Real rates have already declined, rather dramatically, in response to dovish Fed Chair comments following the FOMC meetings last November and December. On a 2-, 5- and 10-year basis, real rates have already dialed in rate cuts at some point in the future; they may be relatively immune to the exact timing of the cuts.

If anything, there is a case to be made that the Fed may hint at downside to the number of cuts penciled into the SEP at the December FOMC meeting. The first estimate of Q4 GDP came in significantly ahead of the market expectations prevalent at the time of the Fed's December meeting. The Atlanta Fed's Q1 GDP estimate, too, is running hot, close to the Q4 level. In other words, the economy shows no sign of cooling. So why would the Fed be anxious to cut rates?

Just as the Fed was loath to raise rates without supporting data in the days of high inflation (2022-23), one would think it would be just as loath to cut rates in the absence of data signaling a cooling economy and deteriorating labor markets, neither of which exists today.

That the Fed could signal rate cuts in anticipation of rising real rates (as inflation cools further) is just not a good enough reason, it seems to us. The argument is especially ill-conceived given that real rates have already fallen to their long-running median level (dotted line in Exhibit 1). If, on the other hand, real rates were running close to where they were back in October (prior to the November FOMC meeting), we could imagine the Fed being more nervous about letting real rates gallop further. That is not the case today.

If there is something to worry about, however slightly, it would be nonfarm payrolls on a 3-month average basis (Exhibit 3). This data has been trending down, as it should. In recent months, though, the 3-month moving average has cut below the pre-pandemic average monthly level (dotted line in Exhibit 3). And yet the level (~165k) is still quite a bit away from negative territory, which is what would demand a response from the Fed. The downward trend bears watching as it creeps closer to zero; the Fed will likely wait for more labor market data before making a move.

Net/Net: We think the Fed is likely to acknowledge the surprising strength in the economy even as inflation metrics trend closer to the Fed's target. In such an environment, standing pat and leaving the fed funds rate where it is may be the best strategy.

If the downward movement in real rates is behind us (i.e., no further multiple expansion for high-growth names) and the odds of a recession have come down, then it seems to us that a good portfolio strategy would be to trim high-growth AI names into their recent outperformance and allocate the capital to cyclical names, which have underperformed the indexes over the past three months.

[Exhibit 1: Real rates vs. their long-running median (dotted line)]
[Exhibit 2: Hourly wage growth]
[Exhibit 3: Nonfarm payrolls, 3-month moving average, vs. pre-pandemic monthly average (dotted line)]

AAPL: Looking for upside to iPhones and a peek into AI

Apple goes into its earnings call today with the weakest investor sentiment in recent years. Four consecutive quarters of negative growth in FY23, and a first quarter of FY24 guided flat, have taken their toll on sentiment. We were in the minority calling for negative revenue growth in FY23 as early as Oct'22 (link). Going into the 4Q23 earnings call, the stock hit our previous PT of $170. After calling for downside to investor expectations all through FY23, we turned the corner in mid-Dec'23 on what we felt was excessive investor pessimism (link), and raised our PT to $220.

While an overall slowdown in iPhone sales weighs on sentiment, the expectation of a challenge from Huawei's Mate60 tipped investors into outright depression. We took a contrarian view, feeling investors were being overly pessimistic in their China assessment. Ex-China too, we felt the iPhone had an opportunity to gain share from Samsung's aging Galaxy S23. In recent weeks, reports from market research firms have largely confirmed our assessment of the December quarter. Rather than losing share, the iPhone seems to have gained share in China and worldwide as Huawei and Samsung failed to live up to expectations.

As for the March quarter and the full year, investor sentiment seems just as low on continued China fears and unconfirmed reports of component order cuts. However, our checks show that TSM has raised 3nm node utilization to full capacity. As there are no major 3nm customers outside of Apple, we think Apple may have raised 3nm wafer starts at TSM, potentially a positive for iPhones and Macs.

Fears related to iPhone weakness need to be addressed by Apple management, and we believe they will be. But that alone will not do it. Apple needs to demonstrate that it is a full participant in the AI revolution, not just in applications, as many Apple bulls are hoping, but in Apple silicon for running full-fledged LLM workloads on smartphones and PCs. The M2 and M3 chips designed for Macs show tantalizing promise. It is unclear to us what management's strategy for AI is, but the potential for a significant new growth vector exists.

We model FQ1 at $120bn/$2.16 vs. consensus at $118bn/$2.11. We model FY24 at $391bn/$6.40, revenue up 2%, vs. consensus at $395bn/$6.62, revenue up 3%. We are long into print.
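For reference, these growth rates square with Apple's reported FY23 revenue of roughly $383.3bn (Apple's own filing figure, used here as our input):

\[
\frac{391}{383.3}-1 \approx 2.0\%, \qquad \frac{395}{383.3}-1 \approx 3.1\%
\]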

iPhones in FQ1: In our Dec note, we raised our iPhone expectations to up 7% y/y vs. consensus up 4% on higher-than-expected strength in China and worldwide.

Going by reports from market research firms, iPhone shipments in CQ4 appear to have come in largely in line with our thinking: no share loss in China or worldwide. According to the research firms, iPhones gained share in China despite the Huawei challenge, while other China brands seem to have lost share to Huawei. This implies that Huawei, while still an important player in China, is largely limited to mid-range smartphone models. Worldwide, a similar story seems to have played out, for the quarter and for the full calendar year: iPhones gained share from Samsung to become #1 worldwide by unit volume.

How did iPhones get on top of the competition? 1) On the strength of its balance sheet, we believe Apple has been able to out-finance its competition by offering easy terms to telcos and distributors. Samsung's loss of share, we believe, has been due to constrained finances and an inability to help telcos stock shelves. 2) As for Huawei, we believe poor yields at SMIC's 7nm process constrained the supply of the Mate60 and prevented Huawei from following through on the initial success of its early-Oct'23 launch. The Mate60's challenge to iPhones may have been transitory.

Apple may have a silicon solution for AI: Whereas Apple bulls have been sending up trial balloons in the form of hopes for Gen AI applications in next-gen iPhones, we think Apple's position in AI could be far more fundamental.

After a decade of integrating its operating system with its cpu, gpu, neural engine and memory, we think Apple may well have the single best silicon for performing inference workloads at the lowest power envelope; if not all inference workloads, then at the very least transformer functions.

The AI investor base mostly avoids considering the huge cost of running today's mainstream AI workloads on liquid-cooled GPUs. We believe there is a role for today's large-cluster liquid-cooled GPU servers in specialized functions such as LLM training and large-scale inference. However, for mass-market inference workloads and for training SLMs, we think the AI marketplace needs low-cost, air-cooled traditional CPUs. We think the yet-to-launch M3 Ultra at 3nm, or the M-series follow-on at 2nm, may be that long-sought-after chip. And if so, Apple stock has a long way to go.

We called out this possibility going into the M3 launch (link) back in Oct'23. While Apple avoided referring to its AI plans at the launch event, company management mentioned in passing the silicon's potential for transformer workloads. The M3 Max supports up to 128GB of 'unified memory' (the gpu shares address space with the cpu) vs. the NVDA H100's 80GB of 'discrete' memory (which is more power-hungry than a unified architecture). The M3 Ultra, yet to be launched, likely features 2x128GB (vs. AMD's MI300X at 192GB), spread across two chips, thus providing massive memory for loading LLMs.
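As a rough sanity check on capacity (our arithmetic; the 70B model size is illustrative), the weights of a 70B-parameter model stored in FP16 occupy about:

\[
70\times10^{9}\ \text{params} \times 2\ \text{bytes/param} \approx 140\ \text{GB}
\]

That exceeds a single H100's 80GB, forcing multi-GPU sharding, but fits within an M3 Ultra-class 256GB unified pool or the MI300X's 192GB, with headroom left for the KV cache and activations.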

And it is not just the hardware specifications. Apple silicon is fully integrated with Apple's software stack, and the open-source community has ported PyTorch onto macOS. Whereas Intel and AMD talk about the 'AI PC', Apple's M2 Mac Pro has already been in use as an AI PC for over a year. Meanwhile, Microsoft Windows is yet to provide hooks into x86 PC solutions, given the lack of Windows integration with Intel's OpenVINO or AMD's ROCm.
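To make the software point concrete, here is a minimal sketch of transformer inference on Apple silicon via PyTorch's MPS (Metal) backend; the layer sizes are illustrative and not anything Apple discloses:

```python
# Minimal sketch: transformer inference on Apple silicon through
# PyTorch's MPS (Metal Performance Shaders) backend.
import torch
import torch.nn as nn

# Use the Apple GPU when available; fall back to CPU otherwise.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# A single transformer encoder layer stands in for an inference workload.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).to(device)
layer.eval()

x = torch.randn(1, 128, 512, device=device)  # (batch, seq_len, d_model)
with torch.no_grad():
    y = layer(x)

print(y.shape, y.device)  # e.g. torch.Size([1, 128, 512]) mps:0
```

The same script runs unmodified against the unified-memory pool, which is the practical upside of the hardware/software integration described above.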

Net/Net:

  • For the stock to shed investor pessimism, Apple management needs to demonstrate strength in its core iPhone business. We think it can.
  • While very much out of character for Apple's management style, Apple needs to give investors an advance peek into its AI plans, especially as it has, under its belt, Apple silicon with the potential to beat the AI incumbents in mass-market AI workloads.
  • We are long into print.

ARM: The hidden hand of AAPL

Anxious to shed its image of overdependence on the mature smartphone market, ARM management leaned heavily toward cloud server CPUs in its narrative on last night's earnings call. However, we think the strength in royalty revenue in the December quarter was driven by the more mundane annual refresh cycle of premium smartphones, as smartphones 'returned to strong growth in Q3'.

We believe the y/y increase in royalty revenue rests largely on Apple's return to growth, a prospect few on the Street are willing to embrace. Against an environment of overarching negative investor sentiment, we turned the corner on AAPL in December (link) as we raised our PT to $220, and we reiterated our constructive view into Apple's earnings event (link). Strength in ARM royalty revenue gives us more confidence not only in iPhones but also in a possible turnaround in the iPad.

We think the strength in royalty revenue was anticipated by ARM management. The upside surprise to the print/guide, in our view, came instead from licensing revenue from the cloud server CPU market, specifically from NVDA's GH200. However, we do not expect NVDA's licensing agreement with ARM to translate into material revenue for NVDA's ARM server product in the medium term, as Microsoft has yet to complete porting its Windows Server OS onto NVDA's cpu.

ARM’s royalty revenue growth driven by Apple’s emergence from the doldrums:

While management talked up share gains in server CPUs, the role of smartphones, which contribute 35% of ARM's royalty business, cannot be overlooked.

ARM's FQ3 royalty revenue was up 11% y/y even though overall chip units shipped were down 3% y/y. While not calling it out by name, ARM's royalty growth in FQ3 had a lot to do with the Apple iPhone's return to y/y growth, in our view.
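A quick back-of-the-envelope calculation (our arithmetic, not an ARM disclosure) shows why mix, rather than volume, must be doing the work; the implied blended royalty per chip grew roughly:

\[
\frac{1+0.11}{1-0.03}-1 \approx 14.4\%\ \text{y/y}
\]

A double-digit jump in royalty per unit against falling unit volume is consistent with a shift toward higher-royalty premium smartphone silicon, i.e., the iPhone.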

ARM’s FQ4 guidance of royalties up 30% y/y is also strongly influenced by Apple, in our view. That may come as a surprise to many given Apple’s soft guidance for its March quarter.

We believe Apple started a strong ramp of ARM-based M3 wafers at TSM's 3nm node in January in preparation for the launch of next-generation iPads, likely in the June quarter. We think TSM's strong January revenue reflects strength at its 3nm node and lends credence to our iPad thesis. ARM's royalty growth in the March quarter and beyond, we think, is due in part to Apple's iPad refresh and the higher royalty rate of ARMv9 at 3nm vs. the previous generation of iPads at 5nm.

The surprise upside to the print/guide driven by NVDA: The upside to ARM's print/guide is due to unexpected strength in its licensing business, in our view. ARM's cfo spoke of upside from selling 'additional licenses' to the AI end-market that was 'just not in our plan and not anticipated'. We think the upside surprise in the Q3 print and the Q4 guide comes from licensing revenue from NVDA's GH200 superchip, which we believe is targeted for shipment to Microsoft.

According to our checks, NVDA is licensing ARM in advance of actual shipments of GH200 servers. The timing of those shipments will depend on when Microsoft finishes porting Windows Server OS onto NVDA's ARM platform; we believe Microsoft is not yet ready, and the timing of GH200 shipments to Microsoft remains unclear.

Net/Net:

  • ARM management appears confident of its royalty revenue growth. We expect to see ARM’s royalty strength translate into revenue growth at Apple.
  • ARM management appears circumspect in its licensing business outlook beyond the current quarter. We do not expect the upside surprise in ARM's licensing business to translate into strength at NVDA's ARM server business in the medium term.