Jahanara Nissar

AAPL: Taking a contrarian view

Versus the muted expectations on the Street for near-term iPhone momentum, we are long into print. The Street is inundated with chatter of iPhone16 order cuts. We take a more nuanced view and look for upside to the Street's lowered expectations for the Dec quarter.

Our expectation for Dec upside is not based on a boost from Apple Intelligence. Rather, we focus on more traditional dynamics. Our view is based on the potential opportunity Apple may be seeing in the general weakness at Samsung, unrelated to the relative progress either of the smartphone players has made in AI.

Signs of Samsung weakness are out there, and yet the Street has not picked up on them: not just in the weak qualitative outlook Samsung provided for its smartphones at its earnings call earlier today, but also in QRVO's disastrous guidance two days ago. Our work into the supply chain shows Samsung potentially giving up shelf space this holiday season at telcos and retailers, an opening we believe Apple would be only too willing to pounce on. We think it is Samsung, not Apple, that has more likely cut build plans for the holiday season. And that leaves Apple with a competitive opening.

Following Apple's WWDC event a few months ago we took the view that Apple Intelligence, on which we are bullish over a multi-year horizon, would be no more than a moderate catalyst for iPhone sales in FY25. Into the June quarter earnings call, Apple bulls were looking for iPhone growth in the teens for FY25 and overall revenue up double digits, vs. our estimates of both up mid-single digit (link). Apple's tepid guidance for the September quarter and the negative news flow over three months whittled Street expectations down to more realistic levels, now in line with our estimates of a quarter ago.

Over the past three months, AAPL has traded only in line with the market, justifying our call a quarter ago to sit out the June quarter earnings event. Into the earnings call today, though, given muted Street expectations, we would be long into print. FQ1 iPhone guidance in line with the consensus of up mid-single digit may be enough to lift the stock.

We remain comfortable with our long-term view that the hidden value in the stock lies in its potential to pull up to par with AI peers over time, but without the extraordinary spending on opex/capex seen at peers. At a slightly reduced multiple of 32x on a slightly higher FY25 EPS estimate vs. a quarter ago, we maintain our $240 PT (link). However, depending on the tenor of the call today, we could look to raise our PT.

–xx–

The Android ecosystem runs into headwinds: Our work into the smartphone channel shows Samsung potentially cutting builds of its high-volume smartphone models, a rather odd finding given that Samsung's A-series models have become a mainstay on shelves in the high-volume pre-paid cellular market. When QRVO, at its earnings call two days ago, talked of its largest Android customer cutting back its mid-tier models (the A-series at Samsung) and tilting towards entry-level models, we think we have confirmation of our suspicion.

This trend towards 'bar-belling' smartphone models, i.e. terminating mid-tier models and allocating volume towards the entry-level and premium ends of the spectrum, has been going on for 1-2 years. However, in recent months, for reasons we do not quite understand, this trend appears to have accelerated at Samsung. The latest IDC report for Q3 shows Samsung shipments down mid-single digit vs. overall global smartphone units up mid-single digit. Apple outperformed Samsung: iPhone unit growth was reported in line with the overall market. Apple picked up 200bps of share q/q, at the expense of Samsung and Xiaomi, according to the report.
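The share math implied by the IDC figures can be sketched as follows. Note the specific inputs here are illustrative assumptions: the ±5% growth rates stand in for "mid-single digit", and the ~20% starting share for Samsung is a placeholder, not a figure from the report.

```python
# Illustrative share math: a vendor growing slower than the market cedes share.
# The +/-5% growth rates and the 20% starting share are assumptions.
def share_after_growth(share, vendor_growth, market_growth):
    """Market share after one period of differential unit growth."""
    return share * (1 + vendor_growth) / (1 + market_growth)

samsung_share = 0.20                               # assumed starting share
new_share = share_after_growth(samsung_share, -0.05, +0.05)
print(round(new_share, 4))                         # ~0.181, i.e. ~190bps of share ceded
```

Under these assumed inputs, a mid-single-digit unit decline against a mid-single-digit market expansion costs roughly 200bps of share, consistent in magnitude with the share shift IDC attributes to Apple.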

China shipments for Apple too seem benign, with the Canalys report showing iPhones holding up at ~10mn units in each of the first three quarters of the year for multiple years running, and a seasonal uptick to the high teens in CQ4.

Signs of stress at Samsung, Apple likely to exploit: Across the gamut of its product segments, Samsung appears to demonstrate weakness and a loss of traditional leadership: logic fabs, HBM innovation, consumer goods and mobile phones. We find that the general weakness at Samsung is having observable effects on its upstream/downstream partners. Into the AMD print a few days ago, we anchored our negative thesis on, among other reasons, Samsung's HBM woes (link).

In the mobile space, we believe Samsung is pulling back on its shipment plans into the holiday season. At its recent earnings call, QRVO said revenue at its largest Android customer was less than anticipated earlier this year.

We think Apple is likely to exploit the pullback in Samsung’s positioning in the retail space by potentially building additional iPhone volume for the holiday season. We expect Apple management to signal more optimism for iPhone in the Dec quarter than it did for the Sept quarter.

Meanwhile on the AI front, Apple ploughs on as the Android ecosystem hesitates: We think Apple's M4 chip does not have the horsepower required to perform the kind of Gen AI functions that are likely to make Gen AI a selling point for Apple devices. On the other hand, AI SoCs from QCOM and MediaTek appear to have more raw horsepower than the M4's 38 TFLOPS. Both in terms of TFLOPS and the sophistication of the OS, we think the Android universe has an edge over iPhones.

However, this edge Android currently has over the Apple universe could soon disappear as 1) the Android universe hesitates to push its advantage; Samsung, Google and QCOM seem unable to come together and deliver eye-popping solutions, and 2) the vertically integrated Apple ecosystem, with a line of sight to 2nm silicon next year, has a shot at better integrating hardware/software and vaulting over the present-day advantages the Android ecosystem may have.

We think major improvements in image/video generation in iPhones can be expected only in the FY26 cycle when 2nm M5 silicon launches.

Apple – patient and prudent: We expect investors to complain about the lack of eye-popping AI applications on the current iPhone16. However, Apple's hyperscale peers, despite their heavy investment in opex/capex, find themselves unable to impress investors; witness the dour response to META and MSFT earnings last night. Do investors really have a choice but to give Apple the benefit of the doubt and hope that Apple extracts superior ROI from its prudent, one might even say skeletal, AI investments?

Financials: We are leaving our model relatively unchanged from a quarter ago. We continue to model iPhone segment revenue and overall revenue up 6% and 6.7% respectively. Street estimates have drifted down to where we have been, as the more bullish expectations got whittled down. We have been modeling FY26 revenue at $443bn. Three months ago, the bulls got to our FY26 estimate in FY25 itself! Not anymore. Expectations for FY25 have moderated even as the stock has held up in line with the market.

At a slightly reduced multiple of 32x on our slightly higher FY25 EPS estimate, we maintain our $240 PT. However, depending on the tenor of the call today, we could look to raise our PT.
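The PT arithmetic is simple multiple-times-EPS. The implied FY25 EPS of ~$7.50 below is our back-calculation from the stated 32x multiple and $240 PT, not a separately disclosed figure.

```python
# Price target = P/E multiple x forward EPS. Back-solving from the stated
# 32x multiple and $240 PT implies an FY25 EPS estimate of ~$7.50.
multiple = 32.0
price_target = 240.0
implied_eps = price_target / multiple
print(implied_eps)   # 7.5
```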

Exhibit 1: Key financial estimates

Source: Bloomberg, Lynx Equity Strategies

KC Rajkumar

Samsung Woes: Impact on MU, AMD, AAPL

An article in Digitimes highlights large-scale problems in Samsung's logic foundry operations, not very surprising given the negative outlook from ASML last month. This piles on top of the series of negative comments in Samsung's earnings call last week: DRAM/NAND capacity adjustments to normalize inventory, an ongoing slowdown in smartphones, and lackluster progress on HBM3e.

We have been highlighting the widening impact of Samsung’s woes to companies in our coverage. A severe drop in overall profitability, as reported at the 3Q earnings call, we believe is having an impact on Samsung’s ability to carry out core functions, regardless of the tenor of end demand. In this note, we briefly highlight the Samsung-related calls we made last week to companies in our coverage.

Positive for Micron: We wrote in a note last week that our checks show at least one DRAM wafer fab at Samsung has been taken offline since early last month (link). Into MU's earnings in September, we wrote that Samsung had begun to cut back packaging of DRAM dies (link). Furthermore, continued delays in Samsung's HBM3e qualification hold HBM supply into the channel in check. Samsung's actions curtailing the supply of DRAM and the ongoing delays in HBM3e are clearly beneficial to MU. While we have no expectation for MU stock to head for its highs of the year, we think Samsung's woes are a tailwind for MU. We think there is upside to our $110 PT, but likely no more than $120.

Negative for AMD: We made a negative call into the AMD print last week. In our preview, we cited Samsung's inability to deliver HBM3e as a key reason why AMD might be unable to provide upside surprise to its MI300X outlook (link). Samsung is a key supplier of HBM to AMD. Management raised its 2024 outlook only in line with consensus and disappointed the Street by not providing a peek into 2025. The stock is down ~15% since the earnings event and near its lows of the year, as we had called out in the preview. We do not expect buyers to step in, as there are few positive catalysts to lift the stock.

Positive for AAPL: Worsening profitability at Samsung may be impacting its balance sheet, which in turn impacts its ability to finance smartphones for telco and retail channels. We think this works to AAPL's advantage. We think iPhones, regardless of the progress in the rollout of Apple Intelligence, are likely to grab incremental shelf space as Samsung recedes from its dominant position in the US and European markets (link). In the haze of calls on the Street for cuts to iPhone builds, we think the Street underappreciates the potential for weakness in Samsung's financing muscle to provide tailwinds to iPhone supply into the channel. There is no such weakening of Apple's financing muscle. We expect AAPL stock to gain momentum as the Street digests the competitive landscape. We maintain our $240 PT.

Net/net: We do not think the Street appreciates the widening impact of Samsung’s woes on the competitive and supply chain landscape. This is especially so in the case of AAPL iPhones.

KC Rajkumar

SMCI: What is to be done now?

A darling of Wall Street until just a few months ago, SMCI now finds itself in financial, regulatory and legal purgatory. When we took a negative view on the stock into the previous earnings call, we could not have anticipated the accounting problems and the existential threat the Company now faces (link). Into the previous earnings report we expected the Company to guide revenue soft. Instead, the Company raised its revenue outlook significantly, but also missed on profitability significantly. Investors were not amused.

With the Company now staring into the abyss, what does it do from an operational perspective? Company management may forgo brightening up short-term optics and instead focus on mending that which ails it. And what ails it? In getting into the liquid-cooled GPU server market, which requires heavy investment, we think the Company has bitten off more than it can chew.

Inventory on hand exploded to $4.4bn exiting FY24, tripling from a year ago, vs. revenue over the year merely doubling. The abrupt hit to inventory arising from the high cost of liquid cooling kits for the H100/H200 directly impacted margins and profitability. Cash flow from operations for FY24 was reported at negative $2.5bn vs. positive $663mn a year ago. Over the course of FY24 the Company raised ~$1.6bn in a convertible issue; a secondary offering raised $1.75bn in gross proceeds. And yet the Company exited the FY merely neutral on a net cash basis.
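The deterioration in inventory efficiency falls out of the stated multiples. The implied prior-year inventory (~$1.5bn) is our back-calculation from "tripling", not a reported figure.

```python
# Inventory tripled while revenue merely doubled, so inventory per dollar of
# revenue rose ~1.5x. Prior-year inventory is back-solved from "tripled".
inv_fy24 = 4.4                  # $bn exiting FY24 (reported)
inv_fy23 = inv_fy24 / 3         # ~$1.47bn, implied, not reported
inventory_growth = 3.0          # "tripling from a year ago"
revenue_growth = 2.0            # revenue "merely doubling"
ratio_change = inventory_growth / revenue_growth   # inventory intensity up 1.5x
print(round(inv_fy23, 2), ratio_change)
```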

In order to ensure long-term survivability, we think it has no choice but to take the even more expensive Blackwell program off its shipment pipeline. And this necessarily means lowering FY25 revenue guidance. Recall that the FY25 revenue guidance of $26bn-$30bn includes revenue from Blackwell in the fiscal second half. We think the Company's weakened financial situation leaves it with few alternatives. Could this improve the trajectory of gross margin in the back half of the FY? It could. And that might turn out to be positive for the stock.

While we have no view into the specifics of the accounting problems that triggered the resignation of the Company's external auditors, we wonder if management's response to the abrupt changes in cash flow and balance sheet items may have had something to do with it.

–xx–

The high toll of GPU servers: Even before the abrupt drop in margin reported last quarter (Exhibit 1), Company fundamentals had already begun to deteriorate. Inventory on hand began spiraling upwards (Exhibit 2) and y/y growth in opex inflected sharply upwards (Exhibit 3). Recall that shipments/inventory of NVDA servers have been associated with the Hopper family of GPUs. Transition to the Blackwell family is likely to raise the inventory and cost burden further, not just because of the higher upfront GPU cost but also due to the higher component costs associated with higher power and more complex cooling systems.

Unless SMCI does another round of capital raising, the balance sheet may not support the additional burden of taking on Blackwell. Given the current regulatory situation, a capital raise is quite out of the question.

Our view: We think SMCI may have to give up its ambitions for Blackwell servers. We expect Foxconn to take over as the lead (and perhaps the only) ODM for Blackwell. Perhaps even a portion of SMCI’s Hopper pipeline may have to be given up, to right-size the balance sheet and cash flow to the new reality.

Medium term – focus on H100: If the Blackwell program comes off the books, we think SMCI's prospects have a better chance of mending. We may be in the minority in this view, but we think H100 servers could well become the workhorse for AI inferencing well into the future. Whereas Blackwell and its follow-ons could be the workhorse for modeling workloads, we think the H100 could remain the choice for inference workloads due to rising availability, falling prices and a growing secondary market. As such, SMCI may not lose much by giving up on its Blackwell ambitions and instead focusing on the H100, a chip with which SMCI has had quite some experience.

With time and experience, SMCI could become more efficient in sourcing, building and shipping liquid-cooled H100. We think SMCI has a better chance of mending its inventory, opex growth and profitability if it sticks to the H100. We think demand for H100 servers is not going to sunset with the arrival of Blackwell. The two GPUs run complementary workloads. We do not expect Blackwell servers to displace H100 servers in AI data centers.

Expertise in liquid cooling systems: We find that the state of the art of liquid cooling systems for GPU server racks is surprisingly primitive. Our view is that NVDA has not been closely involved with the design and manufacturing of the liquid cooling systems. The ODM/OEM vendors have gone ahead and largely designed their own systems, resulting in a hodge-podge of designs across server rack vendors. We have heard from the downstream channel that the poor design of cooling systems leads to temperature variance on the GPU board, non-optimal operation of servers and workload inefficiencies.

Among the vendors, our checks show SMCI delivers some of the better designed systems. In working closely with NVDA, we think SMCI has built up internal expertise in designing liquid cooled systems. Innovations in this arena could offer SMCI an avenue to increase customer satisfaction and profitability.

Net/Net: SMCI finds itself facing daunting challenges. The first task at hand is to mend its unsustainable trajectory of inventory and profitability. And this requires, in our view, retrenchment of its ambitions. Given its attenuated ability to raise more capital and dire financials, we think SMCI may have to lighten up on Blackwell ambitions in order to better serve the H100 market and to ensure long-term viability of the Company.

KC Rajkumar

Semis: How to frame the volatility

Semis had been sailing smoothly for a little over a month until yesterday, when the sector ran into a spot of rough weather. Back-to-back squalls had the Street running for cover. Two separate headlines drove semis down: ASML down 16% and NVDA down 4% dragged the sector lower and forced the bulls to revisit their thesis. Rather than reach for a diffuse, over-arching negative thesis, we lay out below our thoughts on the specific reasons behind the headlines and on how to trade the volatility.

From a trading perspective: 1) we look to buy the sell-off in AMAT/LRCX following the ASML earnings release; 2) we would not dismiss yesterday's NVDA-related media story on country-specific export caps, which we think could develop further and put NVDA under pressure; and 3) as for TSM, until the AI export policies are clarified we think it prudent to stay on the sidelines.

–xx–

ASML – what happened? In an unexpected pre-release of its earnings packet, the company lowered its revenue outlook for 2025 from €30bn-€40bn to €30bn-€35bn due to 1) a largely expected lowering of the revenue contribution from China to 20%, vs. the 2024 contribution running significantly higher, and 2) of much greater concern to the Street, a demand push-out of EUV tools due to slower ramps of new nodes at 'some' logic foundry customers. No prizes for guessing who is meant by 'some'.
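Measured at the midpoints of the two guidance ranges, the revision amounts to a roughly 7% cut; the midpoint framing is our own simplification.

```python
# ASML 2025 revenue guidance cut, measured at the midpoints of the ranges.
old_low, old_high = 30.0, 40.0      # EUR bn, prior 2025 outlook
new_low, new_high = 30.0, 35.0      # EUR bn, revised 2025 outlook
old_mid = (old_low + old_high) / 2  # 35.0
new_mid = (new_low + new_high) / 2  # 32.5
cut_pct = (old_mid - new_mid) / old_mid
print(round(cut_pct * 100, 1))      # ~7.1% cut at the midpoint
```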

ASML – China export risk – our view: There is some concern in semicap circles that in the waning days of the Biden administration, more onerous export restrictions could be imposed on the industry. That said, we think ASML carries more incremental China risk than its US-based peers. Based on statements from AMAT and LRCX, we think the US-based firms have already de-risked their export-control exposure in large measure. Non-US vendors such as ASML and TEL had pushed back against US pressure for much of the year. Until now. A month ago, it was reported in the media that the Dutch government had imposed new export controls on ASML, thereby aligning itself more closely with US export policy. We think the large cut in ASML's 2025 China exposure disclosed yesterday is a response to the new export policy handed down by the Dutch government.

Meanwhile, AMAT and LRCX have already been hewing closely to US government policy, and therefore their incremental risk from further export restrictions may be limited. In this regard, the sell-off in AMAT and LRCX yesterday may have been excessive.

ASML – advanced logic risk – our view: We think the 'delayed timing of EUV demand' is related to INTC. Should this be a surprise to investors? We have been down this road before, after Intel announced a capex cut at its Q2 earnings call. We take the view that a significant portion of the ASML miss is due to Intel pushing out EUV demand. And just as we did the last time around (link), we take the view that, outside of ASML, the other major semicap names had already dialed out the contribution from INTC. We think AMAT/LRCX offset weakness at INTC with strength at TSM wafer foundry and packaging.

Semicap – Net/net: After the vol subsides, we will be buyers of AMAT and LRCX. But just so that there are no further surprises, we would wait a few more weeks and get past the US election before backing up the truck.

NVDA – what happened? Bloomberg put out a negative article pre-open, causing NVDA and AMD to sell off at the market open. The article, quoting unnamed sources, reported that the Biden administration was looking to cap sales of advanced AI chips from NVDA and other companies on a 'country-specific' basis, to countries beyond China.

This is not the first time such an idea has made it to the tape. On previous occasions, for lack of follow-through from official government sources, the Street shook off the news. This time, however, we would not be quite so sanguine. We would not rush in to buy the dip. If confirmation were to emerge from official government sources, NVDA could see further downside.

That there is a common theme to the export controls the US government is seeking to place on semicap equipment/consumables and on advanced AI chips cannot be denied: both categories of export controls have a national security angle. In the waning days of the Biden administration, it is possible that government agencies are making one last-ditch effort to leave a permanent impact; it would be months before the next administration is in place to make a similar effort, if at all.

NVDA – some questions: We think the key to framing the downside risk lies in assessing which countries could fall under export-capped status. If the target countries are the Middle East countries mentioned in the Bloomberg article, there may not be much to worry about given NVDA's low exposure to those countries. But what if countries ranking higher on the revenue-exposure scale are under review by the government?

As per NVDA's latest 10-Q, on a six-month basis, Taiwan and Singapore each account for ~17% of revenue vs. the US at 46%. Nearly 90% of NVDA sales in the two most recent quarters are in the data center segment. One infers there is a whole lot of data center GPUs headed to Taiwan and Singapore, countries which are not subject to the kind of export controls China is.

Combining the revenue contributions from Taiwan, Singapore and China adds up to roughly the contribution from US-based customers. Unlike commodity CPUs, NVDA's AI GPUs are mostly sold direct-to-customer. There isn't much mystery as to who the direct customers in the US are. But who are the direct customers in Taiwan and Singapore? Are there new data centers being built in either of these two countries?
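The 10-Q percentages cited above can be tied out with simple arithmetic. The ~12% China contribution below is our back-calculation from the statement that Taiwan + Singapore + China roughly equals the US contribution; it is not a figure read directly off the filing.

```python
# Tying out NVDA's geographic revenue mix (six-month basis, per the latest 10-Q).
# China's ~12% is inferred: Taiwan + Singapore + China ~= US contribution.
us = 0.46
taiwan = 0.17
singapore = 0.17
china_implied = us - taiwan - singapore   # ~0.12, our back-calculation
print(round(china_implied, 2))            # 0.12
```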

NVDA – Net/net: Is it possible that the countries facing export caps for advanced AI chips are Taiwan and/or Singapore, as opposed to the Middle East countries the Bloomberg story speculates about? If the US government could pressure friendly nations such as the Netherlands and Japan to prevent leakage of silicon fab equipment technology, why couldn't it pressure friendly nations such as Taiwan and Singapore to prevent leakage of advanced AI chips to China? If it is indeed Taiwan and/or Singapore on which the US government is considering placing export caps, then the risk to NVDA's revenue could be considerable. Until the identity of the countries facing export caps is known from official sources, we think it unwise to step in and buy the dip.

TSM – thoughts into print: The narrative out of TSM is largely anticipated: 1) a further pull-in of CoWoS capacity spending to support extraordinary demand from AI, and 2) unchanged, muted demand from end markets outside of AI. The stock ran up into print on the expectation that 2024 revenue growth will beat the previous guidance of 'modestly above mid-20s', now running perhaps closer to 30%.

In order for the stock to make a new high, TSM management needs to 1) provide a qualitative view into 2025 and/or 2) raise the five-year AI revenue outlook above the 50% CAGR provided at the March '24 earnings call.

TSM – Net/net: Rather than TSM being the bellwether for semis it used to be, given the outsized influence of NVDA, TSM may now be under NVDA's gravitational pull. Negative headlines for NVDA, like yesterday's, are likely to weigh on TSM as well. While we are comfortable with our NT$1250 PT, we are not looking to add to the position until there is some improvement in the traditional end markets. For now, we will stay on the sidelines.

KC Rajkumar

GOOG: Pixel 9

Google hosted an event to roll out Gemini Nano-powered applications on the just-launched Pixel 9 smartphone. The full set of AI features will be made available in Android models launching later this year.
Quick summary:
- Google rolled out a series of small but significant improvements in existing Android features powered by Gemini Nano, plus some new features, all of them available on Pixel 9.
- Nothing jumps out as a killer app, but that perhaps was not the intent.
- Google's intent, we suspect, has been to make the overall experience more appealing, and to make you switch from the Apple iPhone.
- Many Gemini features are available on existing Android models.
- The full experience of Gemini Nano arrives with Android 15, rolling out later this year.
- Many of the AI features demonstrated today can be tried out on the just-launched Pixel 9, per Google management.
Apple has some catching up to do:
- We were aware that Apple had not rolled out an SDK for Apple Intelligence at WWDC.
- As we wrote in our June 16th note, where we raised our Apple PT to $240, iPhone16 at launch would have AI apps mostly built internally at Apple; 3rd-party apps come later.
- According to media reports, and confirmed later by Apple management, AI capabilities are to be rolled out to developers 'in waves' during FY25, not prior to the launch of iPhone16.
- Which is why we did not model our FY25 iPhone estimates too aggressively. We think the AI upgrade cycle at Apple is a multi-year cycle, akin to the rolling improvement in the iPhone camera over many years.
- Apple's valuation is based on more than just the number of devices it sells. It is also justified by Apple's decision not to use NVDA chipsets, thereby sidestepping expensive infrastructure costs and the delay issues that now bedevil the likes of MSFT.
Google vs. Apple:
- Even though, at launch later this year, Android 15 phone models could offer a fuller AI experience than iPhone15, given Apple's loyal customer base we do not expect many 'switchers' from iPhone.
- Having said that, we think Google has a head start over Apple (and other hyperscale peers) given its top-down approach: its own LLM, its own AI accelerators, fully owned data centers, and cloud AI computing fully integrated with the AI client device.
Google vs. MSFT:
- It would be interesting to watch how MSFT deals with the delay of NVDA's Blackwell. We think investors do not appreciate the negative impact on MSFT's long-term planning.
- Google, on the other hand, is master of its own AI universe. We think its plans to ramp its next-generation TPU6 silicon and to expand AI infrastructure capacity are on schedule.

NVDA: Optimists vs. Skeptics

NVDA investors wade today into one of the more confusing earnings events in recent memory. During NVDA's 'quiet period', investors have been pummeled with a profusion of lightly sourced information with regard to product delays, alternate product SKUs, detailed failure analysis, and so on. The supporting cast of analysts, channel partners and Asia media sources appears confident that none of these concerning details should put a dent in NVDA's revenue ramp profile, such is the strength of end demand. As their confidence can only come from NVDA itself, we suspect the company has messaged the gist of what is to be said on the call today through channel partners.
As such, we expect the message on the call today to be aligned with that from the supporting cast: a minor delay in the revenue ramp of Blackwell, to be offset by higher-than-expected demand for Hopper SKUs. With the stock back at its previous high, we can safely assume this view is mostly priced in. What could nudge the stock into a higher orbit? For the stock to launch into a higher orbit, management needs to communicate that demand for Hopper more than offsets the revenue delay from the Blackwell ramp. In one form or another, we expect management to communicate just that. The first reaction to the earnings call may well be to the upside. But that could be a head fake.

–xx–
NVDA has become a 'battle ground' stock: The stock came under severe pressure in mid-July as the Skeptics questioned the wisdom of the vast amounts of capex spent by hyperscale players chasing what the Skeptics claim to be less-than-optimal ROI. A few weeks later, in early August, the hefty revenue outlook from SMCI encouraged the Optimists to return, driving the stock back up. The resolution of the battle, we suspect, will emerge only in the days and weeks following today's earnings event, as investors mull over the details of the call and as management resumes meeting with investors.
We are in the Skeptics camp: We do not support the idea that increased shipments of H100/H200 could make up for the delay in Blackwell shipments. All things being equal, we think it appropriate to adjust NVDA's data center revenue ramp downward vs. consensus estimates until Blackwell gets into volume shipment, which could be well into NVDA's FY26.
Our view – 1) the constrained supply of H100 earlier in the year may well have turned into excess supply as demand stabilizes, as seen in the ease of availability, falling rental/token pricing and an emerging secondary market; 2) the H200 may have good demand, but could suffer from low yield and heat-related failures (link); and 3) there is no way to assess with any precision the actual delay in volume shipment of Blackwell; it could be more than just a few months. We also think heat-related failures could be endemic to GPU/interposer modules, worsening with higher HBM content. Blackwell has higher HBM content than Hopper.
A caveat: Despite potential issues with the H200, we suspect ODM/OEM partners may absorb sales from NVDA and place the volume in their inventory. NVDA management may claim sales of H200 GPUs have picked up, but we are not yet seeing signs of H200 racks being absorbed into data centers in a big way. A rather curious article in Digitimes last week relating to Foxconn caught our attention. The article talks about 'complex AI server trading models' and a move away from the simple 'buy and sell' model to more complex models, which we suspect involve leasing GPUs to end customers rather than outright sales. Such trading models usually result in pulling in future 'real' demand and in a build-up of channel inventory.
H100 shortage may have turned into surplus: H100 lead times have come in. The question is whether end demand is keeping pace with the increased supply of H100. We do not think so; we think supply may be outstripping end demand. We are seeing several signals that lead us to this conclusion.
Three months ago, we noted that HGX H100 server rental pricing, as reported by GPU-specialized data centers, had dropped to ~$2.25 per hour vs. ~$4.75 earlier in the year (link). Our latest checks show that pricing has dropped further, now running below $2, at levels where smaller DCs may well be pricing server rental below operating cost.
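The cumulative decline in H100 rental pricing over the year works out to roughly 60%. For the calculation, the "below $2" endpoint is approximated at $2.00, an assumption on our part.

```python
# HGX H100 hourly rental pricing through the year, per our checks.
# The final "below $2" reading is approximated at $2.00.
early_year = 4.75
three_months_ago = 2.25
latest = 2.00
drop_to_mid = 1 - three_months_ago / early_year   # ~53% decline by mid-year
drop_to_date = 1 - latest / early_year            # ~58% decline to date
print(round(drop_to_mid * 100), round(drop_to_date * 100))  # 53 58
```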
VC platforms in Silicon Valley that bought GPU rental time contracts at wholesale prices earlier this year, for use by their AI startup clients, have, we understand, begun to offload time slots owing to reduced demand from those client companies.
Blocks of H100 GPUs have begun to appear on the secondary market in Silicon Valley as tactical buyers of the once highly valued chips now face reduced demand and the falling value of the chips. We are also aware of a secondary market for H100 in Hong Kong, which is now reporting declining prices.
As for the hyperscale players, there are no longer constraints on the availability of MSFT's Copilot; users can gain access at will. We are also hearing scattered reports of some hyperscale CSPs slowing down or even pausing purchases of H100.
H100 – why the surplus? In short, because hyperscale CSPs may have, to paraphrase an expression from the Google earnings call, converged on a set of base capabilities, i.e. trained models, sooner than NVDA had anticipated. And this releases a large amount of the H100 installed base capacity formerly allocated to training for inferencing.
Two quarters ago, NVDA management commented that inferencing was running at ~40% of the workload on NVDA GPUs. The implication was that training still accounted for the majority of the workload. Training workloads require a huge commitment of GPU cluster size and continuous usage time, thereby sopping up a large amount of (mostly H100) GPU capacity.
Today, we think training workloads may use no more than 20-30% of H100 capacity, thereby releasing 70-80% of the installed H100 base for inferencing.
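The capacity-release arithmetic above can be sketched with this note’s own estimates; the workload shares below are our assumptions, not reported figures:

```python
# Illustrative arithmetic for the shift in H100 workload mix described above.
# Workload shares are this note's estimates, not measured data.

installed_h100 = 100.0  # normalize the installed H100 base to 100 units

# Two quarters ago: inferencing ~40% of workload => training ~60%
training_share_then = 0.60
# Today (our estimate): training down to ~20-30%; use the midpoint
training_share_now = 0.25

inference_capacity_then = installed_h100 * (1 - training_share_then)
inference_capacity_now = installed_h100 * (1 - training_share_now)

print(f"Capacity available for inferencing then: {inference_capacity_then:.0f}%")
print(f"Capacity available for inferencing now:  {inference_capacity_now:.0f}%")
```

On these assumptions, inference-available capacity nearly doubles, from ~40% to ~75% of the installed base, consistent with the 70-80% range above.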
The peak in training workloads is behind us: Six months ago, umpteen LLMs were in the process of being trained. In recent months, though, we think CSPs have settled on a handful of fully trained LLMs that they plan to take into production, i.e. to run inference workloads at scale.
So, for instance, Google’s Gemini 1.5 is frozen, and so is MSFT’s internal LLM used for CoPilot. At META, with the recent launch of its Llama3 405bn model (which took 3 months to train), we think a production-worthy family of models is in place. At AMZN, which had experimented with a menagerie of LLMs, we think AWS has converged on a handful, including Llama3. At all these major players, we think the training days are over. All the publicly available data sets have been consumed. What little training remains is focused on a dwindling supply of fresh data sources.
Independent LLM vendors too may be largely done with modeling, and even if they are not, they may have run out of resources. Stability AI is on the way out due to solvency issues, as reported in the media. Anthropic may be largely done with modeling following the release of Claude3 a few months ago. Even OpenAI may find it difficult to persist with training ever larger models given its relatively small revenue base.
If the peak in training workloads on NVDA GPUs is behind us, we think it releases a large installed base of GPUs to inference workloads, thereby increasing H100 availability and driving down server rental pricing, price per token and the price of the GPU itself.
If H100 is in surplus, why are hyperscale CSPs raising capex? Are they raising capex due to unmet infrastructure capacity for current demand? We do not think so. We think they are raising capex 1) to meet future demand for AI services they anticipate will emerge, 2) to fund shell construction and power/water utilities, 3) to buy inference-optimized GPUs and 4) to build out mid-sized data centers distributed geographically worldwide.
So, for instance, of the $19bn capex MSFT announced for the June quarter, we think the bulk of the near-term cash spending will go towards land/shell acquisition worldwide. The monies allocated for infrastructure we think will be spent much further down the road, depending on actual demand.
But the key is this – we think future infrastructure spending will go to inference-optimized GPUs. In the case of MSFT, we think they will hold out for a GPU with a higher density of HBM than the H100 offers, i.e. the B200 or denser versions.
We do not think MSFT is likely to populate new shells with H100/H200, only to rip out the racks in less than a year and install Blackwell racks. That would simply not be a capital efficient approach.
While MSFT waits for Blackwell, we think they are likely to aggregate existing capacity of H100 at third party data centers. 
Blackwell could be a winner, but when? We think hyperscale CSPs are likely to stick to the plan of allocating fresh infrastructure spending to inference-optimized, higher memory density GPUs such as Blackwell. We do believe there will be good demand for Blackwell as and when supply becomes available.
What is a realistic timeline for Blackwell to ramp up in production quantity? Before taking delivery at production scale, we would expect hyperscale players to take delivery of a limited volume of test servers and run field testing for several months. If initial shipment is delayed by, say, one quarter, as media reports seem to suggest, we would expect another 1-2 quarters before Blackwell is shipping in volume. This may push the revenue ramp deep into NVDA’s FY26.
H200/B200A – could they ship in the interim? At roughly equal HBM density (144GB), these GPUs could be better suited for inference workloads than the lower density H100.
Over the past two months, there have been reports of heat-related failures with the H200. Our checks show instances of customers rejecting test racks due to GPU failures in the field.
It is no secret that these GPUs release intense heat, given ~1000W of power consumption per chip. At the root of the problem, we suspect that NVDA has not standardized an exact solution for heat extraction, leaving the problem to the ODMs/OEMs to solve. There could be important differences in the cooling systems adopted by the various vendors, resulting in failures at some of them.
The upshot is that hyperscale players are unlikely to take delivery unless they are convinced that NVDA and its channel partners have settled upon a common solution. Until then, we expect shipments of the H200/B200A to be constrained.
Net/Net: We do not support the idea that increased shipments of H100/H200 could make up for the delay in Blackwell shipments. All else being equal, we think it appropriate to adjust NVDA’s data center revenue ramp downward vs. consensus estimates until Blackwell gets into volume shipment, which could be well into NVDA’s FY26.
But this may not be how it turns out. NVDA may still indicate higher than expected shipment of H100/H200 in the medium term. Were this to happen, we think investors would need to worry about a build-up in channel inventory, a negative situation.
We think the first reaction to the earnings call could be to the upside. But that may be a head fake. In the days and weeks ahead, as investors mull over the details of the NVDA call and as earnings reports from hardware/software vendors such as DELL/HPE and CRM/MDB emerge, we would expect NVDA investors to step aside and look for better points of entry.

AAPL: Is the Mate60 a paper tiger?

After an initial surge in the China domestic market in September/October, availability of Huawei’s highly publicized Mate60 premium smartphone seems to have largely melted away. Why? We believe Huawei may not have been able to sustain supply of its Kirin9000 modem chip. And why is that? We believe it is due to inadequate yield of the Kirin chip at the SMIC foundry.

Ever since the Mate60 model launched to much fanfare in China, AAPL investors have been mindful of the challenge from a newly resurgent Huawei. Though AAPL stock has perked up of late on macro tailwinds, on a 6-month basis the stock has underperformed the Qs, as the damage done to the stock by China fears persists. Market research reports and consumer surveys have left investors fearing share loss of the iPhone15 to the Mate60 in the China market.

At a House committee hearing two days ago, a senior member of the Commerce Secretary’s staff testified that ‘neither the performance nor yield’ of the silicon in the Mate60 ‘may match the market of the device’. We wish to highlight the statement. The finding presented at the testimony appears to be the first public disclosure of the US government’s investigation into the Kirin chip. We believe the statement is significant and has positive implications for AAPL.

Going into the 4FQ earnings event, the stock hit the $170 PT we had set three months before (link). The gain in AAPL stock after the earnings event has been largely due to macro tailwinds. Of the big tech peers, we think investor expectations are the most muted for AAPL. With the Fed Chair yesterday markedly shifting towards a neutral policy stance, we think AAPL has more to gain than its big tech peers.

Going forward, whatever headwinds the iPhone15 faces, we have come to the view that the Mate60 is not one of them. If investors layer diminishing idiosyncratic risk from China/Huawei on top of the macro story, we think AAPL could see a nice run-up into year-end.

We are turning modestly positive on the stock and raising our PT to $220 as we nudge up our Dec quarter iPhone expectations slightly ahead of consensus. Our FY24 revenue growth estimate creeps into positive territory, while remaining below consensus. We will, however, temper enthusiasm by noting that an earlier than usual seasonal shutdown of a Foxconn factory may not bode well for the March quarter.

–xx–

AAPL – muted investor sentiment: In a note titled ‘Investor expectations need resetting’, published more than a year ago, we called out FY23 revenue down 1.5% vs. consensus up 4.7% at the time (link). A year later the company printed FY23 revenue down 2.3%, slightly worse than our estimate and significantly worse than consensus. After four quarters of negative revenue growth in FY23, investors have reconciled themselves to little or no growth. The guidance provided for the December (1FQ24) quarter did little to enthuse investors. Investor expectations have indeed reset from the go-go days of the iPhone 12/13. While consensus calls for FY24 revenue up ~4%, we wonder if there is much conviction in the estimate. The China/Huawei risk adds to the gloom.

In our view, whatever headwinds the iPhone15 faces going forward, the risk of share loss to the Mate60 is not one of them, as there is not adequate supply of its Kirin9000 modem chip. In the context of muted investor expectations, reduced idiosyncratic risk out of China/Huawei could set AAPL stock up for a nice run-up into year-end.

Why would Mate60 modem chip have yield and performance issues?

  • Yield: We believe the potential for yield issues stems from the Kirin9000’s large die size and complications arising from printing 7nm features using older generation 28nm fab equipment. The quadruple exposure on stepper tools needed to reduce native 28nm features to 7nm results in a die size larger than if the chip were processed on native 7nm technology. A larger die leads to lower yields.
  • Performance: Why would performance be lower than that of traditional 7nm modems out of, say, TSMC? We think the Kirin9000 transistor is based on a planar architecture vs. the FinFET architecture typically used in a traditional 7nm process. This difference could result in higher leakage currents, a critical metric in benchmarking transistor technologies.
  • Indirect evidence of low output of the Kirin9000 chip out of SMIC: Given the large die size of the Kirin chip, Huawei orders to SMIC should have translated into high sequential revenue growth at SMIC. Apparently they have not: SMIC printed Q3 below expectations and provided disappointing Q4 guidance.
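The die-size argument in the yield bullet above can be illustrated with the classic Poisson die-yield model, Y = exp(−D0·A). The defect density and die areas below are hypothetical, chosen only to show the direction of the effect, not to describe the actual SMIC process:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.5  # hypothetical defect density (defects/cm^2) for an immature process

# Hypothetical die areas: the same design printed on native 7nm vs. a
# multi-patterned older-generation process that inflates the die by ~40%
native_die_cm2 = 1.0
multipatterned_die_cm2 = 1.4

y_native = poisson_yield(D0, native_die_cm2)
y_multi = poisson_yield(D0, multipatterned_die_cm2)
print(f"native 7nm yield:           {y_native:.0%}")
print(f"larger multi-patterned die: {y_multi:.0%}")
```

Because yield decays exponentially with die area, even a modest die-size penalty compounds into a meaningful yield gap, which is the mechanism behind the low-output thesis above.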

Thoughts on the Mate60: We believe Huawei’s Mate60 is intended to demonstrate China’s ability to produce 7nm chips using older generation equipment. This ability may have military applications, thus triggering a US government investigation. However, for consumer products, which require high volume at high yields, the SMIC process may not be capable of producing silicon on the scale required to move the needle for AAPL iPhones. Insofar as Apple’s iPhone15 is concerned, we think the Mate60 is a paper tiger.

Bookend the iPhone upside – Foxconn action raises a red flag: Our checks show that the Shenzhen iPhone factory may have furloughed its workers as early as mid-November, which is unusually early for its seasonal shutdown ahead of the Chinese New Year. Part of the reason could be to decommission and move manufacturing equipment to new geographies, such as Vietnam and India. And part of the reason could be a lack of iPhone order visibility into the 1st half of CY24, potentially a red flag.

And yet, on the positive side:

  • The China market could be seeing a modest resurgence for high-end smartphone models as mobile gaming makes a comeback. The iPhone15 and Xiaomi’s X14 could benefit from the trend.
  • US telco carriers are left with little option on the premium end, as Samsung has not refreshed its Galaxy flagship in ~2 years. Therefore, carriers are having to spend marketing $s promoting the iPhone15.
  • Even pre-paid vendors in the US, such as Cricket, are promoting Apple’s flagship this year vs. their usual preference for refurbished models. We suspect Apple got rid of a lot of iPhone11 inventory at the end of the June quarter, resulting in reduced availability of refurbished models. This is a positive for iPhone’s overall ASP.

Raising AAPL PT to $220 from the $170 PT we set back in Aug’23 ahead of the June quarter earnings (link). Heading into the Sept quarter earnings, the stock hit our previous $170 PT. It has since run up largely on macro tailwinds. However, the risk of share loss to the Mate60 has remained a concern for investors. Were the concern to go away, we would expect AAPL’s Dec consensus estimates to go up.

For the December quarter we model iPhone and overall revenue ahead of consensus. We are raising our Dec iPhone revenue growth estimate from up 2% to up 7.5% vs. consensus of up 5% (Exhibit 1). We model overall Dec revenue up 2.8% vs. consensus up 1.7%. For fiscal year 2024, though, we model iPhone and overall revenue below consensus (Exhibit 2). We model iPhone revenue down 0.5% vs. consensus up 3%. We model overall FY24 earnings at $391bn/$6.4 with revenue up 1.9% vs. consensus of $397bn/$6.56 with revenue up 3.7%. Our $220 PT is based on 34x our FY24 eps.
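As a quick sanity check on the multiple math, using only the estimates stated above:

```python
# Sanity check of the price-target arithmetic above, using our stated estimates.
fy24_eps = 6.40      # our FY24 eps estimate
pe_multiple = 34     # multiple applied to FY24 eps

implied_pt = pe_multiple * fy24_eps
print(f"Implied PT: ${implied_pt:.0f}")  # ~$218, rounded to the $220 PT
```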

MU: Peak earnings + nascent worries = downside risk

A quarter ago, when we raised our FY25 eps estimate to ~$12 (link) with consensus estimates at ~$7, we felt it appropriate to value the stock at $150 by applying a 12x multiple, in the upper half of the historical range. With the stock having hit our PT, and with Street eps estimates now likely at peak earnings, we think it reasonable to consider applying a lower multiple to the stock.

MU management reiterated its outlook that HBM capacity is sold out through CY25. With HBM pricing through CY25 having been contracted out, and with little intention by management to increase DRAM bit supply during the FY25 capex cycle, we do not see further upside to FY25 DRAM consensus estimates. Management’s outlook for HBM revenue in the ‘multiple $billions’ was modeled into estimates ahead of the earnings call. Management’s outlook for record overall revenue in FY25 was already dialed in a quarter ago, when the consensus revenue estimate was at $32.6bn vs. the previous peak of $30bn. FY25 revenue estimates now run more than $10bn above the previous high of $30bn set in FY18. At the recent earnings call, management did little to coax up FY25 estimates. Given the visibility management provided, we think Street expectations could be closing in on peak earnings.

Meanwhile, nascent worries have descended on investors’ minds. 1) August guidance came in under buyside expectations, with no clear reason from the Street for the shortfall. We provide a possible reason for the guidance miss – share loss in mobile DRAM to Samsung. 2) More worrisome are the emerging concerns on HBM yield, which management did not quite snuff out during the call, in our view. Going into the call we had warned of a ‘wrinkle in the AI narrative’ and that management might not provide adequate clarity. Unlike many on the Street, going into the call we pointedly did NOT raise our PT. We fear the issues could be related to NVDA’s H200 ramp, over which MU may have little control.

We believe the Street has pulled forward the ramp in revenue/margins expected for the next 12-18 months. As such, we think the stock should be trading at a multiple associated with peak eps (10x or lower) and not trough eps (12x or higher). Based on peak earnings and nascent yield worries, we cut our eps multiple to 10x. At our FY25 estimates of $39.5bn/$11.8, we cut our PT to $120 from $150.

We note that in the FY18 cycle, with eps peaking at $12, the stock peaked at ~$60. In this cycle, we think eps potential needs to be substantially higher than $12-$14 for the stock to trade at more than 2x the peak set during the previous DRAM surge.

–xx–

Odd statements leave investors guessing: At the top of the Q&A session, in response to a question, the CEO stated ‘we are very much focused on continuing to ramp our [HBM] production and also to improve our yields’. This may sound like a banal enough statement, except that the question was not about HBM yields.

Rather, the question was about qualifying HBM3e for a broader base of customers beyond NVDA. Instead of answering that question, the CEO’s statement on yields sounded more like a disclosure, perhaps lending credence to yield-related chatter that had been circulating on the Street before the earnings call.

Then there is the matter of the HBM ‘trade ratio’. When management says it takes 3x the DRAM wafers to reach a given target of HBM bit output versus the wafers required for traditional D5 bits, does this ‘trade ratio’ refer to the loss of bits MU incurs before shipping HBM3e product to NVDA? Or does it refer to the loss of HBM bits at packaging and testing of the H200 module at NVDA? Who bears the cost of the lost DRAM bits, MU or NVDA?
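The stakes in these questions can be framed with simple arithmetic. The 3x ratio is from management commentary; the baseline wafer count is hypothetical:

```python
# Illustrative 'trade ratio' arithmetic. The 3x figure is management's;
# the baseline wafer count is hypothetical.
trade_ratio = 3.0            # wafers per HBM bit target vs. per D5 bit target
d5_wafers_for_target = 100   # hypothetical wafers for a given bit target in D5

hbm_wafers_for_target = d5_wafers_for_target * trade_ratio
sellable_bits_vs_d5 = 1 / trade_ratio  # each HBM wafer yields ~1/3 the sellable bits

print(f"Wafers for the same bit target in HBM: {hbm_wafers_for_target:.0f}")
print(f"Sellable bits per wafer vs. D5: {sellable_bits_vs_d5:.0%}")
```

In other words, two thirds of the wafer starts behind each HBM bit target are absorbed somewhere in the chain, which is why who bears that cost, MU or NVDA, matters to margins.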

Our view on yield issues: While yield loss at CoWoS packaging has been known for a while, we are coming around to the view that there could be additional yield loss of NVDA’s module (GPU and HBM) during testing of H200 modules. The H200 module perhaps has some of the highest current densities ever recorded in electronic devices, we believe higher than in the H100. Even though the NVDA GPU module is cooled during operation, heat-dissipation challenges could be greater than previously anticipated, leading to heat-related failures.

During the initial stages of the product ramp, it is perhaps a matter of testing a sufficient number of H200 modules to screen for working devices. But as unit volume ramps, the logistics of testing could turn into an exponential problem, slowing down output and the pace of the revenue ramp. There was little on the MU call that gave us confidence that yield issues do not exist. We think management’s odd statement at the top of the Q&A session is cautionary.

If the HBM trade ratio were to decrease, as yields improve, wouldn’t that increase bit supply and reduce MU’s profitability? If the trade ratio were to increase above 3x, wouldn’t the additional yield loss raise concerns with NVDA’s data center customers about reliability of H200?

Is there upside to HBM revenue expectations? We do not think so. 1) Management stated that HBM is sold out for CY24 and CY25. 2) Further pricing increases for HBM are unlikely as pricing has been contracted out through CY25. 3) Further increases in HBM output are unlikely as little of the FY25 capex has been allocated to bit supply growth. Even if NVDA were to increase its demand forecast for HBM, we do not see how MU would be able to increase output or pricing. 4) Samsung’s potential entry into the HBM market at some point in the next 6-12 months could dilute MU’s share, if not at the H200 then possibly at the B100. Recall that the B100 is expected to start ramping in CQ4/CQ1.

Management reiterated the FY25 guidance for HBM at ‘several $billions’. Heading into earnings, we think Street models had already dialed in HBM revenue north of $5bn, and perhaps quite a bit higher. In this context, the reiteration of ‘several $billions’ at the earnings event does little to move the needle.

Management spoke of overall revenue setting a ‘substantial revenue record’ in Fy25. But with sell-side models already at or slightly above ~$40bn versus the previous revenue record of $30bn in Fy18, management’s outlook does not move the needle.

Why did Aug guidance miss elevated expectations? We think this is in part due to weakness in MU’s mobile business arising from China’s import policy. We believe the surprising strength in Samsung’s reported Q2 revenue/profitability comes not from the AI end market, but from share gain in China’s mobile DRAM market.

Due to restrictions imposed by the Chinese government, we think there is excess inventory of MU’s lpDDR5 in the disti channel. So much so that we believe MU’s lpDDR5 disti pricing is lower than lpDDR4 pricing. Under normal channel inventory conditions, lpD5 should be priced higher than lpD4.

Due to China’s import policy restrictions on MU, we think China mobile handset vendors prefer Samsung’s lpDDR5 over MU’s, even for handsets allocated for export. Oppo/Vivo/Xiaomi provide ~30% of global smartphone units. China’s policy largely locks MU out of this market. Due to weak sell-thru, we think lpDDR5 inventory at China distis has been building up for a while. We think MU decided to cut sell-in in the May and Aug quarters to normalize disti inventory.

We think lpDDR5 channel inventory is why MU’s Mobile segment revenue in the May quarter printed down 1% q/q, while each of the other 3 segments reported revenue up for the quarter. We think MU’s overall DRAM bits printing down ~7% q/q is largely due to a decline in lpD5 bits sold into China’s mobile market. We do not expect lpDDR5 demand at China distis to normalize until China’s import policy changes.

The next positive catalyst for the stock is the upcoming TSM earnings event. We expect TSM to sound enthusiastically positive on AI revenue potential for the rest of the year, which will be seen as a positive read-thru for MU’s HBM. While the stock may make another attempt to break above its previous high, we expect the drive to stall as worries regarding the revenue ramp at HBM/H200 persist.

The arc of our PT changes: We started raising our MU PT right after NVDA’s April’23 earnings on the basis of HBM potential (link), when there was still considerable inventory in the DRAM channel. Into MU’s Nov’23 earnings, we spoke of a sudden ‘urgency’ in the DRAM channel as DRAM customers began to sense a tightening of supply, and called for upside to our $80 PT (link). Three months later, into the Feb’24 earnings and with the stock in the mid-$90s, we raised our PT sharply to $150 (link) on our expectation that MU would become joined at the hip to the 2nd phase of NVDA’s AI GPU ramp. Three short months later, into the Apr’24 earnings, with the stock having hit our PT, we decided not to raise it (link).

Given the velocity at which consensus earnings expectations have risen, we think expectations may have become over-extended. We might even say HBM expectations have entered bubble territory. With little upside to FY25 estimates and with nascent yield worries, we think it appropriate now to cut our PT for the first time this cycle.

Net/Net: The months of May/June witnessed a series of positive catalysts from the AI ecosystem, all through which MU stock ran up, finally stalling at our previous PT of $150. At the earnings event two weeks ago, MU management did not provide incremental guidance for taking up Street’s Fy25 revenue estimates. Additionally, investors were left with nascent worries about HBM yield and qualification.

We expect the stock to remain volatile as it ricochets between 1) positive commentary from MU/TSM/NVDA management regarding demand pull from hyperscale customers and 2) investor worries about the pace of HBM/H200 ramp.

We believe the Street has pulled forward the ramp in revenue/margins expected for the next 12-18 months. Based on peak earnings, nascent worries on HBM yield and the potential for a slower ramp of HBM revenue, we cut our eps multiple to 10x. At our FY25 estimates of $39.5bn/$11.8, we cut our PT to $120 from our previous $150.

AAPL: Stock re-rates, more to go

Even after the sharp move in the stock post-WWDC, we do not believe many investors appreciate the real value in the stock. The value lies not in gigantic LLMs or snazzy demos. Rather, AAPL’s value lies in a rich vein of gold that hyperscale peers do not have access to. And even if they did, they lack the means to mine it as efficiently as Apple can, thanks to low-power Apple Silicon. Even after the sharp move to the upside last week, the stock has merely caught up with SPX on a YTD basis.

Apple’s value lies in its ability to personalize the AI experience for its 2+billion installed base of devices and billion+ paid subscribers, based on its access to private data locked up in Apple devices and guarded by Apple’s privacy rules. From a financial perspective, Apple’s value lies in its apparent ability to ramp up AI offerings without having to spend a king’s ransom in capex. This ability was demonstrated by Apple announcing a major stock buyback at the previous earnings call, which would not have been possible were capex to increase anywhere close to the spending at hyperscale peers.

The debate ought not to be merely about whether iPhone16 benefits from an upgrade cycle. We are confident it will, as iPhone16 offers a new class of capabilities. Gen AI joins a long line of capability upgrades over the years – touch-based apps, 5G connectivity, enhanced cameras – each of which triggered a multi-year device upgrade cycle. After two tepid years of iPhone growth, it is only reasonable to expect pent-up demand from the early adopters.

In keeping with historical precedent, we expect the iPhone16 to carry mostly in-house developed Gen AI applications. We expect the iPhone16 to garner demand from early adopters. We expect Gen AI apps developed by 3rd party ISVs to start showing up next year in the iPhone17, which should then encourage another round of upgrades, this time from mainstream consumers and enterprise customers.

But it is more than just iPhones. We expect Apple to integrate Gen AI functionality into its Macs, iPads and even Watches. We expect functionality to steadily improve over the years, which then incentivizes more users to upgrade the gamut of Apple devices. We are not looking for heroics. We are modeling back-to-back years (FY25/26) of overall revenue growth slightly above mid-single digit and eps growth in the high single digits (Exhibit 1). We model FY25 at $415bn/$7.15 and FY26 at $443bn/$7.73; revenue in both years up ~7% and eps up 8%.
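The growth rates cited can be checked against the dollar estimates; we verify only the FY26-over-FY25 leg, since the FY24 base is not restated in this passage:

```python
# Check the stated growth rates against the dollar estimates above.
fy25_rev, fy26_rev = 415.0, 443.0   # $bn, our estimates
fy25_eps, fy26_eps = 7.15, 7.73     # $, our estimates

rev_growth = fy26_rev / fy25_rev - 1
eps_growth = fy26_eps / fy25_eps - 1
print(f"FY26 revenue growth: {rev_growth:.1%}")  # ~6.7%, i.e. 'up ~7%'
print(f"FY26 eps growth:     {eps_growth:.1%}")  # ~8.1%, i.e. 'up ~8%'
```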

The two-day surge following the WWDC event took the stock up intra-day to our current $220 PT. Based on 34x our FY26 eps estimate, discounted back to this year at 5%, we derive a new PT of $240.
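The PT arithmetic works out as follows; the two-year discount period is our assumption, inferred from the math rather than stated explicitly:

```python
# Reconstructing the $240 PT: 34x our FY26 eps estimate, discounted back at 5%.
# The two-year discount period is our assumption, inferred from the arithmetic.
fy26_eps = 7.73
multiple = 34
discount_rate = 0.05
years_back = 2

future_value = multiple * fy26_eps                      # ~$263
pt = future_value / (1 + discount_rate) ** years_back   # ~$238, rounded to $240
print(f"Discounted PT: ${pt:.0f}")
```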

–xx–

The next leg in the AI revolution: AI models in the cloud having reached a certain level of maturity, the next leg in the AI revolution is the distribution of distilled versions of LLMs to hundreds of millions of client/edge devices. This step needs to be taken with great care, as novel forms of attacks and malware running on powerful client SoCs could spell disaster on a scale unimaginable. To preserve its brand as a trusted personal device, we think Apple needs to necessarily stay deliberate in launching AI apps and in allowing 3rd party ISVs into its walled garden of devices. In our view, Apple is not being slow in its AI roll-out; Apple is merely being cautious in its approach to AI. And there is value in that.

Personal data – the real store of value: Having exhausted publicly available data for training giant LLMs, AI vendors thrash about looking for new sources of data to train their models (to reduce hallucinations further). However, there are just two giant untapped data sources, both in private domains, inaccessible to internet-facing Gen AI players such as OpenAI. One such domain is the vast amount of enterprise data locked up in secure corporate data centers, the kind of data which the likes of CRM are trying to monetize. The other untapped domain is the vast amount of personal data locked up in secure personal devices such as the iPhone, Mac, iPad and Apple Watch.

Have LLMs become commoditized? The one idea that we can take away from wwdc – AAPL plans to provide a personalized AI experience to its billion-plus paying customers and 2+billion Apple devices based on ultra-small models trained on data locked up in Apple devices and Apple Cloud. And if there is a corollary, it is this – LLMs have become commoditized. Why else would OpenAI give away its prized models, on which it may have spent $billions, to Apple for free?

The value going forward lies not in LLMs by themselves, but in access to proprietary data for training the LLMs further and for developing useful apps. Personalized Gen AI apps so built and distributed to billions of users worldwide are a work in progress, in our view. Whereas cloud-based Gen AI applications, such as ChatGPT and CoPilot have reached a level of maturity, client-device based Gen AI applications are very much in their infancy.

Apple’s AI apps to launch in waves: We think the first generation of AI-centric devices from AAPL, such as the iPhone16, is likely to carry mostly internally developed Apple apps. Recall that the iPhone2, the model that first launched the apps revolution, carried mostly home-built apps. Apps from ISVs took some time to make their way into Apple’s roster of iPhone apps. We expect a similar cadence for Gen AI apps: apps from ISVs should appear on the second generation of Apple devices, such as the iPhone17. This second wave of AI apps is a FY26 opportunity.

Apple has a moat others lack: Even after the sharp move to the upside after WWDC, the stock has merely caught up with SPX on a year-to-date basis. Its under-performance all year, we believe, was due to a series of investor misperceptions that were corrected at Apple events, the most notable being 1) that iPhones were in secular decline due to share loss and 2) that Apple did not have an AI strategy. The first was addressed at Apple’s earnings event, the second at the WWDC event.

However, investor skepticism lingers. Apple is seen as having been late to the AI party. But is it though? Just as NVDA is seen as having a moat due to CUDA, Apple too has a moat due to its control over billions of personal devices, every one of them a carrier of future AI apps. Google may have had a jumpstart over Apple in the development of in-house AI infrastructure. But Google does not have the kind of control over Android as Apple does over iPhones. And Google does not have Apple’s ecosystem of non-phone personal devices.

Apple is a one-stop shop for ISV access to Apple’s client devices. ISVs could build AI apps based on a range of inputs – touch, pencil, visual inputs and voice activation. That is Apple’s moat. And that is why we think OpenAI is willing to give away its prized LLMs to Apple.

Apple Silicon in the cloud: Apple has another weapon its hyperscale peers, outside of Google, lack – Apple Silicon capable of running complex LLM workloads in the cloud. While peers such as MSFT, AMZN and META struggle to develop internal AI silicon just so they can escape the onerous pricing and power consumption of AI GPUs, Apple announced at the WWDC event that it plans to use its internally developed, fully mature, low-power Apple Silicon for cloud AI workloads.

Will Apple servers be made available in the merchant server market? Down the road, after Apple has established an AI beachhead, we think merchant Apple servers are a possibility. As META’s Llama LLMs are compatible with Apple Silicon, we can see META becoming a potential future customer. But we will leave this possibility out of our financial models.

Net/net: We are raising our PT to $240 from $220. We expect AI adoption by Apple customers to come in waves, with the 1st generation of Apple devices carrying internally developed apps launching in FY25 and the 2nd generation introducing apps from ISVs launching in FY26. We see at least two years of ~7% revenue growth and ~8% eps growth. After two years of stagnation, we expect pent-up demand from early adopters this year to extend to mainstream adopters next year and beyond. Furthermore, thanks to its internally developed Apple Silicon, muted capex needs may allow Apple to exhibit superior ROI and free cash flow compared to its peers.

AAPL: What Is The Cloud Strategy?

If 2023 was the year of text-based interaction with AI models, 2024 appears to be the year of interaction via human voice, ‘chatting’ in the real sense of the word. We were positive on AAPL through the lean months earlier this year, and we remain positive on the expectation of an AI-triggered refresh cycle. While investor focus today will be on dev tools made available to AI developers, our core interest in the name remains unchanged – we believe Apple Silicon offers the most efficient performance-to-power profile for inferencing on-device and in the cloud. But it is up to AAPL to convince investors.

Expectations for the event today – besides mundane text summaries and generative image capabilities, AAPL needs to provide assurance that its Gen AI audio-interactivity and multi-modality capabilities are comparable to peers’ and offer clear value over existing Apple devices. A second topic of interest is AAPL’s business model in its engagement with major 3rd party partners. Investors will need to assess whether the engagements could trigger business conflicts and antitrust issues.

In recent weeks consensus has drifted to the notion that AAPL could form a strategic alliance with OpenAI. In hindsight, perhaps this idea should not have been a surprise. We wonder how many took note that the mobile device on which OpenAI demoed its real-time conversational capability at the GPT-4o event on May 13th was an iPhone 15 Pro, not an Android handset or, for that matter, a Microsoft AI PC.

AAPL is hardly going to preview future Apple devices at the event today. Demos, if any, will necessarily be on existing devices, i.e. the iPhone 15 Pro and iPad Pro. Given that the iPad Pro has the more advanced Apple Silicon, we are hoping to see more of this device in demos today. But client device demos hardly answer a key question on investors’ minds: AAPL’s cloud strategy. AAPL needs to inform developers and investors how it will offload larger tasks to the cloud and, more importantly, what sort of silicon it would use there.

The stock has run up into the event today. A bout of profit-taking is to be expected. We are buyers on weakness. But our opinion is conditional on tight LLM integration with AAPL’s OS and native applications. And just as important, we’d like to hear from AAPL whether Apple Silicon has a future in the cloud (link). We maintain our $220 PT.

–xx–

Cloud latency forces AI workloads to the edge: Many of the capabilities OpenAI showcased at the GPT-4o event are now available on any smartphone via the ChatGPT app. So how would iPhones differentiate? We think the OpenAI demo of real-time conversational AI offers a possible hint. The iPhone 15 Pro running the demo had to use a wired Ethernet connection rather than the more convenient WiFi. We infer from the choice of connectivity that the conversational AI workload was not running on-device. Rather, it was running in the cloud, presumably on NVDA infrastructure. For current-generation mobile devices, we are aware that latency is an issue for real-time conversations with an audio AI agent – not just when only a cellular network is available, but even on typical enterprise WiFi networks.
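A simple latency budget illustrates why the round trip matters. All figures below are our own illustrative assumptions, not measured values or company disclosures; the point is only that a network hop plus cloud inference can exceed the pause a listener tolerates in natural conversation, while a local model avoids the network term entirely.

```python
# Illustrative latency budget for one voice-AI request/response turn.
# Every number here is an assumption chosen for illustration.
CONVERSATIONAL_BUDGET_MS = 300  # roughly where a pause starts to feel unnatural

def turn_latency_ms(network_rtt_ms: float, inference_ms: float,
                    audio_codec_ms: float = 40) -> float:
    """User-perceived delay: network round trip + model inference + audio encode/decode."""
    return network_rtt_ms + inference_ms + audio_codec_ms

# Hypothetical cloud path: congested WiFi RTT plus cloud LLM inference
cloud = turn_latency_ms(network_rtt_ms=120, inference_ms=200)
# Hypothetical on-device path: local SLM, no network hop
on_device = turn_latency_ms(network_rtt_ms=0, inference_ms=150)
print(cloud, on_device)  # → 360 190: cloud overshoots the 300 ms budget; on-device fits
```

Under these assumed figures, the cloud path misses the conversational budget even before congestion spikes, which is consistent with the demo falling back to wired Ethernet.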

Mobile devices can hardly be expected to be hooked up to wired Ethernet all the time. One way to reduce latency is to perform AI tasks, including real-time audio AI, as much as possible on the local client device. This requires beefed-up neural engines on the device. We are sure to hear a lot more about the TOPS metric. AAPL disclosed at the iPad Pro launch event that the NPU on the M4 chip delivers 38 TOPS.

On-device Small Language Models: On-device workloads on mobile devices are necessarily based on SLMs. Even as it partners with 3rd party LLM vendors, we think it is only natural for the orchestration of on-device workloads to be controlled by AAPL’s proprietary Small Language Models. Such models then need to work with 3rd party models from OpenAI and elsewhere and, when required, help partition AI tasks to the cloud. We need to hear the state of AAPL’s SLMs. Google provided considerable detail at its recent Gemini event on how it hands off tasks seamlessly between the cloud and client devices using a spectrum of models – from the large Gemini Pro to the smaller Gemma and Nano models (link). We need to hear a similar narrative from AAPL.

Crucial questions for AAPL – Cloud-based AI workloads: Internet data as well as access to giant databases of images and videos necessarily require cloud data access and cloud-based LLMs. For cloud-based tasks, Google plans to use its Gemini Pro models running on its internal silicon.

What is the equivalent at AAPL? Will AAPL’s AI cloud workloads use OpenAI LLMs running on NVDA servers hosted by Microsoft Azure? Alternatively, is there a future for Apple Silicon in the cloud? If AAPL plans to have a long-term relationship with OpenAI, would GPT-4o be ported to Apple Silicon? Where does MSFT feature in the mix of business relationships?

Net/Net: While the Street appears focused on client device-based AI applications, we think the real question is AAPL’s cloud strategy. On this hinges company valuation. The stock has run up into the event today. A bout of profit-taking is to be expected. We are buyers on weakness. But our opinion is conditional on tight LLM integration with AAPL’s OS and native applications. The company needs to overcome its historical coyness and become more forthcoming about its strategy for cloud-based Gen AI workloads.