Jahanara Nissar

AAPL: What Is The Cloud Strategy?

If 2023 was the year of text-based interaction with AI models, 2024 appears to be the year of interaction via human voice, ‘chatting’ in the real sense of the word. We were positive on AAPL through the lean months earlier this year, and we remain positive on the expectation of an AI-triggered refresh cycle. While investor focus today will be on dev tools made available to AI developers, our core interest in the name remains unchanged – we believe Apple Silicon offers the best performance-to-power profile for inferencing, both on-device and in the cloud. But it is up to AAPL to convince investors.

Expectations for the event today – beyond mundane text summaries and generative image capabilities, AAPL needs to provide assurance that its Gen AI audio-interactivity and multi-modality capabilities are comparable to peers’ and offer clear value over existing Apple devices. A second topic of interest is AAPL’s business model in its engagement with major third-party partners. Investors will need to assess whether these engagements could trigger business conflicts and antitrust issues.

In recent weeks consensus has drifted to the notion that AAPL could form a strategic alliance with OpenAI. In hindsight, perhaps this idea should not have been a surprise. We wonder how many took note that the mobile device on which OpenAI demoed its real-time conversational capability at the GPT-4o event on May 13th was an iPhone 15 Pro, not an Android handset or, for that matter, a Microsoft AI PC.

AAPL is hardly going to preview future Apple devices at the event today. Demos, if any, will necessarily be on existing devices, i.e. the iPhone 15 Pro and iPad Pro. Given that the iPad Pro has the more advanced Apple Silicon, we are hoping to see more of this device in demos today. But client device demos hardly answer a key question on investors’ minds. Investors need to know AAPL’s cloud strategy. AAPL needs to inform developers and investors how it offloads larger tasks to the cloud, and more importantly, what sort of silicon it would use in the cloud.

The stock has run up into the event today. A bout of profit-taking is to be expected. We are buyers on weakness. But our opinion is conditional on tight LLM integration with AAPL o/s and native applications. And just as important, we’d like to hear from AAPL whether Apple Silicon has a future in the cloud (link). We maintain our $220 PT.

–xx–

Cloud latency forces AI workloads to the edge: Many of the capabilities OpenAI showcased at the GPT-4o event are now available on any smartphone via the ChatGPT app. So how would iPhones differentiate? We think the OpenAI demo of real-time conversational AI offers a possible hint. The iPhone 15 Pro running the demo was connected over wired ethernet rather than the more convenient WiFi. We infer from this choice of connectivity that the conversational AI workload was not running on-device; rather, it was running in the cloud, presumably on NVDA infrastructure. For current-generation mobile devices, we are aware that latency is an issue for real-time conversations with an audio AI agent – not just when only a cellular network is available, but even on typical enterprise WiFi networks.

Mobile devices can hardly be expected to be hooked up to wired ethernet all the time. One way to reduce latency is to perform AI tasks, including real-time audio AI, as much as possible on the local client device. This requires beefed-up neural engines on the device. We are sure to hear a lot more about the TOPS (trillions of operations per second) metric. AAPL disclosed at the iPad Pro launch event that the NPU on the M4 chip delivers 38 TOPS.
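To make the latency argument concrete, a back-of-the-envelope comparison of one conversational turn over different links. All figures below are our illustrative assumptions, not measured values:

```python
# Illustrative latency budget for one conversational AI turn.
# All numbers are assumptions for the sake of the sketch, not measurements.

def cloud_turn_ms(network_rtt_ms: float, server_inference_ms: float) -> float:
    """Client -> cloud -> client: pay the network round trip plus server inference."""
    return network_rtt_ms + server_inference_ms

def on_device_turn_ms(device_inference_ms: float) -> float:
    """Everything stays on the local NPU: no network hop at all."""
    return device_inference_ms

# Assumed figures: wired ethernet ~10 ms RTT, loaded enterprise WiFi ~80 ms,
# cellular ~150 ms; server-side inference ~120 ms; slower on-device ~200 ms.
for label, rtt in [("ethernet", 10), ("wifi", 80), ("cellular", 150)]:
    print(f"cloud via {label}: {cloud_turn_ms(rtt, 120):.0f} ms")
print(f"on-device: {on_device_turn_ms(200):.0f} ms")
```

Under these assumed numbers, the cloud path wins only on a wired link (130 ms vs 200 ms on-device); over WiFi or cellular the round trip erases the server's inference advantage, which is consistent with the wired-ethernet choice in the demo.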

On-device Small Language Models: On-device workloads on mobile devices are necessarily based on SLMs. Even as it partners with third-party LLM vendors, we think it is only natural for the orchestration of on-device workloads to be controlled by AAPL’s proprietary Small Language Models. Such models then need to work with third-party models from OpenAI and elsewhere and, when required, help partition AI tasks to the cloud. We need to hear the state of AAPL’s SLMs. Google provided considerable detail at its recent Gemini event on how it hands off seamlessly between cloud-based tasks and remote client devices using a spectrum of models – from the large Gemini Pro to the smaller Gemma and Nano models (link). We need to hear a similar narrative from AAPL.
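The partitioning described above can be sketched in a few lines. The routing rule, threshold, and labels below are our illustrative assumptions, not a disclosed AAPL design:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    est_tokens: int        # rough size of the job
    needs_web_data: bool   # e.g. internet search, large image/video databases

# Illustrative threshold (assumed): jobs above this size go to the cloud LLM.
ON_DEVICE_TOKEN_LIMIT = 2048

def route(task: Task) -> str:
    """Decide where a task runs: proprietary on-device SLM vs third-party cloud LLM."""
    if task.needs_web_data or task.est_tokens > ON_DEVICE_TOKEN_LIMIT:
        return "cloud-llm"       # e.g. a partner model in the cloud
    return "on-device-slm"       # small proprietary model on the local NPU

print(route(Task("summarize this email", 300, False)))          # on-device-slm
print(route(Task("research this topic on the web", 300, True))) # cloud-llm
```

The point of the sketch is that the router itself – the small model deciding what stays local – is the piece AAPL would control even while third parties supply the large cloud models.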

Crucial questions for AAPL – Cloud-based AI workloads: Tasks that draw on internet data or giant databases of images and videos necessarily require cloud access and cloud-based LLMs. For cloud-based tasks, Google plans to use its Gemini Pro models running on its internal silicon.

What is the equivalent at AAPL? Will AAPL’s AI cloud workloads use OpenAI LLMs running on NVDA servers hosted by Microsoft Azure? Alternatively, is there a future for Apple Silicon in the cloud? If AAPL plans to have a long-term relationship with OpenAI, would GPT-4o be ported to Apple Silicon? Where does MSFT feature in the mix of business relationships?

Net/Net: While the Street appears focused on client device-based AI applications, we think the real question relates to AAPL’s cloud strategy. On this hinges company valuation. As stated above, we are buyers on weakness into any post-event profit-taking, conditional on tight LLM integration with AAPL o/s and native applications. The company needs to overcome its historical coyness and become more forthcoming about its strategy for cloud-based Gen AI workloads.
