Apple Inc.
What Happened: Apple's AI researchers have developed a model that converts contextual information, such as on-screen content, into text that Large Language Models (LLMs) can parse more easily. Apple could leverage this to enhance Siri, its virtual assistant.
The research paper, titled "ReALM: Reference Resolution As Language Modeling," details a method for resolving ambiguous references, such as "this" or "that," using LLMs.
Apple's approach involves converting all contextual information into text, which allows for more efficient parsing.
The smallest ReALM models performed similarly to GPT-4, but with fewer parameters, making them better suited for on-device use. Increasing the parameters in ReALM led to a significant improvement in performance over GPT-4.
One reason for this performance boost is GPT-4's reliance on image parsing to understand on-screen information. Apple's method, which represents on-screen content as text instead, eliminates the need for image-recognition parameters, making the model smaller and more efficient.
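The core idea can be illustrated with a short sketch. This is not Apple's actual implementation; the entity format, field names, and prompt wording below are hypothetical, chosen only to show how on-screen items might be rendered as plain text so a text-only LLM can resolve a reference like "this number."

```python
# Illustrative sketch (hypothetical, not Apple's code): serialize on-screen
# entities into tagged lines of text, then combine them with the user's
# request into a single prompt for a text-only LLM.

def screen_to_text(entities):
    """Render on-screen entities as tagged text lines.

    `entities` is a list of dicts with hypothetical keys
    'id', 'type', and 'value'.
    """
    return "\n".join(
        f"[{e['id']}] {e['type']}: {e['value']}" for e in entities
    )

def build_prompt(entities, user_query):
    """Combine the textual screen representation with the user's request."""
    return (
        "On-screen entities:\n"
        f"{screen_to_text(entities)}\n\n"
        f"User request: {user_query}\n"
        "Which entity ID does the request refer to?"
    )

entities = [
    {"id": 1, "type": "phone_number", "value": "555-0100"},
    {"id": 2, "type": "address", "value": "1 Infinite Loop"},
]
print(build_prompt(entities, "Call this number"))
```

Because everything the model sees is text, no vision parameters are needed, which is why a much smaller model can compete with GPT-4 on this task.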
Why It Matters: Apple is gearing up to unveil its comprehensive AI strategy at WWDC 2024. Rumors suggest that the company will utilize smaller on-device models to ensure privacy and security, while also licensing LLMs from other companies for off-device processing.
The upcoming iOS 18 update is also expected to bring major AI upgrades.
Apple has been making strategic moves to enhance its AI capabilities, including the acquisition of Canadian AI startup DarwinAI and discussions with Google to license Google's Gemini AI models for future iPhones.
Analysts predict that Apple's AI initiatives could be a $33 billion-a-year opportunity.
The upcoming WWDC is expected to mark the biggest change in Apple's operating system design since the 1980s, with AI being a key focus.
Price Action: Apple's stock closed 0.45% lower on Monday at $170.03, according to Benzinga Pro.