Apple's Siri Gets a Gemini Brain in iOS 26.4 Overhaul
Apple is fundamentally rebuilding Siri with Google's Gemini AI models in iOS 26.4, introducing on-screen context awareness and multi-step actions — the biggest change to Siri since its 2011 debut. Some features may slip to later updates amid reported technical delays.
A New Brain for an Old Assistant
For the first time in fifteen years, Siri is getting a genuine architectural overhaul. Apple confirmed in January 2026 that its long-maligned voice assistant will be rebuilt on top of Google's Gemini AI models, with Gemini handling the complex reasoning tasks that Apple's own on-device systems have struggled to manage. The update, expected to ship with iOS 26.4 in spring 2026, represents the most significant change to Siri since it launched on the iPhone 4S in 2011.
What's Actually Changing
The headline feature is on-screen awareness. The rebuilt Siri will be able to read and reference whatever content is currently displayed on the device — making it, for the first time, a genuinely context-sensitive assistant. If a flight confirmation email is open, Siri can automatically add the trip to Calendar and set departure reminders. If a restaurant page is showing in Safari, Siri can initiate a reservation without the user copying a name or address.
Beyond screen awareness, the new Siri gains the ability to chain multiple actions from a single request and sustain natural, multi-turn conversations — moving away from the isolated, one-shot commands that have defined (and limited) Siri's usefulness since launch. According to MacRumors, Gemini handles Siri's summarizer and planner functions, while Apple retains full control over the user interface and data routing.
The Privacy Architecture
Apple and Google have built a deliberate privacy buffer into the system. As Apple explained in January, simple queries remain on-device, moderately complex ones route through Apple's Private Cloud Compute servers, and only the most demanding reasoning tasks reach Google's infrastructure — with Apple acting as a privacy proxy that strips user identity before any data leaves its control.
CNBC reported that Apple is paying roughly $1 billion per year for access to Gemini's capabilities — a striking figure that underscores how seriously the company is taking the competitive pressure from ChatGPT and Google's own Gemini app.
Delays Cloud the Rollout
The path to launch has not been smooth. In February, both 9to5Mac and MacRumors reported that several Gemini-powered features are being pushed beyond iOS 26.4 into iOS 26.5 (expected in May) and possibly iOS 27 in the fall. Internal testers flagged response latency, mid-sentence cutoffs, and an unusual fallback where Siri was routing some queries to ChatGPT instead of the Gemini pipeline Apple had contracted for. Deep personal-data access — such as searching old messages — also reportedly remains unstable.
Apple's iOS 26.4 beta launched in February without any of the new Siri features visible, suggesting the public rollout may be more gradual than initially announced.
A Strategic Pivot — and a Competitive Admission
The Google partnership signals something significant: Apple is acknowledging that building frontier AI reasoning models in-house, at Siri's required scale, is currently beyond its reach — or at least its timeline. The company that built its brand on vertical integration is outsourcing one of its most visible user-facing features to its oldest search rival.
The full vision — including ChatGPT-level multi-app workflows and deep personalization — is now expected to arrive with iOS 27 in September 2026. Spring's update, whenever it lands, will be the opening act of what Apple is calling a two-phase transformation of its AI strategy.