The Information has revealed new details about the next stage of development for the Siri voice assistant, which will be based on Google's Gemini AI model.

According to the publication's sources, Apple and Google have agreed on deep customization of the model: Google will tailor Gemini to Apple's requirements, while the Cupertino team will retain the ability to fine-tune the AI to its own quality standards. Furthermore, Gemini will not be mentioned anywhere in the iOS interface; unlike the current ChatGPT integration, the underlying technology will remain "behind the scenes."
According to the journalists, the updated Siri will gain advanced knowledge of general topics without redirecting users to the browser, will learn to provide basic emotional support, and will perform practical tasks, from booking tickets to taking notes.
The assistant will also become more autonomous in interpreting ambiguous requests: for example, it will be able to analyze message history to correctly identify a recipient, even if that person is saved in contacts under a different name.
The rollout of the new AI capabilities will be gradual. Some features are expected to launch in the spring with the release of iOS 26.4, while more advanced scenarios, including contextual memory and proactive suggestions, are expected to debut in the summer at WWDC 2026. Among the latter are prompts such as a recommendation to leave for the airport earlier, taking traffic conditions into account.