Apple punts on Siri updates as it struggles to keep up in the AI race

Apple’s WWDC 2025 had new software, Formula 1 references, and a piano man crooning the text of different app reviews. But one key feature got the short end of the stick: Siri.

Although the company repeatedly referenced Apple Intelligence and pushed new features like live translation for Messages, FaceTime, and phone calls, Apple’s AI assistant was barely mentioned. In fact, the most attention Siri got was when Apple explained that some of its previously promised features were running behind schedule.

To address what many saw as the elephant in the room, Apple briefly noted during the keynote that it had updated Siri to be “more natural and more helpful,” but that personalization features were still on the horizon. Those features were first promised at last year’s WWDC, with a rollout timeline of “over the course of the next year.”

“We’re continuing our work to deliver the features that make Siri even more personal,” Craig Federighi, Apple’s SVP of software engineering, said during Monday’s keynote. “This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year.”

Apple’s relative silence on Siri stands out

Apple has long been criticized for the shortcomings of Apple Intelligence and for falling behind competitors like OpenAI, Anthropic, and Google in the race to build generative AI apps and services. Apple’s AI models are rarely mentioned in the same breath as its competitors’ when power users discuss real-world benefits, let alone more advanced AI agent capabilities.

The company’s relative silence on a personalized Siri this year stood out for a handful of reasons. For one, there’s Apple’s own marketing push: last year, it ran TV ads for a revamped Siri showing features that still haven’t arrived.

Then there are Apple’s competitors. Google and Microsoft are both pushing hard on AI and moving rapidly to integrate it into their operating systems. Ahead of Google’s I/O conference last month, for example, Android users were the first to get free access to a live Gemini feature that lets the AI assistant see and respond to images and items on their screens. And at its Build conference last month, Microsoft announced AI shortcuts in Windows 11’s File Explorer that let users click on a file and immediately see suggestions like blurring parts of a photo or summarizing its contents.

Meanwhile, Apple Intelligence’s stumbles have given people plenty to poke fun at in the months since its rollout. The feature’s rocky debut included notification summaries so off the mark that the company disabled them for some app categories after the BBC reported the tool was conflating multiple headlines into inaccurate synopses.

The company’s strategy on Monday was to roll out a wide swath of small, functional updates powered by Apple Intelligence — and partly by ChatGPT — that could help it catch up to competitors on translation and search. Apple’s Image Playground now integrates with OpenAI’s technology, and users can tap into ChatGPT to render a friend’s photo in the style of an oil painting or other art styles. Apple gave developers access to the on-device large language model behind Apple Intelligence, and it also debuted live translation features that let users translate between languages in Messages, FaceTime, and phone calls.

A photo of live translation on a FaceTime call in iOS 26.
Photo by Allison Johnson / The Verge
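For developers, that model access arrives as a new Swift framework. As a minimal sketch of what a call can look like, assuming the Foundation Models API Apple previewed at WWDC (the summarize helper, its instructions, and the error type here are illustrative, not Apple’s sample code):

```swift
import FoundationModels

enum SummaryError: Error { case modelUnavailable }

// Illustrative helper: ask the on-device Apple Intelligence model
// to summarize a piece of text.
func summarize(_ text: String) async throws -> String {
    // The system model isn't always ready — it can be unavailable on
    // unsupported hardware, with Apple Intelligence turned off, or
    // while model assets are still downloading.
    guard case .available = SystemLanguageModel.default.availability else {
        throw SummaryError.modelUnavailable
    }

    // A session carries the instructions and conversation context.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

Because the model runs on-device, calls like this work offline and don’t send user text to a server, which is the trade-off Apple is betting on versus cloud-first rivals.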

At WWDC, Apple also pushed visual intelligence features aimed at allowing users to “search and take action on anything they’re viewing” across different apps. “Users can ask ChatGPT questions about what they’re looking at on their screen to learn more, as well as search Google, Etsy, or other supported apps to find similar images and products,” according to Apple.

Some had been waiting for Apple to use WWDC as an opportunity to announce it was expanding its AI options for iOS beyond ChatGPT — for instance, allowing Siri to tap into Google’s Gemini for complex user queries — but that didn’t happen this time around.

Last June, during a live session after an Apple keynote, Federighi mentioned that he hopes Apple Intelligence will eventually allow users “to choose the models they want,” specifically name-dropping Gemini. One of Apple’s backend updates in February hinted at a Gemini integration, and in April, during Google’s search monopoly trial proceedings, Google CEO Sundar Pichai said the company plans to ink a deal with Apple by mid-2025, with a rollout by the end of this year. Everyone’s still waiting for that.