Google adds AI-generated summaries to Discover
Google is bringing AI-generated summaries to Discover, the personalized feed of articles and videos in the Google app that is curated around a user's interests. The search giant remains undeterred by the imperfect nature of AI Overviews, or by what the change might mean for the publishers whose content makes up much of Google's search results.
TechCrunch has reported that some Android and iOS users in the US are seeing cards with AI-generated summaries on their Discover page. Each card displays news outlets' logos in its top-left corner alongside a snippet, presumably generated from the headlines or bodies of those publishers' articles. When users tap "see more," the card expands to show all the articles that contributed to the summary. Each summary card carries a warning that it was generated by AI, which, it notes, "can make mistakes."
A Google spokesperson told TechCrunch that this is a US launch of a new feature, not a test. The feature will first focus on trending lifestyle topics like sports and entertainment. Speaking with TechCrunch, Google claimed the summaries would make it easier for people to decide which pages they want to visit, though publishers are already vocal about Google's AI tools tanking clickthrough traffic. Some estimates say as many as 64 percent of searches that return AI Overviews end without a click.
Google has been aggressively rolling out AI-powered features. Tools like AI Overviews, AI Mode in Search and AI-generated video summaries represent, in part, Google's determination to maintain its user base in the face of would-be search-engine replacements like ChatGPT.
Google has not said how quickly the new feature will roll out.
Google expands AI Mode with Gemini 2.5 Pro, Deep Search, and agentic phone calls
Google is continuing to double down on AI Mode, bringing more features to its dedicated Search chatbot. Today, the company is adding the Gemini 2.5 Pro model and the Deep Search capability to AI Mode, where they will be available to Google AI Pro and Google AI Ultra subscribers. Both tools can still be accessed through other means, but incorporating them into the chatbot points to an end goal of making AI Mode the primary way people engage with the company's signature search service.
These developments are follow-ups to announcements made during Google's I/O conference this spring: AI Mode began rolling out to all Google users in May, and Deep Search was also announced at that time.
Outside of AI Mode, Search is also getting a small update. Another AI tool teased at I/O was the ability for Gemini to place phone calls using Project Astra. This agentic option is now arriving, albeit in limited form. For starters, it will only be able to contact local businesses, and its conversations will be limited to asking about availability and pricing. When a person searches for companies or services, they may see an option such as "Have AI check prices" that will initiate a call to that business. These AI phone calls are rolling out today to all Search users, but Google AI Pro and AI Ultra subscribers will have higher usage limits.
Adobe adds new AI-powered sound effects to Firefly videos
Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near-monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.
To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect.
In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model used to faithfully reproduce the effect at the intended volume and timing. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation, as Adobe intends, that may not matter. A timeline editor along the bottom of the interface makes it easy to time the audio properly when adding sound effects.