September’s Biggest Google AI Updates: A User-Friendly Overview

Introduction
September was a significant month for Google AI, with new features spanning Chrome, Search, and the Gemini app, plus advances in robotics and learning tools. Below is a straightforward guide to what changed, why it matters, and how you can start experimenting with these features today.
What’s New in Chrome with Gemini
Gemini now serves as an AI browsing assistant in Chrome, capable of answering questions and providing context from your current page or across multiple tabs. You’ll notice a small Gemini icon in the top right corner—click it to receive quick explanations, summaries, comparisons, or to jump to a specific moment in a YouTube video without the hassle of scrubbing through.
Some key features include:
– Ask complex questions directly from Chrome’s address bar using AI Mode.
– Compare information across tabs and retrieve items from your browser history using natural language.
– Enhanced safety with AI-driven scam detection and streamlined password recovery.
If you’re in the U.S. using a desktop, the rollout has already begun. Reports from independent outlets emphasize the agent-like functionalities, omnibox integration, and security enhancements aimed at identifying fake alerts and phishing attempts.
Search: A More Visual, Conversational Experience
Visual Exploration in AI Mode
AI Mode in Search now has a deeper understanding of your questions and the visuals you provide, making it easier to go from a vague idea to concrete results. You can describe the aesthetic you’re after, upload inspiration photos, or ask follow-up questions to refine the results further. Shopping queries have also become more conversational, drawing on Google’s extensive product graph.
Search Live: Combining Voice and Camera
Search Live introduces a real-time conversational feature in the Google app. Tap the Live icon to interact with AI Mode using voice, and activate the camera to receive hands-free assistance while troubleshooting, exploring new areas, or learning new skills. Google has also published a how-to post and launch guide with practical tips to help you get started.
Language Expansion
AI Mode is rolling out to more users globally, including new language support for Hindi, Indonesian, Japanese, Korean, Brazilian Portuguese, and more. The goal is not only translation but also to provide a nuanced understanding of local information.
Gemini App: Creativity, Collaboration, and Enhanced No-Code Tools
The latest Gemini Drop includes several exciting upgrades:
– Nano Banana Image Model: DeepMind’s revamped image generation and editing model, simplifying the process of making complex, consistent edits. It has been well-received by users and developers alike.
– Shareable Gems: You can now share your custom Gems (personalized AI helpers) with others, similar to sharing Google Docs. This feature is ideal for creating reusable playbooks like vacation guides, team checklists, and meal plans.
– Canvas App Builder: With Canvas, you can build web apps without any coding and visually edit elements by clicking and describing your desired changes.
These updates transform the Gemini app into both a creative studio and a collaborative platform, especially when used alongside other connected Google apps. Reports indicate that Nano Banana has driven a significant increase in downloads and usage.
New Features on Android: Enhanced Writing, Audio Sharing, and File Transfers
Android has received several practical features that users will find beneficial:
– AI Writing Tools in Gboard: Refine tone, spelling, and grammar directly on your device.
– Audio Sharing: Connect two pairs of Bluetooth LE Audio headphones to the same phone and listen together.
– Private Audio Broadcasts via QR Code: Let others join your audio stream silently by scanning a code.
– Redesigned Quick Share: Offers previews and real-time progress updates.
DeepMind’s Robotics Advancements: Bridging Reasoning and Physical Actions
Google DeepMind unveiled Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, a two-model system aimed at bringing AI agents into the physical world. Simply put, ER 1.5 is an embodied-reasoning model that handles high-level planning, while Robotics 1.5 is a vision-language-action (VLA) model that translates vision and instructions into motor commands. This setup enables robots to plan ahead, handle multi-step tasks, and transfer learned skills across different robot bodies.
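As a rough mental model, the division of labor between the two models can be sketched as toy code. Everything here (the class names, the hard-coded plan, the command format) is a hypothetical illustration of the planner/executor split described above, not DeepMind’s actual system or API:

```python
# Conceptual sketch only: a planner model decomposes a goal into steps,
# and a separate controller model turns each step into a motor command.
# All names and the hard-coded plan are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class MotorCommand:
    action: str
    target: str


class TaskPlanner:
    """Stands in for the ER-style model: turns a goal into ordered sub-steps."""

    def plan(self, goal: str) -> list[str]:
        # A real embodied-reasoning model would ground these steps in camera
        # input; here we hard-code one plausible decomposition.
        if goal == "sort the laundry":
            return ["pick up item", "classify by color", "place in matching bin"]
        return [goal]


class MotorController:
    """Stands in for the VLA-style model: maps each sub-step to a motor command."""

    def execute(self, step: str) -> MotorCommand:
        verb, _, rest = step.partition(" ")
        return MotorCommand(action=verb, target=rest)


def run(goal: str) -> list[MotorCommand]:
    """Chain the two models: plan first, then execute each step in order."""
    planner, controller = TaskPlanner(), MotorController()
    return [controller.execute(step) for step in planner.plan(goal)]
```

The point of the split is that the planner can reason about the whole task before any motion happens, while the controller only ever sees one concrete step at a time.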
Why It Matters
Transitioning from screen-only AI to tangible agents opens doors to real-world applications like organizing, sorting, and assisting in semi-structured environments. Early demonstrations were showcased at the CoRL and Humanoids 2025 conferences.
NotebookLM: Evolving into an Active Learning Partner
NotebookLM is now more than just a summarization tool. New features empower you to actively engage with your learning materials:
– Auto-Generate Flashcards and Quizzes: Based on your notes and readings.
– Create Richer Reports: In formats like study guides and blog posts.
– Learning Guide with Step-by-Step Help: Plus, Audio Overviews in multiple languages.
The aim is to bridge the gap between reading and understanding, providing tools that support recall, reflection, and explanation across various subjects.
Guided Learning in Gemini
A new mode called Guided Learning acts as an interactive study partner, breaking problems into manageable steps and adapting explanations with visuals and examples. This mode is powered by LearnLM, a Google initiative that incorporates learning science into AI. Guides and press coverage further detail its functionalities and how it differs from traditional quick-answer bots.
AI Literacy for Families and Schools
Google launched a dedicated hub containing AI literacy resources for parents, students, and educators. This includes the podcast “Raising Kids in the Age of AI,” the Be Internet Awesome AI Literacy curriculum, and classroom-ready AI Quests for middle schoolers, developed in collaboration with Stanford’s Accelerator for Learning. The company reports that it has trained over 650,000 educators and allocated $40 million in grants to enhance AI literacy programs.
Commitments to U.S. AI Education
At a recent White House event focused on AI education, Google pledged significant support, offering Gemini for Education to all American high schools and committing $1 billion over three years towards education and job training initiatives, including $150 million in grants and expanded college programs.
A Research Milestone: Competitive Programming
DeepMind announced that Gemini 2.5 Deep Think achieved gold-medal level performance in the 2025 ICPC World Finals, extending its accolades from a previous gold at the International Mathematical Olympiad. This achievement highlights rapid advancements in long-form reasoning and algorithmic problem-solving capabilities.
How These Updates Fit Together
- Chrome now features Gemini and Search integrates AI Mode to create a more conversational web experience.
- The Gemini app serves both as a creative studio and a personal toolkit that can be shared.
- Android introduces useful enhancements for writing, sharing, and file transfers.
- DeepMind’s robotics research indicates progress towards safe, AI-driven actions in physical environments.
- NotebookLM and Guided Learning shift AI from merely providing answers to supporting comprehensive understanding.
- Educational initiatives aim to ensure responsible use of these tools among students and educators.
Quick Start: Try the Updates
- Chrome: Click the Gemini icon or use AI Mode from the address bar for more detailed questions.
- Google App: Tap Live to engage with Search and enable the camera for real-time assistance when available.
- Gemini: Experiment with Nano Banana on a photo, create a Gem for a recurring task, or develop an app in Canvas.
- Android: Explore AI writing tools in Gboard and try out the new audio sharing and Quick Share enhancements.
FAQs
What is AI Mode in Search, and how does it differ from regular Google Search?
AI Mode is a conversational, multimodal layer for Search that utilizes a custom Gemini model to handle longer, more intricate questions, show visual results, and support interactive Live sessions with voice and camera. Regular Search remains available, while AI Mode simply provides additional ways to explore.
What is Nano Banana in the Gemini app?
Nano Banana is the enhanced image generation and editing model within Gemini, designed for precise, consistent edits. It has quickly gained popularity since its release in August.
What are Gems, and why would I share them?
Gems are custom configurations of Gemini tailored for specific tasks, like a study assistant or content generator. Sharing allows teams or family members to reuse the same setup instead of recreating prompts from scratch, with permission settings similar to Google Drive.
How is Google using AI to improve online safety?
Chrome now employs AI to identify scam patterns and streamline password fixes on supported sites. Additionally, Google has implemented broader safety measures and literacy initiatives across its products for younger users.
What does DeepMind’s robotics work mean for everyday users?
While this is still primarily research, the approach of pairing a vision-language-action (VLA) model with an embodied-reasoning (ER) planner points toward AI that can safely plan and act in complex environments, potentially aiding in homes, warehouses, and laboratories.
Conclusion
September’s updates signal a clear intention to integrate AI more seamlessly into your daily activities, whether you’re browsing, searching, creating, learning, or collaborating with others. The technology is becoming increasingly conversational, visual, and collaborative. If you’re going to try just one feature, make it Live interaction in Search for real-time help, or Nano Banana in Gemini for sophisticated image edits. Together, these experiences illustrate how swiftly AI is evolving into a daily companion rather than a distant tool.
Thank You for Reading this Blog and See You Soon! 🙏 👋
Let's connect 🚀