Google I/O 2025: Gemini is everywhere as Google doubles down on AI, brings a new Ultra plan
Google’s ambitious vision for Gemini goes beyond being just another chatbot. It wants Gemini to become a “world model” – one that is eventually capable of planning, imagining new experiences, and simulating aspects of the world, much like the human brain.

Google I/O 2025 is officially underway and, as was widely expected, the opening day was all about AI (short for artificial intelligence), with CEO Sundar Pichai making it abundantly clear – to critics, investors and fans alike – that Google is not behind, but all in on the defining technology of the moment. The writing was on the wall when Google held a separate Android-focused event ahead of I/O, so that it could dwell on its AI progress at great length during the annual developer conference – and show it off.
As it turned out, Google had a packed slate with some expected and some surprising announcements in store. Pichai set the tone early when he said, “All this means that we are in a new phase of the AI platform shift, where it is becoming a reality for people, businesses and communities around the world.” From upgrading its foundational models to integrating AI into almost every product and service, Google presented a future where AI is not just a feature, but an omnipresent, intelligent assistant designed to make life easier, more creative and more productive.
Gemini is everywhere
Updated Gemini 2.5 models are at the centre of Google’s AI push. To start with, Google is introducing Deep Think in Gemini 2.5 Pro, billed as an enhanced reasoning mode built to tackle highly complex mathematical and coding challenges. Google says it can weigh multiple hypotheses before arriving at a solution, underlining the depth of its reasoning.
Both 2.5 Pro and Flash are gaining new capabilities, including native audio output so conversations sound more natural, along with strengthened security safeguards against threats such as indirect prompt injections. They are also getting Project Mariner’s computer use capabilities.
That said, Google’s ambitious vision for Gemini goes beyond being just another chatbot. It wants Gemini to become a “world model” – one that is eventually capable of planning, imagining new experiences, and simulating aspects of the world, much like the human brain. To that end, it is integrating live capabilities from its research prototype Project Astra into the Gemini app to enable real-time visual assistance through your phone’s camera. The system has also been improved to detect emotion in a user’s voice and respond appropriately, while ignoring background conversations for a smoother experience.
Another major component is Project Mariner, a research prototype exploring the future of human-agent interaction, particularly within web browsers. Project Mariner now supports a system of agents that can complete up to ten different tasks simultaneously, according to Google. Use cases range from researching information to making bookings and shopping.
In line with the broader trend, Google is also rolling out new generative AI capabilities for media, such as photo and video editing. The latest Veo 3 video generation model can produce videos with native audio, while the older Veo 2 is getting a bunch of new features including reference-powered videos, precise camera controls, outpainting, and the ability to intelligently add or remove objects. If that wasn’t enough, Google also went ahead and unveiled an AI filmmaking tool called Flow.
Importantly, Google emphasised its commitment to responsible AI development, sharing that its watermarking technology, called SynthID, has already watermarked 10 billion AI-generated images, videos, audio clips and texts. Adding to that, it is also launching a SynthID Detector verification portal to help people identify AI-generated content.
In Google Workspace, AI now powers more than 2 billion assists monthly, Google claims. And so, it is putting even more AI into some of its most popular products. Gmail is getting more personalised smart replies that pick up on context and tone. Google Meet is getting speech translation. Google Vids is becoming more accessible by letting users convert Google Slides into videos with AI-generated scripts, voiceovers and animations. Google Docs is becoming more reliable by drawing information only from trusted source documents that users specify.
New Google AI plans
With Google’s AI capabilities expanding this fast, a new subscription tier was inevitable, and so it is announcing two new plans:
Google AI Pro: The plan, previously known as Gemini Advanced, offers a full suite of AI products with higher rate limits and special features. This includes Flow with AI filmmaking capabilities (powered by Veo 2) and early access to Gemini in Chrome. Google is also giving university students in the US, Japan, Brazil, Indonesia and the UK free access to Google AI Pro for a school year.
Google AI Ultra: Google is positioning this as the ultimate “VIP pass to Google AI”, designed for users who want the highest level of access and capabilities. It is priced at $249.99/month, which roughly translates to Rs 21,400 (with a 50 percent introductory offer for the first three months), and provides the highest usage limits for Deep Research and the Veo 2 and Veo 3 models, as well as early access to the upcoming Gemini 2.5 Pro Deep Think mode. Ultra subscribers get the highest limits in Flow, access to Whisk Animate, expanded NotebookLM capacity, Gemini in Workspace apps, Gemini in Chrome, Project Mariner, YouTube Premium, and 30TB of storage. It also promises early access to “Agent Mode”, an experimental capability for handling complex, multi-step tasks with minimal supervision.