Gemini 2.5 Pro beats this popular 29-year-old game, and even Sundar Pichai is impressed
Google's Gemini 2.5 Pro, billed as the company's smartest model yet, has completed Pokémon Blue. CEO Sundar Pichai praised the win, saying, "What a finish!"

Google launched Gemini 2.5 Pro a month ago and claimed that it is the "most intelligent AI model" to date. During the launch, the tech giant highlighted that the model is much better than its competition, including OpenAI's o3 model, DeepSeek R1, Claude and more. While the benchmarks (provided by Google) make its case, a recent win against a 29-year-old video game, Pokémon Blue, has added another feather to its cap. Since these are only claims from Google, we wanted to see how good the model really is, and here is what we found. But before you read about our experience, the question is: why is winning against a video game a milestone for an AI model? Let's find out.
Gemini 2.5 Pro finishes Pokémon Blue
For reference, Pokémon Blue (released in 1996) is known for its complex gameplay mechanics, strategic battles and open-world exploration, elements that pose significant challenges for AI systems. To perform well in the game, an AI must demonstrate capabilities such as long-term planning, goal management and visual navigation, core proficiencies in the pursuit of artificial general intelligence. Now that Gemini 2.5 Pro has conquered the complexities of the game, the AI model has lived up to its title of "most intelligent model".
Reacting to this victory, CEO Sundar Pichai said on X (formerly Twitter), "What a finish! Gemini 2.5 Pro just completed Pokémon Blue!"

To clarify, the Gemini Plays Pokémon livestream was not launched by Google, but by "Joel Z", a 30-year-old software engineer unaffiliated with Google. Nevertheless, Google officials have shown enthusiastic support for the project. Logan Kilpatrick, product lead for Google AI Studio, shared an update last month noting that Gemini "was making a lot of progress in completing Pokémon" and had earned "its 5th badge (the next best model has only 3 so far, though that was earned by a separate agent)".
During the launch, Google highlighted that one of the standout improvements in this model lies in its enhanced coding abilities, described as "a big leap over 2.0" with "more improvements to come". According to Google, "2.5 Pro excels at creating visually compelling web apps and agentic code applications, along with code transformation and editing."
On SWE-Bench Verified, the recognized industry benchmark for agentic coding, Gemini 2.5 Pro delivered a strong performance, scoring 63.8 percent with a custom agent setup on complex software engineering tasks. Speaking of comparisons, Anthropic's Claude AI model is also in the race to beat another Pokémon version, Red, but it has not succeeded yet.
In February, Anthropic showcased its Claude AI models playing Pokémon Red, noting that Claude's "extended thinking and agent training" gave it "a major boost" when dealing with "more unexpected" tasks, such as playing a classic video game. While Claude has made remarkable progress, it is yet to complete Pokémon Red.
As impressive as it may be, Gemini's performance does not yet indicate true general intelligence. The developer still lends a hand from time to time, whether to fix bugs or to restrict certain in-game actions. That said, no direct walkthroughs or step-by-step guidance are provided, apart from one known case involving a known bug.
It is still an open question whether Gemini could manage the same achievement entirely on its own. Nevertheless, its ability to navigate a game as complex as Pokémon Blue, even with some support, reflects the remarkable capability of large language models when deployed within a carefully orchestrated environment.