Google makes Gemini thinking model available on the app and launches Gemini 2.0 Pro

Google Gemini 2.0 logo on a smartphone that's sitting on top of a laptop keyboard

The Google Gemini app now includes a reasoning model that shows its “thought process.”

On Wednesday, the tech giant announced the availability of Gemini 2.0 Flash Thinking Experimental on Gemini’s desktop and mobile versions. In addition to making its reasoning model, 2.0 Flash Thinking, more accessible to users, Google shared other announcements related to the Gemini family: the launch of Gemini 2.0 Pro, expanded access to Gemini 2.0 Flash in AI Studio and Vertex AI, and a low-cost model called Flash-Lite.

Over at Google, it looks like business as usual as the company advances its AI strategy of incrementally adding capabilities to its Gemini family. But the introduction of Gemini 2.0 Flash-Lite comes at an interesting time given DeepSeek's disruption last week. DeepSeek spooked the AI industry with R1, a model reportedly as capable as competitors' models but built for a fraction of the cost, upending the belief that more money equals better models.

Google previously released a cost-efficient model, 1.5 Flash, and CEO Sundar Pichai downplayed DeepSeek's impact on Google during its Q4 2024 earnings call on Tuesday, saying, "For us, it's always been obvious" that models could become more cost-efficient over time. That said, Google plans to spend $75 billion in capital expenditures, a huge uptick from the $32.3 billion it spent in 2023.

Google’s announcements today seem to cover all the bases of AI development: reasoning models, advanced models, and low-cost models.

Announced in December, 2.0 Flash Thinking rivals OpenAI's o1 and o3-mini reasoning models in that it can work through more complex problems and show how it reasons through them. It debuted in AI Studio and is now broadly available in experimental mode as a dropdown option in the Gemini app.

There’s also a version that integrates with YouTube, Google Maps, and Google Search. With this version, the model might opt to search the web for answers, pull up relevant YouTube links, or query Google Maps instead of relying solely on its own training data.

For example, if you ask 2.0 Flash Thinking without app integration, "How long would it take to walk to China?" it relies on its own knowledge base to reason through the vagueness and the various factors involved. But the version integrated with the aforementioned apps immediately goes to Google Maps for answers (as laid out in its "thinking").

The Gemini 2.0 Pro release is the most noteworthy announcement in terms of technical advancement. According to Google, 2.0 Pro is its most capable model, best for coding and handling complex tasks (not to be confused with 2.0 Flash Thinking, which can also handle complex problems but shows its work). Gemini 2.0 Pro is available as an experimental version to Gemini Advanced subscribers and in AI Studio and Vertex AI.

Gemini 2.0 Flash-Lite has a 1 million token context window and multimodal input, and it runs at the same speed and price as the earlier 1.5 Flash, so it packs a punch for its size. Flash-Lite is available in preview on AI Studio and Vertex AI.
