Google has just announced (but not released) Gemini 1.5, an update to its flagship language model — the model used in the chatbot once known as Bard, but synergistically renamed Gemini a week ago.
The big claim with this release is “a breakthrough in long-context understanding across modalities.” It’s also meant to be a step up in efficiency: it’s built on an architecture known as “Mixture-of-Experts” (MoE), which supposedly delivers performance akin to Gemini 1.0 while relying on fewer electricity-hungry GPUs cranking away to achieve it.
That first big claim about multi-modal “long-context” understanding is as jargony as it sounds, but a Google DeepMind co-founder posted a demo on X meant to show what this means in practice.
Smartly using a large piece of public-domain text that won’t rankle any copyright sticklers — in this case a 402-page transcript of Apollo 11, the NASA mission that landed on the moon — the LLM is able to narrow its focus to what the user needs (“context”) in spite of the prompt being absolutely gigantic (“long”). So apparently that’s what “long-context” means.
In the demo, Gemini 1.5 is able to pick out three amusing moments from the novel-length piece of text. It’s also able to spot the event in the transcript that matches a picture of a lunar boot print — the part where, y’know, Neil Armstrong walks on the moon — which elucidates what “multimodal” is intended to mean in this context: an image recognition model working hand-in-hand with the LLM.
This upgrade is part of an ongoing effort to keep Google in the AI conversation after OpenAI ate everyone’s AI lunch in 2022 by releasing ChatGPT. Late last year, Google began seriously hyping changes to come with Bard and the model powering it, which remains an also-ran large language model, better known for being shoehorned into popular Google and Android products than for being used like ChatGPT to solve day-to-day problems and blow minds at cocktail parties. In particular, a December 2023 research paper touted a version of Gemini that had exceeded the performance of OpenAI’s GPT-4 model in certain cases, and become the first LLM to get a passing score on a specific AI test of “Massive Multitask Language Understanding,” or MMLU.
Among other assertions about Gemini 1.5, Google says the new model can crunch large datasets with impressive accuracy, and — in a somewhat more eyebrow-raising claim — perform well at reasoning across all sorts of data types. Reasoning is perhaps the most famous weakness of LLMs.
According to CEO Sundar Pichai, Google is releasing Gemini 1.5 to a limited group. “We’re excited to offer a limited preview of this experimental feature to developers and enterprise customers,” Pichai wrote in Google’s blog post.
The broader base of Gemini users will be the ultimate judges of Google’s performance claims when they’re actually permitted to try out Gemini 1.5 as part of an officially released product. Google’s most powerful model, Gemini Ultra, was released a week ago, so it may be a while, and it’s probably safe to assume that Gemini 1.5 will one day be part of Google’s new premium — in other words, “paid” — package of Workspace products called Google One AI Premium.