Consumers and investors are sick of AI hype, and Google knows it.
“There have been so many promises, so many ‘coming soon’s, and not enough real-world helpfulness when it comes to AI,” Google senior VP Rick Osterloh said at the “Made by Google” event that unveiled new Pixel phones in Mountain View Tuesday. “Which is why today, we’re getting real … we’re going to answer the biggest question people have about AI, what can AI do for me?”
OK, so did Google live up to that promise? When you strip the keynote of all the bells and whistles — the celebrity appearances, the jargon about “Tensor Processing Units,” the Pixel phone tech specs, the visions of what Gemini might be able to do in the long run — what was new about the Android experience here? And does any of it qualify as a must-have killer app?
Here’s a complete list of everything the 90-minute event offered in actual functioning demonstrations. Real-world helpfulness, in other words, as opposed to ads or promises.
1. Gemini can check your Google calendar to see if you’re free for a future concert, based on a poster.
Poor Dave Citron. In the keynote’s most awkward moment, this Google product lead had to invoke “the demo spirits” and switch phones before Gemini would actually display an answer to “check my calendar and see if I’m free when she’s coming to San Francisco this year” (“she” being the artist Sabrina Carpenter; Citron had just sent Gemini a photo of her concert poster).
“Sabrina Carpenter is coming to San Francisco on November 9, 2024,” Gemini eventually responded. “I don’t see any events on your calendar during that time.”
AI reading the text in an image and understanding the context isn’t new. The calendar integration is, though, and that’s to Google’s advantage. In theory, Apple Intelligence will do the same thing when it debuts.
Citron’s next demos showed how Gemini could draft a letter to a landlord about a broken AC unit, or a professor about a class — well-trod ground for all AI assistants.
2. Gemini Live offers ‘free-flowing conversation’
Next up, Google VP Jenny Blackburn showed off the Gemini Live voice assistant. She and the assistant had a chat about science experiments her niece and nephew might like, and after some back-and-forth, settled on making invisible ink. The discussion had a conversational flow.
All well and good, except that OpenAI demonstrated its GPT-4o voice assistant, with similarly interruptible conversations, back in May. That feature is currently live for a small group of ChatGPT Plus users, but not for everyone. So Google got there first, we guess?
3. Gemini Nano offers on-device summaries of your phone calls
Here’s a feature that may be less creepy than it sounds: Call Notes, which “follows up on your phone calls with a completely private summary of the conversation.” The privacy claim rests on Gemini Nano, an AI model that runs entirely on the Pixel 9 without requiring cloud access. (The on-device part is not new; Samsung does the same with Galaxy AI.)
4. Screenshots are searchable.
Score one more success for Gemini Nano on what we’re calling the most useful AI feature of 2024.
But after that, we got a lot of visual stuff we’ve seen AI assistants do a dozen times before. To wit: creating a party invite in Pixel Studio, auto-framing in Magic Editor, adding generative AI imagery to your photos, inserting yourself into a family photo or a picture with a celebrity (the new and embarrassingly named “Add Me” feature). Plus stuff that was cute but not AI at all (the “Made You Look” feature that points your child’s attention at the Pixel’s rear-facing screen).
So, will this feature set be enough to reverse the skepticism that has set in around the AI bubble? Don’t count on Gemini to answer that one any time soon.