Google is working on generative AI soundtracks and dialogue for videos

Everyone knows sound is a critical component of most films and videos. After all, even when films were silent, there was still a musical accompanist letting the audience know how to feel.

This natural law holds for the new crop of generative AI videos, which emerge eerily silent. That's part of why Google has been working on "video-to-audio" technology (V2A), which "makes synchronized audiovisual generation possible." On Monday, Google's AI lab, DeepMind, shared progress on generating such audio, including soundtracks and dialogue that automatically match up with AI-generated videos.

Google has been hard at work developing multimodal generative AI technology to compete with rivals. OpenAI has its AI video generator Sora (yet to be publicly released) and GPT-4o, which creates AI voice responses. Companies like Meta and Suno have been exploring AI-generated audio and music, but pairing audio with video is relatively new. ElevenLabs has a similar tool that matches audio to text prompts, but DeepMind says V2A is different because it doesn’t require text prompts.

V2A can be paired with AI video tools like Google's Veo, or applied to existing archival footage and silent films, to generate soundtracks, sound effects, and even dialogue. It works by using a diffusion model, trained on visual inputs, natural language prompts, and video annotations, to gradually refine random noise into audio that fits the tone and context of a video.
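To make that "refine random noise into audio" idea concrete, here is a minimal, purely illustrative sketch of a video-conditioned diffusion sampling loop. It does not reflect DeepMind's actual V2A architecture; the denoiser, feature shapes, and step count below are all stand-ins invented for illustration.

```python
# Toy sketch of video-conditioned diffusion sampling (not DeepMind's V2A).
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(noisy_audio, video_features, text_features, t):
    """Hypothetical denoiser: returns a slightly cleaner audio latent.

    A real system would use a trained network conditioned on visual input
    and an optional text prompt; here we just nudge the sample toward a
    deterministic function of the conditioning to show the iterative flow.
    """
    target = np.tanh(video_features + text_features)   # pretend "clean" latent
    return noisy_audio + 0.1 * (target - noisy_audio)  # small denoising step

def generate_audio_latent(video_features, text_features=None, steps=50):
    """Start from pure noise and iteratively refine it, conditioned on video."""
    if text_features is None:                            # text prompt is optional
        text_features = np.zeros_like(video_features)
    audio = rng.standard_normal(video_features.shape)    # random noise to start
    for t in reversed(range(steps)):                     # coarse-to-fine refinement
        audio = toy_denoiser(audio, video_features, text_features, t)
    return audio                                         # would then be decoded to a waveform

# Example: fake per-frame visual features for a short clip.
video_features = rng.standard_normal(128)
audio_latent = generate_audio_latent(video_features)
print(audio_latent.shape)
```

The point of the sketch is only the overall shape: the loop starts from random noise and repeatedly denoises it, steered at every step by the video (and, optionally, a text prompt), which is the general pattern diffusion-based audio generation follows.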

Google DeepMind says V2A can "understand raw pixels," so a text prompt isn't strictly required to generate audio, though adding one improves accuracy. The model can also be prompted to make the tone of the audio positive or negative. Along with the announcement, DeepMind released demo videos, including a dark, creepy hallway accompanied by horror music, a lone cowboy at sunset scored to a mellow harmonica tune, and an animated figure talking about its dinner.

V2A will include Google's SynthID watermarking as a safeguard against misuse, and DeepMind's blog post says the feature is currently undergoing testing before it's released to the public.
