Click on each video to unmute or mute its generated audio.
Text input: An otter growls and soft wind is blowing.
Text input: A car races down the road and police siren blares.
Text input: A sound of toilet flushing.
Text input: A race car accelerates.
Text input: Electronic music is playing.
Text input: A person plays a flute and an acoustic guitar is strummed.
Text input: Police car siren blares.
Text input: A rooster crows.
Text input: A flute and an acoustic guitar are being played.
Text input: A train horn blares.
Text input: Fire crackles and a sheep bleats.
Recent advances in audio generation have focused on text-to-audio (T2A) and video-to-audio (V2A) tasks. However, neither T2A nor V2A methods can generate holistic sounds (both onscreen and offscreen): T2A cannot generate sounds aligned with onscreen objects, while V2A cannot generate semantically complete audio (offscreen sounds are missing). In this work, we address the task of holistic audio generation: given a video and a text prompt, we aim to generate both onscreen and offscreen sounds that are temporally synchronized with the video and semantically aligned with the text and video. Previous approaches for joint text- and video-to-audio generation often suffer from modality bias, favoring one modality over the other. To overcome this limitation, we introduce VinTAGe, a flow-based transformer model that jointly considers text and video to guide audio generation. Our framework comprises two key components: a Visual-Text Encoder and a Joint VT-SiT model. To reduce modality bias and improve generation quality, we employ pretrained uni-modal text-to-audio and video-to-audio generation models for additional guidance. Due to the lack of appropriate benchmarks, we also introduce VinTAGe-Bench, a dataset of 636 video-text-audio pairs containing both onscreen and offscreen sounds. Our comprehensive experiments on VinTAGe-Bench demonstrate that joint text and visual interaction is necessary for holistic audio generation. Furthermore, VinTAGe achieves state-of-the-art results on the VGGSound benchmark.
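For intuition, the sketch below shows how guidance from pretrained uni-modal T2A and V2A models could be folded into a flow-based sampler of the kind the abstract describes. The function names, latent shape, and blending weights (sample_holistic_audio, joint_vt_sit, t2a_model, v2a_model, w_t2a, w_v2a) are illustrative assumptions, not VinTAGe's actual procedure.

    # Hypothetical sketch: flow-matching sampling with extra guidance from
    # pretrained uni-modal models. Names, shapes, and the blending rule are
    # assumptions for illustration, not the paper's implementation.
    import torch

    @torch.no_grad()
    def sample_holistic_audio(joint_vt_sit, t2a_model, v2a_model,
                              video_emb, text_emb,
                              steps=50, w_t2a=0.2, w_v2a=0.2):
        # Start from Gaussian noise in an assumed audio-latent space.
        x = torch.randn(1, 8, 256)
        dt = 1.0 / steps
        for i in range(steps):
            t = torch.full((1,), i * dt)
            # Velocity predicted by the joint video-text model.
            v_joint = joint_vt_sit(x, t, video_emb, text_emb)
            # Velocities from the pretrained uni-modal teachers.
            v_t2a = t2a_model(x, t, text_emb)
            v_v2a = v2a_model(x, t, video_emb)
            # Convex blend that nudges the joint prediction toward each
            # uni-modal model, countering bias toward either modality.
            v = (1.0 - w_t2a - w_v2a) * v_joint + w_t2a * v_t2a + w_v2a * v_v2a
            # Euler step along the learned flow from noise to data.
            x = x + dt * v
        return x  # decode to a waveform with the model's audio decoder

One design note: with w_t2a = w_v2a = 0 the sampler reduces to the joint model alone, so the weights act purely as an optional corrective pull toward each uni-modal prior.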
We generate soundtracks for silent videos given a text prompt. In the examples below, select a text prompt using the buttons to the right of each video to hear the corresponding generated audio. (Click a button to play the video; click it again to pause.)
Our model generates realistic audio for Sora videos in a zero-shot setting.
@article{kushwaha2024vintage,
  title={VinTAGe: Joint Video and Text Conditioning for Holistic Audio Generation},
  author={Kushwaha, Saksham Singh and Tian, Yapeng},
  journal={arXiv preprint arXiv:2412.10768},
  year={2024}
}