VinTAGe: Joint Video and Text Conditioning for Holistic Audio Generation

University of Texas at Dallas

TL;DR: We introduce VinTAGe, a model that can generate visually aligned and text-corresponding sounds, providing a more holistic audio experience.

Click on each video to unmute or mute its generated audio.

Abstract

Recent advances in audio generation have focused on text-to-audio (T2A) and video-to-audio (V2A) tasks. However, neither T2A nor V2A methods can generate holistic sounds (onscreen and offscreen): T2A cannot generate sounds aligned with onscreen objects, while V2A cannot generate semantically complete audio (offscreen sounds are missing). In this work, we address the task of holistic audio generation: given a video and a text prompt, we aim to generate both onscreen and offscreen sounds that are temporally synchronized with the video and semantically aligned with the text and video. Previous approaches for joint text- and video-to-audio generation often suffer from modality bias, favoring one modality over the other. To overcome this limitation, we introduce VinTAGe, a flow-based transformer model that jointly considers text and video to guide audio generation. Our framework comprises two key components: a Visual-Text Encoder and a Joint VT-SiT model. To reduce modality bias and improve generation quality, we employ pretrained uni-modal text-to-audio and video-to-audio generation models for additional guidance. Due to the lack of appropriate benchmarks, we also introduce VinTAGe-Bench, a dataset of 636 video-text-audio pairs containing both onscreen and offscreen sounds. Our comprehensive experiments on VinTAGe-Bench demonstrate that joint text and visual interaction is necessary for holistic audio generation. Furthermore, VinTAGe achieves state-of-the-art results on the VGGSound benchmark.
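The abstract mentions using pretrained uni-modal T2A and V2A models for additional guidance during generation. Below is a minimal, hypothetical sketch of how such guidance could be combined in a flow-based sampler: the joint model's velocity prediction is nudged by classifier-free-style guidance terms from each uni-modal expert. The function name, the combination rule, and the weights are illustrative assumptions, not the paper's exact formulation; random arrays stand in for real model outputs.

```python
import numpy as np

def guided_velocity(v_joint, v_t2a, v_v2a, v_uncond, w_t=1.0, w_v=1.0):
    """Hypothetical combination of the joint model's velocity field with
    guidance from pretrained uni-modal T2A and V2A models.
    Each (expert - unconditional) difference acts as a guidance direction,
    scaled by its own weight, in classifier-free-guidance style."""
    return v_joint + w_t * (v_t2a - v_uncond) + w_v * (v_v2a - v_uncond)

# Toy example: random tensors stand in for model velocity predictions.
rng = np.random.default_rng(0)
shape = (4, 8)  # e.g. (latent channels, time frames) -- illustrative only
v_joint, v_t2a, v_v2a, v_uncond = (rng.standard_normal(shape) for _ in range(4))
v = guided_velocity(v_joint, v_t2a, v_v2a, v_uncond, w_t=2.0, w_v=2.0)
print(v.shape)  # (4, 8)
```

In an actual sampler, this combined velocity would replace the raw prediction at each integration step; the weights trade off text adherence against visual synchronization.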

Joint Visual and Text-Conditioned Audio Generation

We generate a soundtrack for silent videos given a text prompt. In the examples below, you can select a text prompt using the buttons to the right of each video to hear the corresponding generated audio. (Click the button to play the video, and click it again to pause.)

Our model is capable of generating realistic audio for Sora videos in a zero-shot setting.

Demo Video

BibTeX

@article{kushwaha2024vintage,
  title={VinTAGe: Joint Video and Text Conditioning for Holistic Audio Generation},
  author={Kushwaha, Saksham Singh and Tian, Yapeng},
  journal={arXiv preprint arXiv:2412.10768},
  year={2024}
}