JavisDiT:

Joint Audio-Video Diffusion Transformer with
Hierarchical Spatio-Temporal Prior Synchronization

1 Zhejiang University, 2 National University of Singapore,
3 University of Science and Technology of China, 4 University of Rochester
Work in progress (*Correspondence)

Abstract

This paper introduces JavisDiT, a novel Joint Audio-Video Diffusion Transformer designed for synchronized audio-video generation (JAVG). Built upon the powerful Diffusion Transformer (DiT) architecture, JavisDiT generates high-quality audio and video content simultaneously from open-ended user prompts. To ensure optimal synchronization, we introduce a fine-grained spatio-temporal alignment mechanism through a Hierarchical Spatial-Temporal Synchronized Prior (HiST-Sypo) Estimator. This module extracts both global and fine-grained spatio-temporal priors, guiding the synchronization between the visual and auditory components. Furthermore, we propose a new benchmark, JavisBench, consisting of 10,140 high-quality text-captioned sounding videos spanning diverse scenes and complex real-world scenarios, and we specifically devise a robust metric for evaluating the synchronization between generated audio-video pairs in complex real-world content. Experimental results demonstrate that JavisDiT significantly outperforms existing methods by ensuring both high-quality generation and precise synchronization, setting a new standard for JAVG tasks.


Technical Description


• Motivation

JAVG Requirements: (1) High-quality single-modality generation; (2) Fine-grained inter-modality synchronization.

Teaser

Figure 1: Given the input text prompt, a JAVG system generates a spatio-temporally synchronized sounding video. The sounds align precisely with the temporal progression of the actions.


  • Spatial Synchronization. Specific objects in the video (e.g., an alien dog) should make their specific sounds in the audio (e.g., mechanical barking).
  • Temporal Synchronization. A sound's starting and ending timestamps in the audio should align with the timestamps at which the corresponding event appears and vanishes in the video.
  • Spatio-Temporal Synchronization. Different objects in the video should make different sounds in the audio, each starting and ending at its corresponding timestamps.

• JavisDiT Architecture

Teaser

Figure 2: JavisDiT is built on top of STDiT blocks with two branches for joint audio-video generation. Spatio-temporal priors and bidirectional cross-attention are utilized to ensure synchronized generation.


  • Overall Architecture. Video (frame sequence) and audio (mel-spectrogram) representations are unified as 4D tensors of shape [B, T, S, C], processed by sequential spatial-temporal self-attention and cross-attention blocks that form the scalable STDiT architecture.
  • Hierarchical Spatio-Temporal Prior. The coarse-grained st-prior is inherited from the original text encoder (T5-XXL), which encodes the overall sounding-event description. The fine-grained st-prior is produced by our proposed estimator, which infers where and when each event will start and end.
  • Multi-Stage Training Pipeline. The training pipeline is divided into three stages: (1) audio pretraining: extending the video generation branch with a new audio generation branch trained on 788K audio-text pairs; (2) st-prior estimation: learning the hierarchical prior estimator via contrastive learning, with various strategies for constructing negative (asynchronous) audio-video pairs; (3) joint av-generation: training the interaction modules (ST-CrossAttn and BiCrossAttn) to enable synchronized JAVG, with the modules learned in previous stages kept frozen.
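The unified [B, T, S, C] layout above means the same block design can process both modalities. The following PyTorch sketch illustrates the idea with a minimal spatial-temporal self-attention block; the module names, head counts, and shapes are hypothetical and are not taken from the released code:

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """Minimal spatial-temporal self-attention block over a [B, T, S, C] tensor.
    Illustrative only: module names and sizes are hypothetical."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, S, C = x.shape
        # Spatial attention: attend over the S tokens within each time step.
        xs = x.reshape(B * T, S, C)
        xs = xs + self.spatial_attn(xs, xs, xs, need_weights=False)[0]
        x = xs.reshape(B, T, S, C)
        # Temporal attention: attend over the T steps at each spatial location.
        xt = x.permute(0, 2, 1, 3).reshape(B * S, T, C)
        xt = xt + self.temporal_attn(xt, xt, xt, need_weights=False)[0]
        return xt.reshape(B, S, T, C).permute(0, 2, 1, 3)

# Video latents and audio mel-spectrogram latents share the same 4D layout,
# so one block class serves both branches (with separate weights in practice).
video = torch.randn(2, 16, 64, 128)  # [B, T=frames, S=patches, C]
audio = torch.randn(2, 16, 8, 128)   # [B, T=time bins, S=mel patches, C]
block = STBlock(dim=128)
print(block(video).shape, block(audio).shape)
```

Because both modalities flow through the same tensor layout, the cross-modal interaction modules can likewise operate on matching [B, T, S, C] streams.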

• JavisBench Benchmark

A strong generative model must ensure diverse video content, diverse audio types, and fine-grained spatio-temporal synchronization. However, current JAVG benchmarks lack the diversity and robustness needed for comprehensive evaluation. For instance, AIST++ contains only 20 samples of human dancing, while Landscape comprises merely 100 entries on landscapes. To address this shortcoming, we propose a more challenging benchmark named JavisBench, consisting of 10,140 samples drawn from multiple data sources across various real-world scenarios. We also leverage the Qwen LLM family to annotate visual-audio attributes for each sample, including event scenario, visual style, sound type, and the spatial-temporal composition of sounding events. We hope these hierarchical attribute annotations can support in-depth analysis of different JAVG scenarios and inspire the community to derive JAVG models that are more applicable to the real world.
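A per-sample annotation along the dimensions just listed might be organized as follows. This is a hypothetical schema for illustration only; the field names and category vocabularies in the released benchmark may differ:

```python
# Hypothetical JavisBench-style annotation record (field names are illustrative,
# not taken from the released benchmark files).
sample = {
    "caption": "A person welds a metal structure in a factory ...",
    "event_scenario": "Industrial",
    "visual_style": "Camera",             # e.g. Camera / 2D-Animate / 3D-Animate
    "sound_type": ["Ambient", "Mechanical"],
    "spatial_composition": "Multiple",    # single vs. multiple sounding subjects
    "temporal_composition": "Simultaneous",
}

def matches(record: dict, **criteria) -> bool:
    """Filter helper: check whether a record satisfies all given attributes."""
    for key, value in criteria.items():
        field = record[key]
        if isinstance(field, list):
            if value not in field:
                return False
        elif field != value:
            return False
    return True

print(matches(sample, event_scenario="Industrial", sound_type="Mechanical"))
```

Such structured annotations make it straightforward to slice evaluation results by scenario, style, or sounding-event composition.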


Teaser

Figure 3: Large-scale and diversified JavisBench dataset supports in-depth spatio-temporal synchronization analysis.


JavisScore Metric

We have also noticed that AV-Align, the widely adopted metric for evaluating the synchronization of generated audio-video pairs, may struggle with complex scenarios (e.g., multiple sounding events or subtle visual movements) and produce misleading results. Therefore, we propose a more robust evaluation metric, JavisScore, to measure visual-audio synchronization in diverse, real-world contexts. We also provide a human-labeled dataset for quantitatively evaluating the accuracy and reliability of different synchronization metrics, containing 3,000 synchronized and unsynchronized audio-video pairs from diverse data sources and scenarios. We look forward to a truly reliable synchronization metric that can greatly promote the development of the JAVG community.
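To make the idea of window-level synchronization measurement concrete, here is a generic sketch, not the actual JavisScore, that averages cosine similarity between time-aligned audio and video embedding windows. It assumes per-window embeddings have already been extracted by some joint audio-visual encoder (e.g., an ImageBind-style model); the function name and interface are hypothetical:

```python
import numpy as np

def windowed_av_sync(video_emb: np.ndarray, audio_emb: np.ndarray) -> float:
    """Generic window-level audio-visual agreement score (NOT the actual
    JavisScore): cosine similarity between time-aligned embedding windows,
    averaged over the clip. Inputs are [num_windows, dim] arrays from any
    joint audio-visual embedding model."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    per_window = (v * a).sum(axis=1)  # cosine similarity per window
    return float(per_window.mean())

# Identical embeddings in every window yield maximal agreement (score 1.0).
rng = np.random.default_rng(0)
e = rng.normal(size=(8, 32))
print(round(windowed_av_sync(e, e), 4))  # → 1.0
```

Averaging over windows rather than scoring the whole clip at once is what lets such a metric localize mismatches when multiple sounding events occur at different times.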

Teaser

Figure 4: Qualitative and quantitative comparison with the previous AV-synchronization metric.

Demonstrations



• Application-1: High-Resolution Sounding-Video Generation

A group of three small birds with vibrant green plumage and distinctive yellow heads are perched on a branch in a serene woodland area. The trees around them have dry, leafless branches and autumnal foliage. The birds are engaged in various activities, such as grooming themselves and looking around, while light chirping fills the air.

In a post-apocalyptic urban landscape, the city is filled with rubble, debris, and damaged structures. Tall buildings, power lines, and industrial elements are scattered throughout, creating a dystopian cityscape. The sounds of debris falling, broken glass shattering, and occasional explosions or fires can be heard in the background.


• Application-2: Diversified Sounding-Video Generation

Industrial
Camera
Ambient,Mechanical
Multiple
Simultaneous

In a factory, a person is welding a metal structure while wearing protective gear. Various tools and equipment are visible in the background. The sound of welding is prominent, accompanied by the continuous hum of a heavy engine and occasional generic impact sounds.

Urban
Camera
Music,Speech
Multiple
Simultaneous

There is a large, enthusiastic crowd illuminated by vibrant lights. A live performance features a steel guitar melody, acoustic rhythm guitar chords, and passionate male vocals. Crowd noises add to the lively ambiance, making it both emotional and spirited.

Natural
Camera
Ambient,Musical
Multiple
Simultaneous

An aerial shot captures the rugged coastal landscape with dramatic cliffs and rocky outcrops. Waves crash against the rocks. A tambourine adds a rhythmic beat. The combination of the natural coastal sounds and the upbeat music enhances the overall serenity of the scene.

Urban
3D-Animate
Ambient,Mechanical
Multiple
Simultaneous

On a chaotic battlefield, a tank in the background fires, causing a large explosion and fire in one of the buildings. The setting is a realistic 3D animation. Soldiers, wearing helmets and military uniforms, are visible, with one aiming a weapon in the foreground.

Virtual
2D-Animate
Ambient,Musical
Multiple,Off-screen
Simultaneous

A fantastical landscape features vibrant, colorful rock formations, a flowing river, and autumnal trees set against a surreal backdrop with a large, glowing planet in the sky. The scene is animated in vivid 2D, blending natural and alien elements, with music playing in the background.

Natural
Camera
Biological
Single
Single

A young monkey with light-colored fur, possibly white or cream, lies playfully on the ground surrounded by fallen leaves and twigs. It has a round, expressive face with large, dark eyes and a small nose, and its mouth is open as if it is smiling or making a sound.

Natural
Camera
Ambient,Biological
Multiple
Simultaneous

A small bird, possibly a wren, perches on a branch in a dense forest, preening its feathers with brownish plumage and a short tail. Rain falls and wind blows through the trees, creating a soothing backdrop as the bird cleans and arranges its feathers.

Related Links

You may refer to related work that serves as the foundation for our framework and code repository, such as Open-Sora, ImageBind, and AudioLDM2. We also partially draw inspiration from Open-Sora-Plan, GroundingDINO, and FoleyCrafter.