Seeing Beyond Views: Multi-View Driving Scene Video Generation with Holistic Attention

1Harbin Institute of Technology, 2Changan Automobile, 3The University of Adelaide

Abstract

Generating multi-view videos for autonomous driving training has recently gained attention; the key challenge is maintaining both cross-view and cross-frame consistency. Existing methods typically apply decoupled attention mechanisms to the spatial, temporal, and view dimensions separately. However, these approaches often struggle to maintain consistency across dimensions, particularly for fast-moving objects that appear at different times and viewpoints. In this paper, we present CogDriving, a novel network for synthesizing high-quality multi-view driving videos. CogDriving leverages a Diffusion Transformer architecture with holistic-4D attention modules, enabling simultaneous associations across the spatial, temporal, and viewpoint dimensions. We also propose a lightweight controller tailored for CogDriving, termed the Micro-Controller, which uses only 1.1% of the parameters of a standard ControlNet while enabling precise control over Bird’s-Eye-View layouts. To improve the generation of the object instances crucial for autonomous driving, we further propose a re-weighted learning objective that dynamically adjusts the learning weights of object instances during training. CogDriving achieves strong performance on the nuScenes validation set, with an FVD score of 37.8, demonstrating its ability to generate realistic driving videos.
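To make the re-weighted learning objective concrete, here is a minimal PyTorch sketch of a per-instance weighted diffusion loss; the function name, the binary instance mask, and the fg_weight value are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def reweighted_loss(noise_pred, noise, instance_mask, fg_weight=2.0):
    # Hypothetical sketch: instance_mask is 1 inside projected object boxes
    # and 0 on background, so foreground instances count fg_weight times more.
    per_elem = F.mse_loss(noise_pred, noise, reduction="none")
    w = (1.0 + (fg_weight - 1.0) * instance_mask).expand_as(per_elem)
    return (w * per_elem).sum() / w.sum()

# Example with assumed shapes: latents (B, C, H, W), mask (B, 1, H, W).
pred, target = torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.8).float()
loss = reweighted_loss(pred, target, mask)

Normalizing by the weight sum keeps the loss scale stable as the foreground fraction varies between scenes.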

Diffusion transformer with holistic 4D-Attention

Overview of CogDriving. (a) The training process, built on a diffusion transformer with holistic 4D-Attention. (b) The detailed architecture of the diffusion transformer, in particular the holistic 4D-Attention that enables mutual interaction across the spatial, temporal, and view dimensions. (c) The proposed Micro-Controller for integrating the various conditions.
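As a rough illustration of the holistic 4D-Attention idea (a sketch under assumed tensor shapes, not the released code), the snippet below contrasts decoupled temporal attention, where tokens only interact along the time axis, with a holistic attention that flattens view, time, and space into one sequence:

import torch
import torch.nn as nn

B, V, T, H, W, C = 1, 6, 4, 8, 8, 64        # hypothetical: batch, views, frames, grid, channels
tokens = torch.randn(B, V, T, H * W, C)     # latent video tokens per view and frame

attn = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)

# Decoupled temporal attention: each spatial location in each view attends over time only.
xt = tokens.permute(0, 1, 3, 2, 4).reshape(B * V * H * W, T, C)
out_t, _ = attn(xt, xt, xt)

# Holistic 4D attention: one joint sequence over view x time x space, so a
# fast-moving object can be associated across frames and cameras in a single pass.
x = tokens.reshape(B, V * T * H * W, C)
out, _ = attn(x, x, x)

The price of the joint sequence is attention cost quadratic in V * T * H * W, which is why the flattened token count matters in practice.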

BEV layouts controlled generation

Our lightweight Micro-Controller independently encodes road maps, box IDs, class IDs, and depth maps derived from 3D annotations, enabling precise, geometry-guided synthesis.
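For intuition, here is a minimal ControlNet-style sketch of such a lightweight controller; the shared two-layer encoder, the zero-initialized projection, and all layer sizes are assumptions for illustration and do not describe CogDriving's actual design.

import torch
import torch.nn as nn

class MicroController(nn.Module):
    # Hypothetical sketch: condition maps (road map, box/class IDs, depth) are
    # assumed rasterized at the latent resolution and stacked along channels.
    def __init__(self, cond_channels: int, latent_channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(cond_channels, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.SiLU(),
        )
        self.proj = nn.Conv2d(32, latent_channels, 1)
        nn.init.zeros_(self.proj.weight)    # zero-init: the controller is a no-op
        nn.init.zeros_(self.proj.bias)      # at step 0, so training starts stable

    def forward(self, latents, cond):
        return latents + self.proj(self.encoder(cond))

# Example with assumed shapes.
ctrl = MicroController(cond_channels=8, latent_channels=16)
out = ctrl(torch.randn(1, 16, 28, 50), torch.randn(1, 8, 28, 50))

Keeping the condition encoder this small is what makes a roughly 1% parameter budget relative to a full ControlNet plausible.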

Attribute control using text description

CogDriving can generate diverse driving videos covering different weather conditions, seasons, and times of day, and even extreme scenarios such as thunderstorms.

BibTeX

@misc{lu2024cogdriving,
  title={Seeing Beyond Views: Multi-View Driving Scene Video Generation with Holistic Attention},
  author={Hannan Lu and Xiaohe Wu and Shudong Wang and Xiameng Qin and Xinyu Zhang and Junyu Han and Wangmeng Zuo and Ji Tao},
  year={2024},
  eprint={2412.03520},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.03520},
}