PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

1School of Computer Science and Engineering, UESTC, 2Independent Researcher, 3Beijing Institute of Technology

PerlDiff enhances controllability (orientation, position and size) over BEVControl* using geometric priors.

Abstract

Controllable generation is considered a potentially vital approach to address the challenge of annotating 3D data, and the precision of such controllable generation becomes particularly imperative in the context of data production for autonomous driving. Existing methods focus on the integration of diverse generative information into controlling inputs, utilizing frameworks such as GLIGEN or ControlNet, to produce commendable outcomes in controllable generation. However, such approaches intrinsically restrict generation performance to the learning capacities of predefined network architectures. In this paper, we explore the integration of controlling information and introduce PerlDiff (Perspective-Layout Diffusion Models), a method for effective street view image generation that fully leverages perspective 3D geometric information. PerlDiff employs 3D geometric priors to guide the generation of street view images with precise object-level control within the network learning process, resulting in a more robust and controllable output. Moreover, it demonstrates superior controllability compared to alternative layout control methods. Empirical results validate that PerlDiff markedly enhances the precision of generation on the NuScenes and KITTI datasets.

Algorithm Description of PerlDiff

PerlDiff utilizes perspective-layout masking maps derived from BEV annotations to integrate scene information and object bounding boxes for multi-view street scene generation.
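The core geometric step can be pictured as projecting each BEV-annotated 3D box into a camera view and rasterizing it into a per-object mask. The sketch below is illustrative, not PerlDiff's actual implementation: it assumes boxes already expressed in camera coordinates, a pinhole intrinsic `K`, and an axis-aligned fill of the projected corners (function names are ours).

```python
import numpy as np

def box_corners_3d(center, size, yaw):
    """Return the 8 corners of a 3D box (camera coords: x right, y down, z forward).

    center is (x, y, z), size is (w, h, l); yaw rotates about the vertical (y) axis.
    """
    w, h, l = size
    # Box-local corner offsets, one column per corner.
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * (w / 2)
    y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * (h / 2)
    z = np.array([1, -1, 1, -1, 1, -1, 1, -1]) * (l / 2)
    corners = np.stack([x, y, z])                      # (3, 8)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about y
    return R @ corners + np.asarray(center, dtype=float)[:, None]

def perspective_mask(corners, K, hw):
    """Rasterize projected box corners into a binary per-object masking map."""
    H, W = hw
    uv = K @ corners                 # pinhole projection
    uv = uv[:2] / uv[2:3]            # divide by depth
    mask = np.zeros((H, W), dtype=np.float32)
    u0 = max(int(uv[0].min()), 0)
    v0 = max(int(uv[1].min()), 0)
    u1 = min(int(np.ceil(uv[0].max())), W)
    v1 = min(int(np.ceil(uv[1].max())), H)
    if u1 > u0 and v1 > v0:
        mask[v0:v1, u0:u1] = 1.0     # axis-aligned fill of the projected box
    return mask

# Example: a car-sized box 10 m ahead of an assumed 640x480 camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
corners = box_corners_3d(center=(0.0, 1.0, 10.0), size=(1.8, 1.5, 4.5), yaw=0.3)
mask = perspective_mask(corners, K, hw=(480, 640))
```

In practice, one such mask per object (plus a mask for the road layout) is produced for every camera from the dataset's calibrated extrinsics and intrinsics, and the stack of masks is what guides generation.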

Visualization of Controllability

Orientation & Size

From top to bottom, the orientation of the object increases incrementally (-60°, 60°, and 90°); from left to right, we display objects of different sizes (car, truck, and bus).

size control

Position

From left to right, the objects move forward; from top to bottom, we display objects of varying sizes (car, truck, and bus).

pos control

Lighting and Weather

Qualitative visualizations of synthesized day, night, and rain scenarios.

lighting and weather control

Visualization of Cross Attention

BEVControl* produces disorganized and vague attention maps. PerlDiff refines the responses within the attention maps, yielding more accurate control information at the object level.
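One way to picture how geometric priors sharpen the attention maps is as an additive bias on cross-attention logits: image tokens covered by an object's perspective mask are pushed toward that object's condition embedding. The NumPy sketch below is an illustrative simplification, not PerlDiff's actual module; the function name, the additive-bias form, and the `weight` parameter are all our assumptions.

```python
import numpy as np

def masked_cross_attention(Q, K_feat, V, obj_masks, weight=5.0):
    """Cross-attention whose logits are biased by per-object perspective masks.

    Q: (N, d) image-token queries; K_feat, V: (M, d) per-object condition keys
    and values; obj_masks: (M, N) flattened perspective masks, one per object.
    The bias raises each image token's attention toward the object whose mask
    covers it, concentrating the response at the object level.
    """
    d = Q.shape[-1]
    logits = Q @ K_feat.T / np.sqrt(d)          # (N, M) standard scaled dot-product
    logits = logits + weight * obj_masks.T      # geometric prior as additive bias
    logits -= logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)    # softmax over objects
    return attn @ V                             # (N, d) attended features
```

With `weight=0` this reduces to ordinary cross-attention; increasing `weight` makes each image token attend almost exclusively to the object whose mask covers it.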

attn

NuScenes Samples

Visualization of street view images generated on the NuScenes validation set (ground truth shown in the 1st row).

NuScenes samples

KITTI Samples

Visualization of street view images generated on the KITTI validation set (ground truth shown on the left).

KITTI samples

Downstream Support

Images generated by PerlDiff can be used for data augmentation, supporting 3D object detectors such as BEVFormer and StreamPETR.

data aug

BibTeX

@article{zhang2024perldiff,
  title={PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models},
  author={Zhang, Jinhua and Sheng, Hualian and Cai, Sijia and Deng, Bing and Liang, Qiao and Li, Wen and Fu, Ying and Ye, Jieping and Gu, Shuhang},
  journal={arXiv preprint arXiv:2407.06109},
  year={2024}
}