ChatStitch: Visualizing Through Structures via Surround-View Unsupervised Deep Image Stitching with Collaborative LLM-Agents

Beijing Institute of Technology

Poster


ChatStitch Reveals Occluded Vehicles in Stitched Surround-View Images via Language Commands.

Video

Abstract

Surround-view perception has garnered significant attention for its ability to enhance the perception capabilities of autonomous driving vehicles through the exchange of information with surrounding cameras. However, existing surround-view perception systems are limited by an inefficient, unidirectional pattern of interaction with humans and by distortions in overlapping regions that propagate exponentially into non-overlapping areas. To address these challenges, this paper introduces ChatStitch, a surround-view human-machine co-perception system capable of unveiling obscured blind-spot information through natural language commands integrated with external digital assets. To dismantle the unidirectional interaction bottleneck, ChatStitch implements a cognitively grounded, closed-loop multi-agent interaction framework based on Large Language Models. To suppress distortion propagation across overlapping boundaries, ChatStitch proposes SV-UDIS, a surround-view unsupervised deep image stitching method for the non-global-overlapping condition. We conducted extensive experiments on the UDIS-D and MCOV-SLAM open datasets, as well as our own real-world dataset. In particular, our SV-UDIS method achieves state-of-the-art performance on the UDIS-D dataset (we extract a subset of the original UDIS-D dataset for the multi-image stitching experiments) for 3-, 4-, and 5-image stitching, with PSNR improvements of 9%, 17%, and 21%, and SSIM improvements of 8%, 18%, and 26%, respectively.
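For context, here is a minimal sketch of how PSNR/SSIM scores and relative gains of this kind are typically computed, assuming scikit-image; the function names are illustrative and are not the authors' evaluation code.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def stitching_quality(stitched: np.ndarray, reference: np.ndarray):
    """PSNR/SSIM of a stitched region against a reference view (HxWx3 uint8 arrays)."""
    psnr = peak_signal_noise_ratio(reference, stitched, data_range=255)
    ssim = structural_similarity(reference, stitched, channel_axis=-1, data_range=255)
    return psnr, ssim

def relative_gain(new: float, old: float) -> float:
    """Relative improvement: a 9% PSNR gain corresponds to (new - old) / old = 0.09."""
    return (new - old) / old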

SV-UDIS Framework

Overview of our proposed SV-UDIS. The pipeline consists of three stages: masked cylindrical projection and feature extraction, multi-image warping, and multi-image composition. Our main contributions are shown in detail at the bottom of the figure.
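As a rough illustration of how these three stages chain together, the PyTorch-style sketch below wires them in sequence; the module names and interfaces are assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn

class SVUDISPipeline(nn.Module):
    """Illustrative three-stage stitching pipeline (module names are hypothetical)."""

    def __init__(self, projector, feature_net, warper, compositor):
        super().__init__()
        self.projector = projector      # masked cylindrical projection
        self.feature_net = feature_net  # shared feature extractor
        self.warper = warper            # multi-image warping
        self.compositor = compositor    # multi-image composition

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # Stage 1: project each camera view onto a cylindrical surface and
        # extract features from the valid (masked) regions only.
        projected = [self.projector(v) for v in views]
        feats = [self.feature_net(p) for p in projected]

        # Stage 2: estimate warps that align neighbouring views using only
        # pairwise overlaps (no globally shared overlap is required).
        warped = self.warper(projected, feats)

        # Stage 3: blend the aligned views into one surround-view image.
        return self.compositor(warped)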


Results

Two-Image Stitching Results on UDIS-D


Qualitative comparison of two-image stitching on the UDIS-D dataset.

Two-Image Stitching Results on MCOV-SLAM


Qualitative comparison of two-image stitching on the MCOV-SLAM dataset.

Multi-Image Stitching Results on UDIS-D


Qualitative comparison of multi-image stitching on the UDIS-D dataset.

Multi-Image Stitching Results on MCOV-SLAM


The results of multi-image stitching by our SV-UDIS on the MCOV-SLAM dataset.

LLM Command


Result under a complex command. "With" and "without" denote the outcomes with and without our processing, respectively.
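As a rough sketch of the closed-loop command handling described in the abstract, the snippet below routes one natural-language request through an LLM planner, the stitching backend, and external digital assets. All class, method, and parameter names here are assumptions for illustration, not the released system.

from dataclasses import dataclass

@dataclass
class StitchRequest:
    """Structured intent parsed from a natural-language command (illustrative only)."""
    reveal_occluded: bool
    target_cameras: list[str]

def handle_command(command: str, llm, stitcher, assets):
    """Hypothetical closed-loop handling of one user command.

    `llm` maps free text to a StitchRequest, `stitcher` produces the
    surround-view panorama, and `assets` supplies external digital assets
    (e.g. models of vehicles hidden behind structures).
    """
    request = llm.parse(command)                      # interpret the user's intent
    panorama = stitcher(request.target_cameras)       # surround-view stitching
    if request.reveal_occluded:
        panorama = assets.overlay_occluded(panorama)  # unveil blind-spot content
    llm.report(command, panorama)                     # feed the result back to the user
    return panorama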

Real-World Results

BibTeX

@article{ChatStitch,
      title={ChatStitch: Visualizing Through Structures via Surround-View Unsupervised Deep Image Stitching with Collaborative LLM-Agents}, 
      author={Hao Liang and Zhipeng Dong and Kaixin Chen and Jiyuan Guo and Yufeng Yue and Yi Yang and Mengyin Fu},
      year={2025},
      eprint={2503.14948},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.14948}}