🎉🎉🎉 Accepted by ICME 2025 🎉🎉🎉

DAE-Fuse: An Adaptive Discriminative Autoencoder for Multi-Modality Image Fusion

Department of Computer Science, Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Beijing Normal-Hong Kong Baptist University

[Video: Visual / Infrared / Fusion frame triplets for three scenes]
Video Fusion Experimental Results of Our DAE-Fuse Method.

Abstract

In extreme scenarios such as nighttime or low-visibility environments, reliable perception is critical for applications like autonomous driving, robotics, and surveillance. Multi-modality image fusion, particularly with infrared imaging, offers a robust solution by combining complementary information from different modalities to enhance scene understanding and decision-making. However, current methods face significant limitations: GAN-based approaches often produce blurry images that lack fine-grained details, while AE-based methods may introduce bias toward specific modalities, leading to unnatural fusion results. To address these challenges, we propose DAE-Fuse, a novel two-phase discriminative autoencoder framework that generates sharp and natural fused images. Furthermore, we pioneer the extension of image fusion techniques from static images to the video domain while preserving temporal consistency across frames, thus advancing the perceptual capabilities required for autonomous navigation. Extensive experiments on public datasets demonstrate that DAE-Fuse achieves state-of-the-art performance on multiple benchmarks, with superior generalizability to tasks like medical image fusion.
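To make the idea concrete, below is a minimal PyTorch sketch of the general discriminative-autoencoder pattern the abstract describes: a shared encoder for both modalities, a simple cross-modality feature fusion, a decoder, and per-modality discriminators that push the fused image toward natural appearance. All names (Encoder, DAEFuseSketch, ...), layer sizes, and the softmax channel weighting are illustrative assumptions, not the released DAE-Fuse implementation; for video, the same model would simply be applied frame by frame.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    # Shared convolutional encoder applied to each modality.
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    # Reconstructs a single-channel fused image from fused features.
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, f):
        return self.net(f)


class Discriminator(nn.Module):
    # Patch-style critic; one per modality scores how natural the
    # fused result looks with respect to that modality's statistics.
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DAEFuseSketch(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.encoder = Encoder(ch)
        self.decoder = Decoder(ch)

    def forward(self, visible, infrared):
        f_vis = self.encoder(visible)
        f_ir = self.encoder(infrared)
        # Channel-wise softmax weighting as a crude stand-in for the
        # paper's attention-guided cross-modality fusion.
        scores = torch.stack([f_vis.mean(dim=(2, 3)), f_ir.mean(dim=(2, 3))])
        w = torch.softmax(scores, dim=0)  # (2, B, C)
        fused = (w[0][..., None, None] * f_vis
                 + w[1][..., None, None] * f_ir)
        return self.decoder(fused)


model = DAEFuseSketch()
d_vis, d_ir = Discriminator(), Discriminator()

vis = torch.rand(1, 1, 128, 128)  # visible frame (grayscale, [0, 1])
ir = torch.rand(1, 1, 128, 128)   # infrared frame

fused = model(vis, ir)

# Non-saturating adversarial term: each discriminator pushes the fused
# image toward the appearance statistics of its own modality.
adv = (F.softplus(-d_vis(fused)).mean()
       + F.softplus(-d_ir(fused)).mean())
print(fused.shape, float(adv))

The single adversarial term above only indicates where per-modality discriminators would plug in; the actual two-phase training schedule and attention mechanism are described in the paper.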

Results

Qualitative Comparison on Infrared-Visible Image Fusion (IVIF)

[Figure: Qualitative comparison results on IVIF]

Qualitative Comparison on Medical Image Fusion (MIF)

[Figure: Qualitative comparison results on MIF]

BibTeX

@article{guo2024dae-fuse,
  title={DAE-Fuse: An Adaptive Discriminative Autoencoder for Multi-Modality Image Fusion},
  author={Guo, Yuchen and Xu, Ruoxiang and Li, Rongcheng and Su, Weifeng},
  journal={arXiv preprint arXiv:2409.10080},
  year={2024}
}