Infrared and Visible Light Image Fusion Method Based on Non-Subsampled Contourlet Transform
Resource Overview
This paper proposes an infrared and visible light image fusion method based on the Non-Subsampled Contourlet Transform (NSCT). High-frequency coefficients are fused with adaptive weights that combine activity measures and multi-resolution coefficient correlations, while low-frequency coefficients are fused using a local gradient-based activity measure together with a hybrid weighted-averaging/selection rule.
Detailed Documentation
This paper presents an infrared and visible light image fusion method based on the Non-Subsampled Contourlet Transform (NSCT), designed to enhance both fusion quality and computational efficiency. The key improvements implemented in our approach include:
- For high-frequency coefficients, we apply weighted fusion that integrates activity measures with inter-scale correlations of the multi-resolution coefficients. The weighting draws on coefficient energy and neighborhood correlation comparisons to preserve detail from the source images while suppressing noise and artifacts. In practice this involves computing local energy or variance maps and deriving adaptive weighting functions from coefficient significance (see the first sketch after this list).
- Low-frequency coefficients are fused using an activity measure based on local gradient analysis. Gradient magnitude and orientation are evaluated within local windows to better retain low-frequency content while limiting artifacts and distortion. The gradients can be computed with Sobel or Prewitt operators, followed by a regional energy assessment (see the second sketch after this list).
- We adopt a hybrid fusion rule combining weighted averaging and coefficient selection to further optimize the result. A threshold-based switching mechanism, tunable to the image characteristics and application requirements, builds a decision map from activity-level comparisons and applies the appropriate fusion operator in each region (see the third sketch after this list).
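The first sketch below illustrates one way the high-frequency rule could be realized, assuming NumPy/SciPy (the original resource does not specify a language, and its exact activity and correlation measures may differ). The function name `fuse_highfreq` and the parameters `win` and `eps` are illustrative, not taken from the paper; the optional parent subbands stand in for the inter-scale correlation term.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq(a, b, parent_a=None, parent_b=None, win=3, eps=1e-12):
    """Weighted fusion of two directional high-frequency subbands.

    Activity is the local energy in a win x win window. When the coarser-scale
    (parent) subbands are supplied, each activity map is modulated by its local
    normalized correlation with the parent, so coefficients that are consistent
    across scales receive larger weights.
    """
    # Local energy (activity measure) of each subband
    ea = uniform_filter(a * a, size=win)
    eb = uniform_filter(b * b, size=win)

    if parent_a is not None and parent_b is not None:
        # Local normalized correlation with the parent-scale subband
        # (NSCT subbands share the same size, so no resampling is needed).
        ca = uniform_filter(a * parent_a, size=win) / np.sqrt(
            ea * uniform_filter(parent_a * parent_a, size=win) + eps)
        cb = uniform_filter(b * parent_b, size=win) / np.sqrt(
            eb * uniform_filter(parent_b * parent_b, size=win) + eps)
        ea = ea * np.abs(ca)
        eb = eb * np.abs(cb)

    # Adaptive weights from the (correlation-modulated) activity measures
    wa = ea / (ea + eb + eps)
    return wa * a + (1.0 - wa) * b
```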
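The second sketch shows a possible gradient-based activity measure for the low-frequency subbands, again assuming NumPy/SciPy; `local_gradient_activity` and `win` are illustrative names, and the paper may use a different gradient operator or window.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def local_gradient_activity(c, win=3):
    """Gradient-based activity of a low-frequency subband: Sobel gradient
    magnitude averaged over a win x win neighborhood (regional energy)."""
    gx = sobel(c, axis=0, mode="reflect")
    gy = sobel(c, axis=1, mode="reflect")
    return uniform_filter(np.hypot(gx, gy), size=win)
```

A Prewitt operator (`scipy.ndimage.prewitt`) could be substituted for `sobel` if that variant is preferred.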
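The third sketch outlines one plausible form of the hybrid weighted-averaging/selection rule driven by a decision map; the function `hybrid_fuse` and the threshold value are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def hybrid_fuse(a, b, act_a, act_b, thresh=0.15, eps=1e-12):
    """Hybrid weighted-average / selection rule driven by a decision map.

    Where the normalized activity difference exceeds `thresh`, the more active
    coefficient is selected outright; elsewhere an activity-weighted average
    is used.
    """
    # Decision map: True -> selection region, False -> averaging region
    diff = np.abs(act_a - act_b) / (np.maximum(act_a, act_b) + eps)
    decision = diff >= thresh

    wa = act_a / (act_a + act_b + eps)
    averaged = wa * a + (1.0 - wa) * b
    selected = np.where(act_a >= act_b, a, b)
    return np.where(decision, selected, averaged)
```

For the low-frequency band this could be used as, for example, `hybrid_fuse(low_ir, low_vis, local_gradient_activity(low_ir), local_gradient_activity(low_vis))`.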
Through these enhancements, our method demonstrates significant improvements in experimental results, confirming its potential and application prospects for infrared and visible light image fusion tasks. The code implementation typically involves NSCT decomposition using directional filter banks, multi-scale feature extraction, and reconstruction with optimized fusion operators.
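As a rough end-to-end outline, the sketch below strings the three rules together. The helpers `nsct_decompose` and `nsct_reconstruct` are hypothetical placeholders for an NSCT implementation (the non-subsampled pyramid and directional filter banks themselves are not implemented here), and the sketch reuses `fuse_highfreq`, `local_gradient_activity`, and `hybrid_fuse` from above.

```python
import numpy as np

def fuse_images(ir, vis, nsct_decompose, nsct_reconstruct, levels=3):
    """Fuse an infrared and a visible image of the same size.

    `nsct_decompose(img, levels)` is assumed to return (low, highs), where
    `highs` is a list (per scale) of lists of directional subbands;
    `nsct_reconstruct(low, highs)` is assumed to invert the decomposition.
    """
    low_ir, highs_ir = nsct_decompose(ir.astype(np.float64), levels)
    low_vis, highs_vis = nsct_decompose(vis.astype(np.float64), levels)

    # Low-frequency band: gradient-based activity + hybrid rule
    fused_low = hybrid_fuse(
        low_ir, low_vis,
        local_gradient_activity(low_ir), local_gradient_activity(low_vis))

    # High-frequency bands: activity/correlation-weighted fusion
    # applied per scale and direction
    fused_highs = [
        [fuse_highfreq(h_ir, h_vis) for h_ir, h_vis in zip(scale_ir, scale_vis)]
        for scale_ir, scale_vis in zip(highs_ir, highs_vis)
    ]
    return nsct_reconstruct(fused_low, fused_highs)
```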