
Multi Sensor Image Fusion Using Saliency Map Detection




Image fusion is the process of generating a single informative image from two or more complementary source images. It finds applications in military operations, navigation, concealed weapon detection, medical imaging, digital photography, remote sensing, and more. This paper proposes a new fusion method for multi-sensor images based on two-scale image decomposition and saliency map detection. The algorithm proceeds as follows. First, each source image is decomposed into base and detail layers. Second, a saliency map of each source image is computed using frequency-tuned saliency detection. Third, the detail layers are fused using a proposed decision map derived from the saliency maps, while the base layers are averaged to obtain the fused base layer. Finally, the fused image is generated as a linear combination of the fused base and detail layers. The method is advantageous because the saliency maps highlight salient information uniformly, with well-defined boundaries, so a decision map built from them effectively transfers complementary information from the source images to the fused image. Unlike traditional multi-scale decomposition fusion methods, the proposed method uses only a two-scale decomposition to obtain the base and detail layers, making it computationally efficient. Results of the proposed method are compared with those of existing multi-scale decomposition and spatial-domain techniques using traditional and objective fusion metrics, and they show that the proposed method outperforms the existing methods.
Copyright © 2015 Praise Worthy Prize - All rights reserved.


Saliency Detection; Decision Map; Fusion; Multi-Sensor; Two-Scale Decomposition
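The pipeline described in the abstract can be sketched in a few lines of NumPy/SciPy. This is a minimal grayscale illustration, not the paper's exact implementation: it assumes a mean filter for the base layer, a fixed Gaussian sigma for the frequency-tuned saliency (the original method operates in Lab colour space), and a hard binary decision map, all of which are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def frequency_tuned_saliency(img):
    # Frequency-tuned saliency (after Achanta et al., 2009):
    # |mean intensity - Gaussian-blurred image|, here on one grayscale
    # channel instead of the Lab colour space used in the original work.
    blurred = gaussian_filter(img, sigma=2.0)
    return np.abs(img.mean() - blurred)

def fuse_two_scale(img_a, img_b, base_size=31):
    # 1) Two-scale decomposition: base = local average, detail = residual.
    base_a = uniform_filter(img_a, size=base_size)
    base_b = uniform_filter(img_b, size=base_size)
    detail_a, detail_b = img_a - base_a, img_b - base_b

    # 2) Frequency-tuned saliency map of each source image.
    sal_a = frequency_tuned_saliency(img_a)
    sal_b = frequency_tuned_saliency(img_b)

    # 3) Decision map: take the detail pixel from the more salient source
    #    (hard binary choice here); base layers are simply averaged.
    decision = (sal_a >= sal_b).astype(img_a.dtype)
    fused_detail = decision * detail_a + (1 - decision) * detail_b
    fused_base = 0.5 * (base_a + base_b)

    # 4) Fused image = linear combination of fused base and detail layers.
    return fused_base + fused_detail

# Usage with two synthetic "sensor" images.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
fused = fuse_two_scale(a, b)
print(fused.shape)  # (64, 64)
```

Note that because base + detail reconstructs each source exactly, fusing an image with itself returns the image unchanged, which is a quick sanity check for any implementation of this scheme.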





