Abstract
Haze during bad weather drastically degrades the visibility of a scene. The degradation of scene visibility varies with the transmission map (TrMap) of the scene, so accurate TrMap estimation is the key step in reconstructing the haze-free scene. Previous approaches, DehazeNet (Cai et al. in IEEE Trans Image Process 25(11):5187–5198, 2016) and MSCNN (Ren et al., in: European conference on computer vision, Springer, New York, 2016), overcome the drawbacks of handcrafted features; however, they fail to recover the haze-free scene without color distortion. In this paper, a cardinal color (red, green and blue) fusion network is proposed to estimate a robust TrMap, and the proposed network recovers the haze-free scene without color distortion. In the first stage, it generates multi-channel depth maps using a novel parallel convolution filter bank. The second stage comprises a multi-scale convolution filter bank that extracts scale-invariant features; spatial pooling followed by a convolution filter then estimates the robust scene transmission map. In the last stage, the estimated TrMap is fed to the optical model to recover the haze-free scene. Three benchmark datasets, namely D-HAZY (Ancuti et al., in: 2016 IEEE international conference on image processing (ICIP), 2016), ImageNet (Deng et al., in: IEEE conference on computer vision and pattern recognition, 2009 (CVPR 2009), 2009), and a set of real-world hazy scenes are used to analyze the effectiveness of the proposed approach. Structural similarity index, mean square error and peak signal-to-noise ratio are used to evaluate the proposed network for single image haze removal. Further, the effect of haze on object detection is analyzed, and a robust cascaded pipeline, i.e., haze removal followed by Faster RCNN, is proposed for object detection in hazy environments. Performance analysis shows that the proposed network estimates a robust TrMap and recovers the haze-free scene without color distortion.
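As a rough illustration of the final recovery stage, the sketch below inverts the standard atmospheric scattering (optical) model I(x) = J(x)t(x) + A(1 − t(x)) to obtain the haze-free scene J from a hazy image I, an estimated TrMap t, and a global atmospheric light A. This is a minimal sketch of that standard model only, not the paper's network; the function name, the clamping threshold t_min, and the assumption that A is given are all illustrative choices.

```python
import numpy as np

def recover_scene(hazy, trmap, atmos_light, t_min=0.1):
    """Invert I(x) = J(x)*t(x) + A*(1 - t(x)) to recover J.

    hazy        : H x W x 3 float array in [0, 1] (hazy input I)
    trmap       : H x W transmission map (e.g., estimated by a network)
    atmos_light : length-3 global atmospheric light A
    t_min       : lower bound on t to avoid amplifying noise in dense haze
    """
    t = np.clip(trmap, t_min, 1.0)[..., np.newaxis]  # broadcast over RGB
    A = np.asarray(atmos_light, dtype=float).reshape(1, 1, 3)
    J = (hazy - A) / t + A                           # J = (I - A)/t + A
    return np.clip(J, 0.0, 1.0)
```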