Abstract
H.265/HEVC (High Efficiency Video Coding) adopts in-loop filters to reduce artifacts such as blocking and ringing, but these artifacts remain visible when the quantization step is large. This paper presents a novel in-loop filtering mechanism based on three-dimensional (3D) sub-bands of adaptive groups of frames and convolutional neural network (CNN) models, which further improves in-loop filtering capability. First, the video frame sequence is adaptively grouped; then, each group of frames is decomposed into frequency sub-bands by a 3D wavelet transform; next, each type of frequency sub-band is filtered by a corresponding CNN model; finally, the group of frames is synthesized by the inverse 3D wavelet transform. We apply the nonlinear mapping ability of CNNs to the modification of wavelet coefficients: the high-frequency and low-frequency sub-bands are filtered by four offline CNN models with different directional characteristics, each trained on data generated by the 3D wavelet transform. The details of the high-frequency components are enhanced and the quality of the low-pass image is improved, so that the artifacts are effectively alleviated. Comparative experiments show, both subjectively and objectively, that the proposed method outperforms the in-loop filtering mechanism in HM16.18; in particular, video frame quality is effectively improved when the quantization step is large.
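The following is a minimal sketch of the per-group pipeline the abstract describes (grouping, 3D wavelet decomposition, per-sub-band CNN filtering, inverse synthesis). It assumes the PyWavelets package for the 3D DWT; the `cnn_models` interface and the identity stand-ins are hypothetical placeholders for the paper's trained directional CNNs, not the authors' actual models.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the 3D wavelet transform

def filter_group(frames, cnn_models, wavelet="haar"):
    """Illustrative sketch of the per-group in-loop filtering pipeline.

    frames     : ndarray of shape (T, H, W), one adaptively chosen group of frames
    cnn_models : dict mapping sub-band keys (e.g. 'aaa', 'aad', ...) to callables
                 that return a filtered sub-band (hypothetical interface standing
                 in for the paper's four offline directional CNNs)
    """
    # 1. Decompose the group into 3D frequency sub-bands (one level shown).
    subbands = pywt.dwtn(frames, wavelet)  # dict: sub-band key -> coefficient array

    # 2. Filter each sub-band with the CNN model associated with its type.
    filtered = {}
    for key, coeffs in subbands.items():
        model = cnn_models.get(key)
        filtered[key] = model(coeffs) if model is not None else coeffs

    # 3. Synthesize the filtered group with the inverse 3D wavelet transform.
    return pywt.idwtn(filtered, wavelet)

# Example usage with identity "models" in place of the trained CNNs:
group = np.random.rand(8, 64, 64).astype(np.float32)
identity = lambda band: band
restored = filter_group(group, {k: identity for k in ["aaa", "aad", "ada", "daa"]})
```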
http://bit.ly/2BFvi9g