Paper: GradNet Image Denoising

August 31st, 2020

High-frequency regions such as edges and textures are where image denoising performance tends to suffer. In traditional hand-crafted systems, image edges/textures were regularly used to restore the frequencies in these regions. However, this practice seems to have been largely forgotten in the deep learning era.

In this paper, we revisit this idea of using the image gradient and introduce GradNet. Our major contribution is fusing the image gradient into the network. Specifically, the image gradient is computed from the denoised network input and is then concatenated with the feature maps extracted from the shallow layers.
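
To make this fusion concrete, here is a minimal PyTorch sketch under our own assumptions: the gradient map is computed with simple forward finite differences (the post does not specify the gradient operator) and concatenated channel-wise with the shallow feature maps. The function names and the exact fusion point are illustrative, not the paper's precise architecture.

```python
import torch
import torch.nn.functional as F

def image_gradient(x):
    """Gradient magnitude of an image batch (N, C, H, W) via forward
    finite differences. The operator itself is an assumption; the post
    only says the gradient is computed from the denoised input."""
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]        # horizontal differences
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]        # vertical differences
    dx = F.pad(dx, (0, 1, 0, 0))                 # pad width back to W
    dy = F.pad(dy, (0, 0, 0, 1))                 # pad height back to H
    return torch.sqrt(dx ** 2 + dy ** 2 + 1e-12)

def fuse_gradient(shallow_features, denoised_input):
    """Hypothetical fusion step: concatenate the gradient map with the
    shallow-layer feature maps along the channel dimension."""
    grad = image_gradient(denoised_input)        # (N, C, H, W)
    return torch.cat([shallow_features, grad], dim=1)
```

Concatenating along the channel axis lets the deeper layers treat edge strength as just another feature channel, which fits the claim that gradients and shallow features are of a similar nature.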

We argue that the image gradient is intrinsically similar in nature to features from the shallow layers, which makes this fusion strategy well suited to the task. A minor contribution of this work is a gradient consistency regularization, which minimizes the difference between the gradients of the denoised image and the clean ground truth.
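
The regularizer can be sketched in the same vein, reusing image_gradient from the code above. The L1 penalty and the weight lam are assumptions on our part, since the post only states that the gradient difference is minimized.

```python
def gradient_consistency_loss(denoised, clean):
    """Penalize the difference between the gradients of the denoised
    output and the clean ground truth (L1 penalty assumed here)."""
    return torch.abs(image_gradient(denoised) - image_gradient(clean)).mean()

def total_loss(denoised, clean, lam=0.1):
    """Hypothetical total objective: reconstruction loss plus the
    gradient consistency term, weighted by a hyper-parameter lam."""
    return F.mse_loss(denoised, clean) + lam * gradient_consistency_loss(denoised, clean)
```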

Putting the two techniques together, the proposed GradNet achieves competitive denoising accuracy on three synthetic datasets and three real-world datasets.

Ablation studies show that both techniques are indispensable. Moreover, we verify that our system is particularly effective at removing noise from textured regions.

Figure 7: Visual examples of denoising results on (a) a textured region and (b) a smooth region. In (a), the PSNR produced by GradNet is 0.14 dB higher than the state-of-the-art FFDNet. In particular, GradNet preserves much finer-scale textures on the statue, while the other three methods smooth out the details. In (b), GradNet removes the noise cleanly without generating artifacts. Notably, the shading in the original image is preserved well at the same time, indicating that GradNet does not simply smooth out the pixels.

Y. Liu, S. Anwar, L. Zheng and Q. Tian, “GradNet Image Denoising,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 2020, pp. 2140-2149, doi: 10.1109/CVPRW50498.2020.00262.

Download the full paper here.

For more information, contact us.

