Paper: From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction

August 20th, 2020

Depth completion recovers dense depth from sparse measurements, e.g., from LiDAR. Existing depth-only methods take sparse depth as their only input. However, these methods may fail to recover semantically consistent boundaries or small/thin objects due to:

  1. The sparse nature of depth points and
  2. The lack of images to provide semantic cues.

This paper continues this line of research and aims to overcome the above shortcomings.

The unique design of our depth completion model is that it simultaneously outputs a reconstructed image and a dense depth map. Specifically, we formulate image reconstruction from sparse depth as an auxiliary training task supervised by unlabelled gray-scale images.
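
The paper's full architecture is not reproduced in this post. As a rough, hypothetical sketch of the idea, a single encoder over the sparse depth input could feed two decoder heads, one per output; all module names and layer sizes below are illustrative assumptions, not the paper's actual design:

```python
import torch
import torch.nn as nn

class DualHeadDepthNet(nn.Module):
    """Illustrative sketch: a shared encoder over sparse depth feeding
    two decoder heads, one for dense depth and one for the auxiliary
    gray-scale image reconstruction."""

    def __init__(self, base_channels=32):
        super().__init__()
        # Encoder consumes a 1-channel sparse depth map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, base_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Both heads share the same encoder features.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 1, 4, stride=2, padding=1),
        )
        self.image_head = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 1, 4, stride=2, padding=1),  # gray-scale output
        )

    def forward(self, sparse_depth):
        feats = self.encoder(sparse_depth)
        return self.depth_head(feats), self.image_head(feats)
```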

During testing, our system accepts sparse depth as the only input; no image is required. This design allows the depth completion network to learn complementary image features that help it better understand object structures. The extra supervision cost incurred by image reconstruction is minimal, because no annotations other than the images themselves are needed.
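
A similarly hedged sketch of how such joint training and depth-only inference might be wired up is shown below; the L2 losses and the weighting factor `lam` are assumptions for illustration, not the paper's actual objective:

```python
import torch

def training_step(model, sparse_depth, gt_depth, gray_image, lam=0.1):
    """Combined objective (hypothetical): a depth loss at valid
    ground-truth pixels plus an auxiliary gray-scale reconstruction
    loss. The weight lam is an assumed hyper-parameter."""
    pred_depth, pred_image = model(sparse_depth)
    valid = gt_depth > 0  # LiDAR ground truth is itself semi-dense
    depth_loss = torch.mean((pred_depth[valid] - gt_depth[valid]) ** 2)
    recon_loss = torch.mean((pred_image - gray_image) ** 2)
    return depth_loss + lam * recon_loss

def infer(model, sparse_depth):
    """At test time only sparse depth is fed in; the auxiliary
    reconstructed image is simply discarded."""
    with torch.no_grad():
        pred_depth, _ = model(sparse_depth)
    return pred_depth
```

Because the auxiliary head is dropped at inference, the extra cost is paid only during training, which matches the depth-only test-time setting described above.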

We evaluate our method on the KITTI depth completion benchmark and show that depth completion can be significantly improved via the auxiliary supervision of image reconstruction. Our algorithm consistently outperforms depth-only methods and is also effective on indoor scenes such as NYUv2.

Figure 7. Visual comparison of depth completion results after incorporating image reconstruction and feature sharing. (a) RGB images for reference. (b) Using only depth features, the model cannot recover the full structure of objects. (c) With image features but without sharing, the results are slightly improved. (d) With shared features, the model better recovers consistent object structures and small/thin objects.

K. Lu, N. Barnes, S. Anwar and L. Zheng, “From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 11303-11312, doi: 10.1109/CVPR42600.2020.01132.

Download the full paper here.

For more information, contact us.

