
Challenge Title:

Real DSR Challenge: Real World Depth Map Super-Resolution on the RGB-D-D Dataset

Registration URL: HERE!

Challenge Description:

As a complement to the RGB modality, the depth map provides useful depth information and has been applied in bokeh rendering, AR modeling, face recognition, gesture recognition, and more. However, the resolution of depth maps often cannot match that of RGB images, which limits practical applications. Although numerous deep learning methods have been proposed for depth map SR and have achieved impressive performance, they still fall short in detail preservation, computational complexity, and real-world applicability.

To be deployed on mobile and embedded platforms, depth map SR algorithms must balance efficiency and accuracy. Furthermore, down-sampling, as a straightforward strategy, has been widely used in existing depth map SR algorithms to construct paired HR/LR depth map training samples, but it fails to simulate the real correspondence between HR and LR depth maps. In this competition, we encourage participants to design depth map SR models suited to the real-world depth map SR task: given depth maps captured by a low-power depth sensor, models should up-sample them with high accuracy while remaining light enough for embedded systems.
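To make the critique above concrete, here is a minimal sketch of the conventional synthetic-pair construction the challenge argues against. The `synthetic_lr` helper is illustrative (not part of the challenge code), and average pooling stands in for the bicubic down-sampling commonly used; real low-power sensors introduce noise and structural degradation that no such clean down-sampling model captures.

```python
import numpy as np

def synthetic_lr(hr_depth: np.ndarray, scale: int = 4) -> np.ndarray:
    """Build a synthetic LR depth map by average-pooling the HR map.

    This mimics the widely used down-sampling strategy for constructing
    paired HR/LR training samples; it does NOT reproduce the degradation
    of a real low-power depth sensor.
    """
    h, w = hr_depth.shape
    h, w = h - h % scale, w - w % scale           # crop to a multiple of scale
    pooled = hr_depth[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return pooled.mean(axis=(1, 3))               # per-block mean depth

# Toy example: an 8x8 HR depth map down-sampled 4x to 2x2.
hr = np.arange(64, dtype=np.float32).reshape(8, 8)
lr = synthetic_lr(hr, scale=4)
print(lr.shape)  # (2, 2)
```

Models trained on such pairs tend to overfit the clean degradation model, which is exactly why this challenge evaluates on depth maps captured by a real sensor instead.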

Dataset Download:

We randomly select 1,586 portrait, 380 plant, and 249 model samples from the RGB-D-D dataset as the training set for this challenge. Meanwhile, we randomly select 50 samples from the RGB-D-D test set to evaluate your models in two phases. The dataset may only be used for academic purposes. By using this dataset and the related code, you agree to cite our dataset and baseline paper. You can apply for the training set and find more details on our group's home page: http://mepro.bjtu.edu.cn/resource.html

Baseline Method (FDSR):

Our previous work, accepted at CVPR 2021, serves as the baseline method for this challenge; the same paper also introduces the dataset used here.

The source code for the baseline method (FDSR) can be found at: https://github.com/lingzhi96/RGB-D-D-Dataset. Please cite our baseline paper if it is helpful for your research:

@inproceedings{he2021towards,
  title={Towards Fast and Accurate Real-World Depth Super-Resolution: Benchmark Dataset and Baseline},
  author={He, Lingzhi and Zhu, Hongguang and Li, Feng and Bai, Huihui and Cong, Runmin and Zhang, Chunjie and Lin, Chunyu and Liu, Meiqin and Zhao, Yao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9229--9238},
  year={2021}
}


Important Dates:

Registration Start Date: August 10, 2021

Training Dataset Available Date: August 10, 2021

Test Dataset Available Date: October 10, 2021

First-phase Submission Start Date: October 10, 2021

First-phase Submission Deadline: October 17, 2021

Second-phase Submission Start Date: October 20, 2021

Second-phase Submission Deadline: October 27, 2021

Winner Announcement Date: Around October 30, 2021

Challenge Procedures:

The training set (a subset of the RGB-D-D dataset) will be available to participants once the challenge starts. Participants are required to use the provided training set with annotations to develop a depth map super-resolution method that takes low-resolution depth maps as input, optionally guided by the corresponding RGB images. We will release the test samples according to the schedule. There will be two phases for this challenge:
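The announcement does not specify the evaluation metric here; RMSE is the metric reported by the FDSR baseline paper, so submissions will likely be scored along these lines. The `rmse` helper below is an illustrative sketch, including a validity mask for the zero-depth holes that low-power sensors commonly produce (the masking convention is an assumption, not official challenge code).

```python
import numpy as np

def rmse(pred: np.ndarray, gt: np.ndarray, mask=None) -> float:
    """Root-mean-square error between predicted and ground-truth depth.

    An optional boolean mask excludes invalid pixels; by default,
    zero-depth pixels in the ground truth are treated as sensor holes.
    """
    if mask is None:
        mask = gt > 0                      # assumption: zero depth == invalid
    diff = (pred - gt)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy check: a constant 1-unit error everywhere gives an RMSE of 1.
gt = np.full((4, 4), 100.0)
pred = gt + 1.0
print(rmse(pred, gt))  # 1.0
```

Whatever the exact protocol turns out to be, computing a masked error against the sensor-captured HR depth (rather than a synthetically down-sampled one) is what distinguishes this real-world evaluation from the usual simulated setting.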

Some detailed rules are listed as follows:

Host Organization:

MePro, Institute of Information Science, Beijing Jiaotong University

Department of Computer Science, City University of Hong Kong

Organizers: