Seminar
Reconstructing indoor 3D scene from RGB panorama image with a deep learning approach

Mr. Chan Cheuk Pong
Abstract

For humans, it is easy to infer the 3D environment from a single RGB image. For example, from an indoor room image, information about the room layout, the objects inside the room, and their respective transformations can be extracted with ease. If computers could do the same, it would potentially benefit industries such as virtual reality and robotics. Many approaches to 3D scene reconstruction from an RGB image have been proposed. However, a significant number of them use perspective images as input, which offer a narrower field of view and less contextual information than 360° panorama images. In addition, with continuing academic advances in Convolutional Neural Networks (CNNs), they can be heavily utilized to estimate different aspects of the scene reconstruction process with relatively high accuracy. The goal of this research is to estimate complete indoor rooms that feature the room layout and are populated with CAD models of the objects present in the image, positioned and rotated according to the image. The final visualization of the environment will be displayed in the popular game engine Unity. In the presentation, some preliminary terminology will first be introduced. After that, five main parts of this research will be covered, namely room layout estimation, object detection, object pose estimation, a 3D model dataset, and scene reconstruction display in Unity. For each section, the basic motivation as well as the general approach will be discussed, with sufficient supporting figures for further illustration.
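The five parts listed above form a natural pipeline. The following is a minimal, hypothetical sketch of that pipeline's structure; every function name, data shape, and value here is an illustrative placeholder, not the speaker's actual implementation.

```python
# Hypothetical sketch of the five-stage reconstruction pipeline described
# in the abstract. All names and values are illustrative placeholders.

def estimate_room_layout(panorama):
    # Stage 1: infer the room layout (walls, floor, ceiling) from the panorama.
    return {"walls": 4, "ceiling_height_m": 2.8}

def detect_objects(panorama):
    # Stage 2: detect object instances in the panorama (e.g., with a CNN).
    return [{"label": "chair"}, {"label": "table"}]

def estimate_pose(obj, panorama):
    # Stage 3: estimate each object's position and rotation in the room.
    return {**obj, "position": (1.0, 0.0, 2.0), "rotation_deg": 90.0}

def retrieve_cad_model(obj):
    # Stage 4: pick a matching CAD model from a 3D model dataset.
    return {**obj, "cad_model": "models/" + obj["label"] + ".obj"}

def reconstruct_scene(panorama):
    # Stage 5: assemble a scene description that a viewer (e.g., Unity)
    # could load and display.
    layout = estimate_room_layout(panorama)
    objects = [retrieve_cad_model(estimate_pose(o, panorama))
               for o in detect_objects(panorama)]
    return {"layout": layout, "objects": objects}

scene = reconstruct_scene(panorama="room_panorama.jpg")
print(len(scene["objects"]))  # 2
```

The point of the sketch is only the data flow: each stage consumes the panorama (or a previous stage's output) and enriches a scene description that the display step finally renders.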

Date

May 5, 2020

Time

9:05am

Zoom ID

805-665-6124
