PanoContext: A Whole-room 3D Context Model
for Panoramic Scene Understanding



Abstract

The field of view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas for scene understanding and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside it, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model of each scene from a single view using manual annotations. Experiments show that, based solely on 3D context and without any image-region category classifier, we achieve performance comparable to the state-of-the-art object detector. This demonstrates that when the field of view is large, context is as powerful as object appearance. All data and source code are available online.
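To give a concrete sense of the holistic ranking described above, here is a minimal sketch. The toy hypotheses, the co-occurrence table, both scoring functions, and the combination weights are all illustrative stand-ins for the learned features and model in the paper, not the released code:

    import math

    # Toy whole-room hypotheses: each holds object (category, confidence)
    # pairs plus a precomputed agreement between projected box edges and
    # detected image lines. All values here are illustrative.
    hypotheses = [
        {"objects": [("bed", 0.9), ("nightstand", 0.7)], "edge_agreement": 0.80},
        {"objects": [("bed", 0.9), ("toilet", 0.6)], "edge_agreement": 0.85},
    ]

    # Assumed pairwise co-occurrence statistics (top-down context); the
    # paper learns such statistics from the annotated panorama dataset.
    PAIR_PROB = {("bed", "nightstand"): 0.6, ("bed", "toilet"): 0.01}

    def bottom_up_score(hyp):
        """Image evidence: edge agreement plus per-object confidences."""
        return hyp["edge_agreement"] + sum(c for _, c in hyp["objects"])

    def top_down_score(hyp):
        """Whole-room context: sum of log co-occurrence probabilities
        over all pairs of object categories in the hypothesis."""
        cats = [cat for cat, _ in hyp["objects"]]
        score = 0.0
        for i in range(len(cats)):
            for j in range(i + 1, len(cats)):
                pair = tuple(sorted((cats[i], cats[j])))
                score += math.log(PAIR_PROB.get(pair, 1e-3))
        return score

    def rank(hyps, w_bu=1.0, w_td=0.2):
        """Holistic ranking: weighted sum of both cues (weights assumed)."""
        key = lambda h: w_bu * bottom_up_score(h) + w_td * top_down_score(h)
        return sorted(hyps, key=key, reverse=True)

    best = rank(hypotheses)[0]  # the bed + nightstand room wins

In this toy run the second hypothesis has slightly stronger image evidence, but the implausible bed/toilet pairing lets whole-room context overturn it, which is the effect the paper measures.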

Paper

Video


Talk

Oral presentation at the main conference: Keynote slides (409 MB) and PDF slides (73 MB).

Video recording of the talk: http://videolectures.net/eccv2014_zhang_panoramic_scene/.

Dataset and Source code

Panoramic Image Processing Toolbox
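To give a flavor of the kind of operation such a toolbox performs (this sketch is not its actual API), the snippet below maps an equirectangular panorama pixel to a unit 3D viewing direction; the axis convention is an assumption:

    import math

    def equirect_pixel_to_ray(u, v, width, height):
        """Map pixel (u, v) in an equirectangular panorama to a unit 3D
        direction. Assumed convention: u = 0 is longitude -pi and v = 0
        is the top of the image (latitude +pi/2)."""
        lon = (u / width) * 2.0 * math.pi - math.pi
        lat = math.pi / 2.0 - (v / height) * math.pi
        return (math.cos(lat) * math.sin(lon),   # x
                math.sin(lat),                   # y (up)
                math.cos(lat) * math.cos(lon))   # z

    # The image center looks straight ahead along +z:
    print(equirect_pixel_to_ray(512, 256, 1024, 512))  # ~(0.0, 0.0, 1.0)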

Poster

Annotated Panorama Dataset

Supplementary material

  • supp.pdf: Due to the page limit, we moved many technical details to this file. It also contains more visualized results of our method.

Algorithm Analysis

  • theory.pdf: This file contains an analytical formulation of our model for readers who wish to study it theoretically.