3D ShapeNets: A deep representation for volumetric shapes
Refereed conference paper presented and published in conference proceedings



Other information
Abstract: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet, a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
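
For illustration only (a minimal sketch, not the authors' released code): the volumetric representation described in the abstract starts from a binary occupancy grid on a 3D voxel grid (30x30x30 in the paper). The Python snippet below shows one plausible way to voxelize points sampled from a shape's surface into such a grid; the voxelize function, the point-cloud input, and the sphere example are illustrative assumptions rather than the paper's actual pipeline.

# Illustrative sketch (assumption, not the authors' code): map points sampled
# from a 3D shape's surface to a binary occupancy grid on a voxel grid.
import numpy as np

def voxelize(points, resolution=30):
    """Map a set of 3D points with shape (N, 3) to a binary occupancy grid.

    Each voxel is 1 if at least one point falls inside it, 0 otherwise.
    """
    points = np.asarray(points, dtype=np.float64)
    # Normalize the shape into the unit cube so it fills the grid.
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    scale = (maxs - mins).max() + 1e-9
    normalized = (points - mins) / scale  # values in [0, 1]
    # Assign each point to a voxel index, clipping boundary points.
    idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Usage example: a crude unit sphere sampled as points, voxelized into a 30^3 grid.
rng = np.random.default_rng(0)
directions = rng.normal(size=(5000, 3))
sphere_points = directions / np.linalg.norm(directions, axis=1, keepdims=True)
occupancy = voxelize(sphere_points)
print(occupancy.shape, occupancy.sum())  # (30, 30, 30) and the occupied-voxel count

A grid like this is the kind of volumetric input a 3D convolutional model such as the paper's Convolutional Deep Belief Network consumes; the network itself is not sketched here.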
All Author(s) List: Wu Z., Song S., Khosla A., Yu F., Zhang L., Tang X., Xiao J.
Name of Conference: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Start Date of Conference: 07/06/2015
End Date of Conference: 12/06/2015
Place of Conference: Boston
Country/Region of Conference: United States of America
Detailed description: Organized by IEEE
Year: 2015
Month: 10
Day: 14
Volume Number: 07-12-June-2015
Pages: 1912 - 1920
ISBN: 9781467369640
ISSN: 1063-6919
Languages: English-United Kingdom
