3D Modeling from Multiple Images
Refereed conference paper presented and published in conference proceedings

Times Cited (Web of Science): 2 (as at 22/09/2021)

Other information
Abstract: Although the visual perception of 3D shape from 2D images is a basic capability of human beings, it remains challenging for computers. Hence, one goal of vision research is to computationally understand and model the latent 3D scene from captured images, providing a human-like visual system for machines. In this paper, we present a method that builds a realistic 3D model of the latent scene from multiple images taken at different viewpoints. Specifically, the reconstruction proceeds in two steps. First, a dense depth map is generated for each input image by a Bayesian inference model. Second, a complete 3D model of the latent scene is built by integrating all reliable 3D information embedded in the depth maps. Experiments are conducted to demonstrate the effectiveness of the proposed approach.
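The paper itself provides no code. As a rough illustration of the second step described in the abstract (integrating reliable 3D information from per-view depth maps into one model), the sketch below back-projects each dense depth map into world coordinates and merges the resulting points into a single point cloud. The function names, the pinhole camera parameterization, and the confidence-threshold heuristic for "reliable" pixels are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def backproject(depth, K, R, t, conf=None, conf_thresh=0.5):
    """Back-project a dense depth map into world-space 3D points.

    depth: (H, W) depth along the camera z-axis
    K:     (3, 3) camera intrinsics
    R, t:  world-to-camera rotation (3, 3) and translation (3,)
    conf:  optional (H, W) per-pixel reliability in [0, 1]; pixels
           below conf_thresh are discarded (assumed heuristic)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x N
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                 # camera-space viewing rays
    pts_cam = rays * depth.reshape(1, -1)         # scale rays by depth
    # Invert extrinsics: x_world = R^T (x_cam - t)
    pts_world = R.T @ (pts_cam - t.reshape(3, 1))
    if conf is not None:
        keep = conf.reshape(-1) >= conf_thresh
        pts_world = pts_world[:, keep]
    return pts_world.T                            # N x 3 point array

def fuse_depth_maps(views):
    """Merge back-projected points from all views into one point cloud.

    views: iterable of (depth, K, R, t[, conf]) tuples, one per image.
    """
    clouds = [backproject(*view) for view in views]
    return np.concatenate(clouds, axis=0)
```

A real system would additionally deduplicate overlapping points and filter inconsistent depths across views before meshing; this sketch only shows the basic back-projection and accumulation.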
All Author(s) List: Zhang W, Yao JA, Cham WK
Name of Conference: 7th International Symposium on Neural Networks
Start Date of Conference: 06/06/2010
End Date of Conference: 09/06/2010
Place of Conference: Shanghai
Country/Region of Conference: China
Journal name: Lecture Notes in Artificial Intelligence
Detailed description: Springer
Volume Number: 6064
Pages: 97 - 103
Languages: English-United Kingdom
Keywords: 3D modeling; Depth map; Fusion
Web of Science Subject Categories: Computer Science; Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Theory & Methods

Last updated on 2021-09-22 at 23:55