3D Modeling from Multiple Images
Refereed conference paper presented and published in conference proceedings

Abstract: Although the visual perception of 3D shape from 2D images is a basic capability of human beings, it remains challenging for computers. Hence, one goal of vision research is to computationally understand and model the latent 3D scene from the captured images, providing machines with a human-like visual system. In this paper, we present a method capable of building a realistic 3D model of the latent scene from multiple images taken at different viewpoints. Specifically, the reconstruction proceeds in two steps. First, a dense depth map is generated for each input image using a Bayesian inference model. Second, a complete 3D model of the latent scene is built by integrating all reliable 3D information embedded in the depth maps. Experiments are conducted to demonstrate the effectiveness of the proposed approach.
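The following is a minimal sketch of the two-step pipeline the abstract describes, not the authors' implementation: per-pixel depth is chosen from a set of hypotheses by minimising a photo-consistency cost (only the likelihood term of a Bayesian model; the prior terms used in the paper are omitted here), and the resulting depth maps are fused into a single point cloud by back-projection. All names (estimate_depth_map, fuse_depth_maps, warp_fn) are hypothetical.

    # Sketch only: two-step reconstruction (depth maps, then fusion). Assumes
    # grayscale images, known camera intrinsics K and poses (R, t) with the
    # convention x_cam = R @ x_world + t, and a user-supplied warp_fn that
    # warps an image into the reference view at a given depth hypothesis.
    import numpy as np

    def estimate_depth_map(ref_img, other_imgs, depth_hypotheses, warp_fn):
        """Pick, per pixel, the depth hypothesis with the lowest photo-consistency cost."""
        h, w = ref_img.shape
        best_cost = np.full((h, w), np.inf)
        best_depth = np.zeros((h, w))
        for d in depth_hypotheses:
            cost = np.zeros((h, w))
            for img in other_imgs:
                # Warp each neighbouring image to the reference view at depth d and
                # accumulate squared intensity differences (the likelihood term).
                cost += (ref_img - warp_fn(img, d)) ** 2
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_depth[better] = d
        return best_depth

    def fuse_depth_maps(depth_maps, intrinsics, poses):
        """Back-project all depth samples into world space and merge them into one point cloud."""
        points = []
        for depth, K, (R, t) in zip(depth_maps, intrinsics, poses):
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
            rays = np.linalg.inv(K) @ pix                   # pixel rays in camera coordinates
            cam_pts = rays * depth.reshape(1, -1)           # scale each ray by its depth
            world_pts = R.T @ (cam_pts - t.reshape(3, 1))   # camera -> world coordinates
            points.append(world_pts.T)
        return np.vstack(points)                            # N x 3 merged point cloud

In a real system the fusion step would also discard unreliable depth samples (e.g. by cross-checking consistency between views) before merging, as the abstract indicates.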
Authors: Zhang W, Yao JA, Cham WK
Conference Name: 7th International Symposium on Neural Networks
Conference Start Date: 06.06.2010
Conference End Date: 09.06.2010
Conference Venue: Shanghai
Conference Country/Region: China
Journal Name: Lecture Notes in Artificial Intelligence
Description: Springer
Year of Publication: 2010
Month: 1
Day: 1
Volume: 6064
Publisher: SPRINGER-VERLAG BERLIN
Pages: 97 - 103
ISBN: 978-3-642-13317-6
ISSN: 0302-9743
Language: British English
Keywords: 3D modeling; Depth map; Fusion
Web of Science Subject Categories: Computer Science; Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Theory & Methods

Last updated on 2020-11-24 at 23:03