A vision-based indoor positioning method with high accuracy and efficiency based on self-optimized-ordered visual vocabulary
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: In this paper, we present a novel indoor positioning method with high accuracy and efficiency that requires only the camera of a mobile device. The proposed method takes advantage of a novel visual vocabulary, the Self-Optimized-Ordered (SOO) visual vocabulary, under the Bag-of-Visual-Words framework to exploit deep connections between physical locations and feature clusters. Additionally, related techniques that improve positioning performance, such as feature selection and visual word filtering, are also designed and examined. Evaluation results show that when the training image set size varies from 20 to 640, our method saves up to 80% of processing time in both phases compared to two existing vision-based indoor positioning methods that use state-of-the-art image query techniques. Meanwhile, the average image query accuracy of our method across all evaluated indoor scenes is above 95%, which substantially improves positioning accuracy and makes the method a very suitable option for smartphone-based indoor positioning and navigation.
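To illustrate the Bag-of-Visual-Words framework the abstract refers to, here is a minimal, self-contained sketch of the two phases of a BoVW image-query pipeline: an offline phase that clusters feature descriptors into a visual vocabulary and stores one word histogram per location, and an online phase that quantizes a query image's descriptors and returns the closest stored location. This is a generic illustration only; the paper's SOO vocabulary ordering, feature selection, and visual word filtering are not reproduced, and the random descriptors, location names, and helper functions below are hypothetical stand-ins for real image features.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means: the visual vocabulary is the set of cluster centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(descriptors, vocab):
    """Quantize each descriptor to its nearest visual word, then histogram."""
    words = np.argmin(((descriptors[:, None] - vocab) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Offline phase: synthetic descriptors, one well-separated cluster per location.
locations = {name: rng.normal(loc=i * 5.0, scale=1.0, size=(60, 8))
             for i, name in enumerate(["lobby", "corridor", "office"])}
vocab = kmeans(np.vstack(list(locations.values())), k=12)
db = {name: bovw_histogram(d, vocab) for name, d in locations.items()}

def locate(query_descriptors):
    """Online phase: match the query histogram to the closest stored one."""
    q = bovw_histogram(query_descriptors, vocab)
    return min(db, key=lambda name: np.linalg.norm(db[name] - q))

query = rng.normal(loc=5.0, scale=1.0, size=(40, 8))  # descriptors near "corridor"
print(locate(query))
```

In a real system the synthetic descriptors would be replaced by local image features (e.g. SIFT or ORB) extracted from training and query photos; the paper's contribution lies in how the vocabulary is ordered and pruned so that this query step stays fast as the training image set grows.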
Authors: Wu T., Chen L.-K., Hong Y.
Conference name: IEEE/ION Position, Location and Navigation Symposium, PLANS 2016
Conference start date: 11.04.2016
Conference end date: 14.04.2016
Conference location: Savannah
Conference country/region: United States
Year of publication: 2016
Month: 5
Day: 26
Pages: 48 - 56
ISBN: 9781509020423
Language: British English
Keywords: indoor positioning, self-optimized-ordered visual vocabulary, vision-based positioning

Last updated: 2020-11-30 at 00:28