Pyramid-based Visual Tracking Using Sparsity Represented Mean Transform
Refereed conference paper presented and published in conference proceedings



Other information
Abstract: In this paper, we propose a robust method for visual tracking relying on mean shift, sparse coding and spatial pyramids. Firstly, we extend the original mean shift approach to handle orientation space and scale space, and name this new method the mean transform. The mean transform estimates the motion of the object window of interest, including its location, orientation and scale, simultaneously and effectively. Secondly, a pixel-wise dense patch sampling technique and a region-wise trivial template design scheme are introduced, which enable our approach to run accurately and efficiently. In addition, instead of using either a holistic representation or a local representation alone, we apply spatial pyramids that combine the two representations, allowing our approach to handle partial occlusion robustly. The experimental results show that our approach outperforms state-of-the-art methods on many benchmark sequences.
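For context, the paper's mean transform extends the classic mean shift procedure, which iteratively moves a window toward the kernel-weighted mean of nearby samples until it converges on a local density mode. A minimal sketch of that baseline procedure (not the paper's full mean transform, which additionally estimates orientation and scale) might look like this; the function names and the Gaussian-kernel choice are illustrative assumptions:

```python
import numpy as np

def mean_shift_step(points, x, bandwidth):
    # One mean shift iteration: Gaussian-kernel-weighted mean of samples around x.
    w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    return (w[:, None] * points).sum(axis=0) / w.sum()

def mean_shift(points, x0, bandwidth=1.0, tol=1e-6, max_iter=200):
    # Iterate until the shift vector is negligible; returns the local mode.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = mean_shift_step(points, x, bandwidth)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy example: samples clustered around (5, 5); start the window at (3, 3).
rng = np.random.default_rng(0)
pts = rng.normal(loc=5.0, scale=0.5, size=(200, 2))
mode = mean_shift(pts, [3.0, 3.0], bandwidth=1.0)
```

In a tracker, the samples would be pixel locations weighted by how well they match the target's appearance model; the paper's contribution is to run this style of update jointly over location, orientation and scale.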
Authors: Zhang Z, Wong KH
Conference name: 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference start date: 23.06.2014
Conference end date: 28.06.2014
Conference location: Columbus
Conference country/region: United States
Description: Organized by IEEE and CVF.
Year of publication: 2014
Month: 1
Day: 1
Publisher: IEEE
Pages: 1226 - 1233
eISBN: 978-1-4799-5117-8
ISSN: 1063-6919
Language: British English
Web of Science subject categories: Computer Science; Computer Science, Artificial Intelligence

Last updated: 2020-05-21 at 01:47