Pyramid-based Visual Tracking Using Sparsity Represented Mean Transform
Refereed conference paper presented and published in conference proceedings

Times Cited (Web of Science): 17 (as at 29/03/2020)

Other information
Abstract: In this paper, we propose a robust method for visual tracking relying on mean shift, sparse coding and spatial pyramids. Firstly, we extend the original mean shift approach to handle orientation space and scale space, and name this new method the mean transform. The mean transform estimates the motion of the object window of interest, including its location, orientation and scale, simultaneously and effectively. Secondly, a pixel-wise dense patch sampling technique and a region-wise trivial template designing scheme are introduced, which enable our approach to run accurately and efficiently. In addition, instead of using either a holistic representation or a local representation alone, we apply spatial pyramids, combining the two representations in our approach to handle partial occlusion robustly. As observed from the experimental results, our approach outperforms state-of-the-art methods on many benchmark sequences.
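The mean transform described in the abstract generalises classical mean shift to orientation and scale. As background, a minimal sketch of plain mean-shift mode seeking over weighted 2-D samples is given below; the function name, Gaussian kernel, and bandwidth are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mean_shift(points, weights, start, bandwidth=2.0, iters=50, tol=1e-3):
    """Iteratively move an estimate toward the weighted local mean (mode).

    points  : (N, 2) array of sample coordinates
    weights : (N,) per-sample weights (e.g. similarity scores)
    start   : initial 2-D position estimate
    """
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        # Squared distances from the current estimate to every sample.
        d2 = np.sum((points - x) ** 2, axis=1)
        # Gaussian kernel weights, modulated by the per-sample weights.
        k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Shift the estimate to the kernel-weighted mean of the samples.
        x_new = (k[:, None] * points).sum(axis=0) / k.sum()
        if np.linalg.norm(x_new - x) < tol:  # converged to a local mode
            return x_new
        x = x_new
    return x
```

The paper's mean transform would additionally carry orientation and scale in the state being shifted; this sketch covers only the location component.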
All Author(s) List: Zhang Z, Wong KH
Name of Conference: 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Start Date of Conference: 23/06/2014
End Date of Conference: 28/06/2014
Place of Conference: Columbus
Country/Region of Conference: United States of America
Detailed description: organized by IEEE and CVF
Pages: 1226 - 1233
Languages: English-United Kingdom
Web of Science Subject Categories: Computer Science; Computer Science, Artificial Intelligence

Last updated on 2020-03-30 at 01:13