Locally Aligned Feature Transforms across Views
Refereed conference paper presented and published in conference proceedings

CUHK researchers


Other information

Abstract: In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected into a common feature space and then matched with softly assigned metrics that are locally optimized. The features optimal for recognizing identities differ from those for clustering cross-view transforms; they are jointly learned by means of a sparsity-inducing norm and information-theoretic regularization. The approach generalizes to settings where test images come from new camera views that are not present in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.
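As a rough illustration of the matching pipeline the abstract describes (local alignment into a common feature space, soft assignment to local cross-view transforms, and blending of locally optimized metrics), here is a minimal NumPy sketch. The projection matrices W_a and W_b, the gating centroids C, the per-cluster Mahalanobis metrics M, and the softmax gating rule are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of soft-gated local metric matching.
import numpy as np

def soft_gated_distance(x_a, x_b, W_a, W_b, C, M, temperature=1.0):
    """Distance between an image pair observed in camera views A and B.

    x_a, x_b : raw feature vectors from the two views.
    W_a, W_b : view-specific projections into a common m-dimensional space.
    C        : (K, 2*m) gating centroids, one per local cross-view transform.
    M        : (K, m, m) locally optimized Mahalanobis metrics.
    """
    # 1) Locally align the pair by projecting both views into the common space.
    z_a = W_a @ x_a
    z_b = W_b @ x_b
    # 2) Softly assign the pair to K configurations of cross-view transforms
    #    via a softmax over (negative) squared distance to the gating centroids.
    pair = np.concatenate([z_a, z_b])            # illustrative gating input
    logits = -np.sum((C - pair) ** 2, axis=1) / temperature
    logits -= logits.max()                        # numerical stability
    w = np.exp(logits)
    w /= w.sum()
    # 3) Distance under each locally optimized metric, blended by the soft weights.
    diff = z_a - z_b
    local_d = np.einsum('i,kij,j->k', diff, M, diff)
    return float(w @ local_d)

# Toy usage with random projections and identity metrics as placeholders.
rng = np.random.default_rng(0)
m, d, K = 16, 64, 4
W_a, W_b = rng.normal(size=(m, d)), rng.normal(size=(m, d))
C = rng.normal(size=(K, 2 * m))
M = np.stack([np.eye(m) for _ in range(K)])
print(soft_gated_distance(rng.normal(size=d), rng.normal(size=d), W_a, W_b, C, M))

In this sketch a smaller blended distance means the pair is more likely the same person; how the projections, centroids, and metrics are actually learned (via the sparsity-inducing norm and information-theoretic regularization mentioned above) is outside its scope.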
Authors: Li W, Wang XG
Conference name: 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference start date: 23.06.2013
Conference end date: 28.06.2013
Conference location: Portland
Conference country/region: United States
Year of publication: 2013
Month: 1
Day: 1
Publisher: IEEE
Pages: 3594 - 3601
eISBN: 978-0-7695-4989-7
ISSN: 1063-6919
Language: British English
Web of Science subject categories: Computer Science; Computer Science, Artificial Intelligence

Last updated on 2021-02-24 at 00:04