Action recognition with trajectory-pooled deep-convolutional descriptors
Refereed conference paper presented and published in conference proceedings



Other information
Abstract: Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called the trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods for transforming convolutional feature maps: spatiotemporal normalization and channel normalization. The advantages of our features are twofold: (i) TDDs are automatically learned and have higher discriminative capacity than hand-crafted features; (ii) TDDs take into account the intrinsic characteristics of the temporal dimension and introduce trajectory-constrained sampling and pooling strategies for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves performance superior to the state of the art on these datasets.
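The abstract's pipeline can be sketched in a few lines of NumPy. This is a minimal illustration of the two normalization schemes (spatiotemporal and channel) and of trajectory-constrained pooling; the function names, array shapes, and the sum-pooling choice are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def spatiotemporal_normalize(maps):
    # maps: (T, H, W, C) convolutional feature maps for one video clip.
    # Divide each channel by its maximum response over the whole
    # spatiotemporal volume, so channels lie in a comparable range.
    max_per_channel = maps.max(axis=(0, 1, 2), keepdims=True)
    return maps / (max_per_channel + 1e-8)

def channel_normalize(maps):
    # Divide each spatiotemporal position's C-dimensional response
    # vector by its maximum over the channels at that position.
    max_per_position = maps.max(axis=3, keepdims=True)
    return maps / (max_per_position + 1e-8)

def trajectory_pool(maps, trajectory):
    # trajectory: iterable of (t, y, x) points, already rescaled to
    # the feature-map resolution.  Sum-pool the normalized responses
    # along the trajectory into one C-dimensional descriptor.
    descriptor = np.zeros(maps.shape[-1])
    for t, y, x in trajectory:
        descriptor += maps[t, y, x]
    return descriptor
```

In this sketch a TDD for one trajectory would be `trajectory_pool(spatiotemporal_normalize(maps), traj)` (or the channel-normalized variant); the real method extracts trajectories with an improved-trajectories tracker and pools from several convolutional layers.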
All Author(s) List: Wang L., Qiao Y., Tang X.
Name of Conference: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Start Date of Conference: 07/06/2015
End Date of Conference: 12/06/2015
Place of Conference: Boston
Country/Region of Conference: United States of America
Detailed description: organized by IEEE
Year: 2015
Month: 10
Day: 14
Volume Number: 07-12-June-2015
Pages: 4305-4314
ISBN: 9781467369640
ISSN: 1063-6919
Languages: English-United Kingdom

Last updated on 2020-08-07 at 02:42