MoFAP: A Multi-level Representation for Action Recognition
Publication in refereed journal


Other information
Abstract
This paper proposes a multi-level video representation built by stacking the activations of motion features, atoms, and phrases (MoFAP). Motion features refer to low-level local descriptors, while motion atoms and phrases can be viewed as mid-level "temporal parts". A motion atom is defined as an atomic part of an action and captures the motion information of a video over a short temporal scale. A motion phrase is a temporal composite of multiple motion atoms defined with an AND/OR structure; it further enhances the discriminative capacity of motion atoms by incorporating temporal structure over a longer temporal scale. Specifically, we first design a discriminative clustering method to automatically discover a set of representative motion atoms. Then, we mine effective motion phrases with high discriminative and representative capacity in a bottom-up manner. From these basic units of motion features, atoms, and phrases, we construct a MoFAP network by stacking them layer by layer. This network enables us to extract effective representations of video data at different levels and scales. The separate representations from motion features, motion atoms, and motion phrases are concatenated into a single representation, called the Activation of MoFAP. The effectiveness of this representation is demonstrated on four challenging datasets: Olympic Sports, UCF50, HMDB51, and UCF101. Experimental results show that our representation achieves state-of-the-art performance on these datasets.
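
To make the AND/OR composition and the final concatenation concrete, here is a minimal Python sketch. It is not the authors' implementation: the function names, the `window` parameter, and the toy scores are illustrative assumptions. It follows one common reading of an AND/OR temporal composite, where each OR unit takes the maximum response of an atom near a temporal anchor and the AND over units takes the minimum of those responses.

```python
# Hypothetical sketch of a MoFAP-style activation vector; not the paper's code.
# Real atom responses would come from classifiers trained on discriminatively
# clustered video segments.
import numpy as np

def phrase_response(atom_scores, phrase, window=1):
    """AND/OR response of a motion phrase over a video's temporal segments.

    atom_scores: (num_atoms, num_segments) array of per-segment atom responses.
    phrase: list of (atom_id, anchor_segment) pairs composing the phrase.
    """
    or_responses = []
    for atom_id, anchor in phrase:
        lo = max(0, anchor - window)
        hi = min(atom_scores.shape[1], anchor + window + 1)
        or_responses.append(atom_scores[atom_id, lo:hi].max())  # OR: best nearby segment
    return min(or_responses)                                    # AND: weakest unit

def mofap_activation(feature_vec, atom_scores, phrases):
    """Concatenate feature-, atom-, and phrase-level activations."""
    atom_act = atom_scores.max(axis=1)  # max-pool each atom's response over time
    phrase_act = np.array([phrase_response(atom_scores, p) for p in phrases])
    return np.concatenate([feature_vec, atom_act, phrase_act])

# Toy example: 4 atoms scored over 6 temporal segments.
rng = np.random.default_rng(0)
scores = rng.random((4, 6))
phrases = [[(0, 1), (2, 4)],   # atom 0 early AND atom 2 late
           [(1, 0), (3, 5)]]
video_repr = mofap_activation(rng.random(8), scores, phrases)
print(video_repr.shape)  # (8 + 4 + 2,) -> (14,)
```

In this sketch the concatenated vector corresponds to the Activation of MoFAP described above: one block per level, so a classifier can draw on short-scale atom evidence and longer-scale phrase structure at once.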
Acceptance Date: 21/09/2015
All Author(s) List: Limin Wang, Yu Qiao, Xiaoou Tang
Journal name: International Journal of Computer Vision
Year: 2016
Month: 9
Volume Number: 119
Issue Number: 3
Publisher: Springer
Pages: 254-271
ISSN: 0920-5691
eISSN: 1573-1405
Languages: English (United States)
Keywords: Action recognition, Motion Feature, Motion Atom, Motion Phrase
