Learning Scene-Independent Group Descriptors for Crowd Understanding
Publication in refereed journal


Abstract: Groups are the primary entities that make up a crowd. Understanding group-level dynamics and properties is thus scientifically important and practically useful in a wide range of applications, especially for crowd understanding. In this paper, we show that fundamental group-level properties, such as intra-group stability and inter-group conflict, can be systematically quantified by visual descriptors. This is made possible through learning a novel collective transition prior, which leads to a robust approach for group segregation in public spaces. Building on this segregation, we further devise a rich set of group-property visual descriptors. These descriptors are scene-independent and can be effectively applied to public scenes with a variety of crowd densities and distributions. Extensive experiments on hundreds of public scene video clips demonstrate that these property descriptors are complementary to each other, scene-independent, and convey critical information on the physical states of a crowd. The proposed group-level descriptors show promising results and potential in multiple applications, including crowd dynamic monitoring, crowd video classification, and crowd video retrieval.
Authors: Jing Shao, Chen Change Loy, Xiaogang Wang
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1290–1303

Last updated: 2020-10-19 at 03:15