ViP-CNN: Visual Phrase Guided Convolutional Neural Network
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: As the intermediate-level task connecting image captioning and object detection, visual relationship detection has started to attract researchers' attention because of its descriptive power and clear structure. It detects the objects and captures their pairwise interactions with a subject-predicate-object triplet, e.g. person-ride-horse. In this paper, each visual relationship is considered as a phrase with three components. We formulate visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase guided Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Phrase-guided Message Passing Structure (PMPS) to establish connections among the relationship components and help the model consider the three problems jointly. A corresponding non-maximum suppression method and model training strategy are also proposed. Experimental results show that our ViP-CNN outperforms the state-of-the-art method in both speed and accuracy. We further pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is found to perform better than pretraining on ImageNet for this task.
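The message-passing idea described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the hub-style update rule (predicate branch gathering messages from the subject and object branches, then broadcasting refined features back), the residual additions, the 512-d feature size, and all class and parameter names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PhraseMessagePassing(nn.Module):
    """Illustrative sketch of one phrase-guided message-passing step.

    Each relationship component (subject, predicate, object) holds a
    feature vector; the predicate branch acts as a hub that gathers
    messages from the other two and broadcasts updates back.
    """

    def __init__(self, dim=512):  # feature size is an assumption
        super().__init__()
        self.gather_s = nn.Linear(dim, dim)     # subject  -> predicate message
        self.gather_o = nn.Linear(dim, dim)     # object   -> predicate message
        self.broadcast_s = nn.Linear(dim, dim)  # predicate -> subject message
        self.broadcast_o = nn.Linear(dim, dim)  # predicate -> object message
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f_s, f_p, f_o):
        # Gather: the predicate feature absorbs subject/object messages.
        f_p = self.relu(f_p + self.gather_s(f_s) + self.gather_o(f_o))
        # Broadcast: the updated predicate feature refines the other branches.
        f_s = self.relu(f_s + self.broadcast_s(f_p))
        f_o = self.relu(f_o + self.broadcast_o(f_p))
        return f_s, f_p, f_o

# Usage: three 512-d branch features for one candidate triplet.
mp = PhraseMessagePassing(512)
f_s, f_p, f_o = (torch.randn(1, 512) for _ in range(3))
f_s, f_p, f_o = mp(f_s, f_p, f_o)
```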
Publisher acceptance date: 21.07.2017
Authors: Yikang Li, Wanli Ouyang, Xiaogang Wang, Xiao'ou Tang
Conference name: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference start date: 21.07.2017
Conference end date: 26.07.2017
Conference location: Honolulu, Hawaii
Conference country/region: United States
Proceedings title: Proceedings: 30th IEEE Conference on Computer Vision and Pattern Recognition CVPR 2017
Year of publication: 2017
Pages: 7244–7253
ISBN: 978-1-5386-0457-1
Language: American English

Last updated on 2018-04-05 at 15:07