Joint Inference of Objects and Scenes with Efficient Learning of Text-Object-Scene Relations
Publication in refereed journal

Researchers from The Chinese University of Hong Kong

Other information
Abstract
The rapid growth of web images presents new challenges as well as opportunities for the task of image understanding. Conventional approaches rely heavily on fine-grained annotations, such as bounding boxes and semantic segmentations, which are not available for web-scale images. In general, images on the Internet are accompanied by descriptive texts that are relevant to their contents. To bridge the gap between textual and visual analysis for image understanding, this paper presents an algorithm that learns the relations between scenes, objects, and texts with the help of image-level annotations. In particular, the relation between texts and objects is modeled as the matching probability between nouns and object classes, which can be solved via a constrained bipartite matching problem. On the other hand, the relations between scenes and objects/texts are modeled as the conditional distributions of their co-occurrence. Built upon the learned cross-domain relations, an integrated model brings together scenes, objects, and texts for joint image understanding, including scene classification, object classification and localization, and the prediction of object cardinalities. The proposed cross-domain learning algorithm and the integrated model elevate the performance of image understanding for web images in the context of textual descriptions. Experimental results show that the proposed algorithm significantly outperforms conventional methods on various computer vision tasks.
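The abstract formulates text-object relations as a constrained bipartite matching between nouns and object classes. As a minimal illustrative sketch of that idea (all names and probabilities below are hypothetical, and a brute-force search stands in for the paper's actual solver, which is only practical for tiny instances):

```python
from itertools import permutations

# Hypothetical matching probabilities P[i][j] between nouns (rows)
# and object classes (columns); values are made up for illustration.
nouns = ["dog", "ball", "grass"]
classes = ["canine", "sphere", "lawn"]
P = [
    [0.9, 0.05, 0.05],
    [0.1, 0.80, 0.10],
    [0.2, 0.10, 0.70],
]

def best_matching(P):
    """Return the one-to-one assignment (noun i -> class perm[i])
    that maximizes the total matching probability, by exhaustive
    search over all permutations."""
    n = len(P)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(P[i][perm[i]] for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best

assignment = best_matching(P)
pairs = [(nouns[i], classes[j]) for i, j in enumerate(assignment)]
print(pairs)  # → [('dog', 'canine'), ('ball', 'sphere'), ('grass', 'lawn')]
```

For realistic problem sizes one would replace the exhaustive search with a polynomial-time assignment solver (e.g. the Hungarian algorithm) plus whatever constraints the paper imposes.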
Authors: Botao Wang, Dahua Lin, Hongkai Xiong, Yuan F Zheng
Journal: IEEE Transactions on Multimedia
Year of publication: 2016
Month: 3
Volume: 18
Issue: 3
Pages: 507 - 519
ISSN: 1520-9210
eISSN: 1941-0077
Language: American English

Last updated on 2021-25-01 at 02:40