Unsupervised learning of discriminative attributes and visual representations
Refereed conference paper presented and published in conference proceedings


Full Text

Other Information
Abstract: Attributes offer useful mid-level features to interpret visual data. While most attribute learning methods are supervised by costly human-generated labels, we introduce a simple yet powerful unsupervised approach to learn and predict visual attributes directly from data. Given a large unlabeled image collection as input, we train deep Convolutional Neural Networks (CNNs) to output a set of discriminative, binary attributes, often with semantic meanings. Specifically, we first train a CNN coupled with unsupervised discriminative clustering, and then use the cluster membership as soft supervision to discover shared attributes from the clusters while maximizing their separability. The learned attributes are shown to be capable of encoding rich imagery properties from both natural images and contour patches. The visual representations learned in this way are also transferable to other tasks such as object detection. We show further convincing results on the related tasks of image retrieval, classification, and contour detection.
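The abstract describes a two-stage pipeline: cluster CNN features without labels, then treat cluster membership as soft supervision for a binary attribute head. Below is a minimal, hypothetical sketch of that idea, assuming PyTorch and scikit-learn; the small encoder, the use of k-means in place of the paper's discriminative clustering, and the pairwise binary cross-entropy loss are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch of the two-stage idea in the abstract (not the authors' code).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AttributeCNN(nn.Module):
    def __init__(self, num_attributes=64):
        super().__init__()
        # Small convolutional encoder (stand-in for the deep CNN in the paper).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Binary attribute head: sigmoid outputs act as soft attribute predictions.
        self.attributes = nn.Sequential(nn.Linear(64, num_attributes), nn.Sigmoid())

    def forward(self, x):
        feats = self.encoder(x)
        return feats, self.attributes(feats)

def train_step(model, images, num_clusters=10):
    """One illustrative iteration: cluster CNN features, then use the cluster
    membership as soft supervision for the attribute head."""
    feats, attrs = model(images)

    # Stage 1 (assumed): cluster features without labels (k-means used here as a
    # simple stand-in for the paper's unsupervised discriminative clustering).
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(
        feats.detach().cpu().numpy())
    labels = torch.as_tensor(labels, device=images.device)

    # Stage 2 (assumed): images in the same cluster should share attribute
    # patterns, while images from different clusters should be separable.
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    sim = attrs @ attrs.t() / attrs.size(1)          # pairwise attribute agreement
    loss = nn.functional.binary_cross_entropy(sim.clamp(0, 1), same)
    return loss

if __name__ == "__main__":
    model = AttributeCNN()
    images = torch.randn(32, 3, 64, 64)              # dummy unlabeled batch
    loss = train_step(model, images)
    loss.backward()
    print(f"illustrative loss: {loss.item():.4f}")

In this sketch, images assigned to the same cluster are pushed toward agreeing attribute codes while different clusters are pushed apart, which mirrors the separability objective sketched in the abstract.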
Authors: Huang C., Loy C.C., Tang X.
Conference Name: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Conference Start Date: 26.06.2016
Conference End Date: 01.07.2016
Conference Location: Las Vegas
Conference Country/Region: United States
Year of Publication: 2016
Month: 1
Day: 1
Volume: 2016-January
Pages: 5175 - 5184
ISBN: 9781467388511
ISSN: 1063-6919
Language: British English

Last updated on 2020-09-08 at 04:34