DeepFashion: Powering robust clothes recognition and retrieval with rich annotations
Refereed conference paper presented and published in conference proceedings



Other information
Abstract: Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotation and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondences between images taken under different scenarios, including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.
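The abstract's core mechanism is that predicted landmarks pool local features, gated by each landmark's visibility. The following is a minimal numpy sketch of that idea only; the function name, shapes, and gating form are illustrative assumptions, not the paper's actual FashionNet implementation.

```python
import numpy as np

def landmark_pool(feature_map, landmarks, visibility):
    """Pool local features at predicted landmark locations and gate them
    by visibility, in the spirit of landmark-guided pooling (illustrative
    sketch; shapes and naming are assumptions, not the paper's code).

    feature_map: (C, H, W) convolutional features
    landmarks:   (K, 2) integer (row, col) landmark positions
    visibility:  (K,) probability that each landmark is visible
    """
    C, H, W = feature_map.shape
    pooled = []
    for (r, c), v in zip(landmarks, visibility):
        # Clamp coordinates onto the feature-map grid.
        r = int(np.clip(r, 0, H - 1))
        c = int(np.clip(c, 0, W - 1))
        # Gate the local feature vector by the visibility score,
        # so occluded landmarks contribute little or nothing.
        pooled.append(v * feature_map[:, r, c])
    # Concatenate per-landmark vectors into one local descriptor.
    return np.concatenate(pooled)

# Toy example: 8 channels, a 4x4 map, 3 hypothetical landmarks.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))
lms = np.array([[0, 0], [2, 3], [3, 1]])
vis = np.array([1.0, 0.5, 0.0])  # third landmark fully occluded
local_desc = landmark_pool(fmap, lms, vis)
print(local_desc.shape)  # (24,) = 3 landmarks x 8 channels
```

In the paper this local descriptor is combined with global features for joint attribute and landmark prediction; here the occluded third landmark contributes a zero vector.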
Authors: Liu Z., Luo P., Qiu S., Wang X., Tang X.
Conference name: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Conference start date: 26.06.2016
Conference end date: 01.07.2016
Conference venue: Las Vegas
Conference country/region: United States
Detailed description: organized by IEEE
Year of publication: 2016
Month: 1
Day: 1
Volume: 2016-January
Pages: 1096 - 1104
ISBN: 9781467388511
ISSN: 1063-6919
Language: British English

Last updated on 2020-06-09 at 01:17