DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations
Refereed conference paper presented and published in conference proceedings

Other information
Abstract: Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotation and struggle to cope with the various challenges of real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, richly annotated with massive attributes, clothing landmarks, and correspondences between images taken in different scenarios, including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms for clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. The model is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.
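The abstract mentions that estimated landmarks are used to pool or gate the learned features. As a minimal illustration of the landmark-pooling idea only (not the paper's exact FashionNet design), one can extract a local feature vector around each predicted landmark from a convolutional feature map; the function name `landmark_pool`, the square window, and the choice of max-pooling here are illustrative assumptions:

```python
import numpy as np

def landmark_pool(feature_map, landmarks, window=1):
    """Pool a local feature vector around each predicted landmark.

    feature_map: (H, W, C) array of convolutional features.
    landmarks:   (K, 2) array of integer (row, col) landmark positions.
    window:      half-width of the square pooling region (assumed here).

    Returns a (K, C) array: one max-pooled vector per landmark.
    """
    H, W, C = feature_map.shape
    pooled = np.zeros((len(landmarks), C))
    for k, (r, c) in enumerate(landmarks):
        # Clip the pooling window to the feature-map boundary.
        r0, r1 = max(r - window, 0), min(r + window + 1, H)
        c0, c1 = max(c - window, 0), min(c + window + 1, W)
        # Max-pool over the local neighborhood, per channel.
        pooled[k] = feature_map[r0:r1, c0:c1].max(axis=(0, 1))
    return pooled
```

The pooled per-landmark vectors could then be concatenated with global features for attribute prediction, in the spirit of the joint learning the abstract describes.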
All Author(s) List: Liu Z., Luo P., Qiu S., Wang X., Tang X.
Name of Conference: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Start Date of Conference: 26/06/2016
End Date of Conference: 01/07/2016
Place of Conference: Las Vegas
Country/Region of Conference: United States of America
Detailed description: organized by IEEE
Volume Number: 2016-January
Pages: 1096-1104
Languages: English (United Kingdom)

Last updated on 2020-12-07 at 02:38