Reading scene text in deep convolutional sequences
Refereed conference paper presented and published in conference proceedings

Abstract: We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances in deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. A deep recurrent model, built on long short-term memory (LSTM), is then developed to robustly recognise the generated CNN sequences, departing from most existing approaches, which recognise each character independently. Our model has a number of appealing properties compared with existing scene text recognition methods: (i) it can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN features are robust to various image distortions; (iii) it retains the explicit order information in the word image, which is essential for discriminating word strings; (iv) the model does not depend on a pre-defined dictionary and can process unknown words and arbitrary strings. It achieves impressive results on several benchmarks, advancing the state of the art substantially.
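The sequence-labelling formulation described in the abstract typically pairs the recurrent model with a transcription step that collapses the per-frame label predictions emitted over the CNN feature sequence into a word string, with no per-character segmentation. A minimal best-path sketch of that collapsing step (function and variable names are hypothetical illustrations, not the paper's code):

```python
def best_path_decode(frame_probs, alphabet, blank=0):
    """Collapse per-frame label distributions into a string:
    take the argmax label at each frame, merge consecutive
    repeats, then drop the blank label."""
    # argmax label index for each frame
    best = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    out = []
    prev = None
    for idx in best:
        if idx != prev and idx != blank:
            # alphabet index 0 is reserved for blank,
            # so character labels start at 1
            out.append(alphabet[idx - 1])
        prev = idx
    return "".join(out)


# Toy example: 6 frames over a 4-symbol distribution
# [blank, 'a', 'b', 'c']; argmax path c, c, blank, a, blank, b
# collapses to the string "cab".
probs = [
    [0.1, 0.1, 0.1, 0.7],
    [0.1, 0.1, 0.1, 0.7],
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
]
print(best_path_decode(probs, "abc"))  # prints "cab"
```

Because repeats are merged before blanks are removed, the blank frame between the two halves of a doubled letter is what allows strings such as "ll" to survive decoding; this is the standard best-path view of how LSTM outputs over an unsegmented feature sequence become a word.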
All Author(s) List: He P., Huang W., Qiao Y., Loy C.C., Tang X.
Name of Conference: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Start Date of Conference: 12/02/2016
End Date of Conference: 17/02/2016
Place of Conference: Phoenix
Country/Region of Conference: United States of America
Detailed description: organized by AAAI
Pages: 3501 - 3508
Languages: English-United Kingdom

Last updated on 2021-05-12 at 23:48