Training-efficient feature map for shift-invariant kernels
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: Random feature maps are widely used to scale up kernel methods. However, employing a large number of mapped features to ensure an accurate approximation still makes training time-consuming. In this paper, we aim to improve the training efficiency of shift-invariant kernels by using fewer informative features without sacrificing precision. We propose a novel feature map method that extends Random Kitchen Sinks with fast data-dependent subspace embedding to generate the desired features. More specifically, we describe two algorithms with different trade-offs between running speed and accuracy, and prove that O(l) features induced by them perform as accurately as O(l²) features produced by other feature map methods. In addition, experiments conducted on real-world datasets demonstrate the superiority of the proposed algorithms.
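For background, the sketch below illustrates the standard Random Kitchen Sinks (random Fourier feature) map that the paper builds on, using the Gaussian kernel as the shift-invariant example. It is a minimal illustration under assumed names and parameters (random_fourier_features, gamma, D), not the paper's data-dependent subspace embedding.

```python
import numpy as np

def random_fourier_features(X, D, gamma, seed=0):
    """Map X (n x d) to D random Fourier features whose inner products
    approximate the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Frequencies sampled from the kernel's spectral density (Gaussian here).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Usage: Z @ Z.T approximates the exact kernel matrix; a larger D gives a
# tighter approximation (the paper's contribution is needing fewer features).
X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, D=2000, gamma=0.5)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # typically a small error for large D
```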
All Author(s) List: Chen X., Yang H., King I., Lyu M.R.
Name of Conference: 24th International Joint Conference on Artificial Intelligence, IJCAI 2015
Start Date of Conference: 25/07/2015
End Date of Conference: 31/07/2015
Place of Conference: Buenos Aires
Country/Region of Conference: Argentina
Detailed description: Organized by IJCAI and the national AI society (or societies) of the host nation(s).
Year: 2015
Month: 1
Day: 1
Volume Number: 2015-January
Pages: 3395-3401
ISBN: 9781577357384
ISSN: 1045-0823
Languages: English (United Kingdom)
