Generalized regularized least-squares learning with predefined features in a Hilbert space
Refereed conference paper presented and published in conference proceedings

Abstract: Kernel-based regularized learning seeks a model in a hypothesis space by minimizing both the empirical error and the model's complexity. By the representer theorem, the solution is a linear combination of translates of a kernel. This paper investigates a generalized form of the representer theorem for kernel-based learning. After mapping predefined features and translates of a kernel simultaneously onto a hypothesis space via a specific kernel construction, we propose a new algorithm that employs a generalized regularizer, one that leaves part of the space unregularized. Using a squared loss to measure the empirical error, we obtain a simple convex solution that combines the predefined features with translates of the kernel. Empirical evaluations confirm the effectiveness of the algorithm on supervised learning tasks.
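The abstract's central idea, regularizing only the kernel part of the model while leaving the predefined features unpenalized, can be sketched as a semiparametric regularized least-squares fit. The sketch below is illustrative, not the paper's actual formulation: the RBF kernel, the feature map `phi`, and all function names are assumptions. It minimizes ||Kα + Φβ − y||² + λ αᵀKα, whose stationarity conditions reduce to solving (K + λI)α + Φβ = y together with Φᵀ(K + λI)⁻¹(y − Φβ) = 0.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_semiparametric_rls(X, y, features, lam=1e-2, gamma=1.0):
    """Fit f(x) = sum_i alpha_i k(x_i, x) + beta . phi(x), penalizing only the
    kernel part: minimize ||K a + P b - y||^2 + lam * a^T K a.

    `features` maps an (n, d) array to the (n, p) matrix of predefined
    (unregularized) features; it is a hypothetical stand-in here."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    P = features(X)
    # (K + lam I)^{-1} applied to [y, P] in one solve
    M = np.linalg.solve(K + lam * np.eye(n), np.c_[y, P])
    Minv_y, Minv_P = M[:, 0], M[:, 1:]
    # beta from  P^T (K + lam I)^{-1} P  beta = P^T (K + lam I)^{-1} y
    beta = np.linalg.solve(P.T @ Minv_P, P.T @ Minv_y)
    # alpha from  (K + lam I) alpha = y - P beta
    alpha = np.linalg.solve(K + lam * np.eye(n), y - P @ beta)
    return alpha, beta

def predict(Xtr, X, alpha, beta, features, gamma=1.0):
    # Evaluate the combined kernel + predefined-feature model at new points X
    return rbf_kernel(X, Xtr, gamma) @ alpha + features(X) @ beta
```

For example, with `phi = lambda Z: np.c_[np.ones(len(Z)), Z]` the affine trend of the data is absorbed by the unregularized β while the kernel part captures the residual nonlinearity.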
All Author(s) List: Li W., Lee K.-H., Leung K.-S.
Name of Conference: 20th Annual Conference on Neural Information Processing Systems, NIPS 2006
Start Date of Conference: 04/12/2006
End Date of Conference: 07/12/2006
Place of Conference: Vancouver, BC
Country/Region of Conference: Canada
Pages: 881 - 888
Languages: English-United Kingdom

Last updated on 2020-05-29 at 01:14