ℓ1,p-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: In recent years, the ℓ1,p-regularizer has been widely used to induce structured sparsity in the solutions to various optimization problems. Currently, such ℓ1,p-regularized problems are typically solved by first-order methods. Motivated by the desire to analyze the convergence rates of these methods, we show that for a large class of ℓ1,p-regularized problems, an error bound condition is satisfied when p ∈ [1,2] or p = ∞ but fails to hold for any p ∈ (2, ∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to ℓ1,p-regularized linear or logistic regression with p ∈ [1,2] or p = ∞. By contrast, numerical experiments suggest that for the same class of problems with p ∈ (2, ∞), the aforementioned methods may not converge linearly.
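For concreteness, the following is a minimal sketch (not taken from the paper) of one first-order method in the class the abstract discusses: proximal gradient descent applied to ℓ1,2-regularized least squares, i.e. group-lasso linear regression, the p = 2 case for which the abstract asserts an asymptotic linear rate. All names (prox_group_l2, prox_grad_group_lasso), the group structure, and the parameter choices are illustrative assumptions; the only paper-specific fact used is that p = 2 lies in the regime [1,2] where the error bound holds.

import numpy as np

def prox_group_l2(x, groups, t):
    # Proximal operator of t * sum_g ||x_g||_2 (the l_{1,2} regularizer):
    # block soft-thresholding applied group by group.
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * x[g]
    return out

def prox_grad_group_lasso(A, b, groups, lam, iters=1000):
    # Proximal gradient for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2,
    # with constant step size 1/L, where L = ||A||_2^2 is the Lipschitz
    # constant of the gradient of the smooth part.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of 0.5*||Ax - b||^2
        x = prox_group_l2(x - grad / L, groups, lam / L)
    return x

# Illustrative usage: 6 features split into two groups of three.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
b = rng.standard_normal(50)
groups = [np.arange(0, 3), np.arange(3, 6)]
x_hat = prox_grad_group_lasso(A, b, groups, lam=0.5)

For p = 2 the proximal step has the closed form above; for general p ∈ (1, ∞) it would have to be computed numerically, which is one reason the p ∈ {1, 2, ∞} cases dominate in practice.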
All Author(s) List: Zhou Z., Zhang Q., So A.M.-C.
Name of Conference: 32nd International Conference on Machine Learning, ICML 2015
Start Date of Conference: 06/07/2015
End Date of Conference: 11/07/2015
Place of Conference: Lille
Country/Region of Conference: France
Proceedings Title: Proceedings of the 32nd International Conference on Machine Learning
Detailed description: Organized by the International Machine Learning Society.
Year: 2015
Month: 1
Volume Number: 37
ISBN: 9781510810587
Languages: English-United Kingdom

Last updated on 2020-05-27 at 02:29