VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning
Publication in refereed journal



Other information
Abstract: In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD). Unlike the choices of snapshot and starting points in SVRG and Prox-SVRG, the two vectors of VR-SGD are set to the average and last iterate of the previous epoch, respectively. These settings allow us to use much larger learning rates, but they also make our convergence analysis more challenging. We design two different update rules for smooth and non-smooth problems, respectively, which means that VR-SGD can tackle non-smooth and/or non-strongly convex problems directly without any reduction techniques. Moreover, we analyze the convergence properties of VR-SGD for strongly convex problems, showing that VR-SGD attains linear convergence. Unlike most algorithms, which have no convergence guarantees for non-strongly convex problems, we also provide convergence guarantees for VR-SGD in this case, and empirically verify that VR-SGD achieves performance similar to that of its momentum-accelerated variant, which has the optimal convergence rate $O(1/T^2)$. Finally, we apply VR-SGD to solve various machine learning problems, such as empirical risk minimization and leading eigenvalue computation. Experimental results show that VR-SGD converges significantly faster than SVRG and Prox-SVRG, and usually outperforms state-of-the-art accelerated methods, e.g., Katyusha.
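The update described in the abstract can be sketched as follows: like SVRG, each stochastic gradient is corrected by the difference between a component gradient at a snapshot point and the full gradient at that snapshot; unlike SVRG, the next epoch's snapshot is the average of the previous epoch's iterates, while the starting point is its last iterate. Below is a minimal illustration on ridge regression; the problem setup, step size, and epoch length are assumptions for demonstration, not the paper's experimental settings.

```python
import numpy as np

def vr_sgd(A, b, lam=0.1, eta=0.05, epochs=30, seed=0):
    """Sketch of the VR-SGD scheme for ridge regression (illustrative setup).

    Per the abstract: the snapshot for each epoch is the *average* of the
    previous epoch's iterates, while the starting point is the *last*
    iterate of the previous epoch.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape

    def grad_i(x, i):
        # Stochastic gradient of one component f_i plus the L2 term.
        return (A[i] @ x - b[i]) * A[i] + lam * x

    def full_grad(x):
        # Full gradient of the regularized least-squares objective.
        return A.T @ (A @ x - b) / n + lam * x

    x = np.zeros(d)          # starting point of the first epoch
    snapshot = x.copy()      # snapshot of the first epoch
    for _ in range(epochs):
        mu = full_grad(snapshot)         # full gradient at the snapshot
        iterates = []
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient.
            g = grad_i(x, i) - grad_i(snapshot, i) + mu
            x = x - eta * g
            iterates.append(x.copy())
        # VR-SGD choice: average iterate becomes the next snapshot,
        # while x (the last iterate) carries over as the starting point.
        snapshot = np.mean(iterates, axis=0)
    return x
```

With a well-conditioned problem and a modest step size, the iterates approach the closed-form ridge solution $(A^\top A/n + \lambda I)^{-1} A^\top b/n$, reflecting the linear convergence the paper establishes for strongly convex objectives.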
Acceptance Date: 30/10/2018
All Author(s) List: Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor Tsang, Lijun Zhang, Dacheng Tao, Jiao Licheng
Journal name: IEEE Transactions on Knowledge and Data Engineering
Year: 2020
Month: 1
Volume Number: 32
Issue Number: 1
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 188-202
ISSN: 1041-4347
eISSN: 1558-2191
Languages: English-United States

Last updated on 2020-11-09 at 03:35