VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning
Publication in refereed journal
Officially accepted for publication


Other information
Abstract: In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD). Unlike the choices of snapshot and starting points in SVRG and Prox-SVRG, the two vectors of VR-SGD are set to the average and last iterate of the previous epoch, respectively. These settings allow us to use much larger learning rates, and they also make our convergence analysis more challenging. We design two different update rules for smooth and non-smooth problems, respectively, which means that VR-SGD can tackle non-smooth and/or non-strongly convex problems directly without any reduction techniques. Moreover, we analyze the convergence properties of VR-SGD for strongly convex problems and show that it attains linear convergence. Unlike most algorithms, which have no convergence guarantees for non-strongly convex problems, we also provide convergence guarantees for VR-SGD in this case, and empirically verify that VR-SGD achieves performance similar to that of its momentum-accelerated variant, which has the optimal convergence rate $O(1/T^2)$. Finally, we apply VR-SGD to solve various machine learning problems, such as empirical risk minimization and leading eigenvalue computation. Experimental results show that VR-SGD converges significantly faster than SVRG and Prox-SVRG, and usually outperforms state-of-the-art accelerated methods, e.g., Katyusha.
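
The abstract describes VR-SGD's key departure from SVRG: the snapshot point is the average iterate of the previous epoch, while each new epoch starts from the previous epoch's last iterate. The following is a minimal Python sketch of such an update loop for a smooth finite-sum objective; the names (vr_sgd, grad_i, full_grad), the step size, and the toy regularized least-squares problem are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def vr_sgd(grad_i, full_grad, x0, n, eta=0.05, epochs=20, m=None, rng=None):
    # Sketch of an SVRG-style loop with VR-SGD's choice of the two vectors:
    #   snapshot = average iterate of the previous epoch
    #   start    = last iterate of the previous epoch
    # grad_i(x, i): gradient of the i-th component f_i at x
    # full_grad(x): full gradient of F(x) = (1/n) * sum_i f_i(x)
    rng = np.random.default_rng() if rng is None else rng
    m = n if m is None else m                  # inner-loop length per epoch
    x = x0.copy()                              # epoch starting point
    snapshot = x0.copy()                       # snapshot point
    for _ in range(epochs):
        mu = full_grad(snapshot)               # full gradient at the snapshot
        iterate_sum = np.zeros_like(x)
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(snapshot, i) + mu   # variance-reduced gradient
            x = x - eta * g
            iterate_sum += x
        snapshot = iterate_sum / m             # next snapshot: average of this epoch
        # x is carried over, so the next epoch starts from the last iterate
    return x

# Toy usage (assumed test problem): l2-regularized least squares
n, d = 200, 10
A, b = np.random.randn(n, d), np.random.randn(n)
lam = 0.1
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i]) + lam * x
full_grad = lambda x: A.T @ (A @ x - b) / n + lam * x
x_hat = vr_sgd(grad_i, full_grad, np.zeros(d), n)

Because the snapshot is an epoch average rather than the last iterate, larger learning rates eta are typically stable in practice, which is the behavior the abstract highlights.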
Date accepted by publisher: 30.10.2018
Authors: Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor Tsang, Lijun Zhang, Dacheng Tao, Licheng Jiao
Journal: IEEE Transactions on Knowledge and Data Engineering
Year of publication: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1041-4347
Language: American English

Last updated: 2020-07-31 at 23:15