A Linearly Convergent Optimization Framework for Learning Graphs from Smooth Signals
Publication in refereed journal


Other information
Abstract: Learning graph structures from a collection of smooth graph signals is a fundamental problem in data analysis and has attracted much interest in recent years. Although various optimization formulations of the problem have been proposed in the literature, existing methods for solving them either are not practically efficient or lack strong convergence guarantees. In this article, we consider a unified graph learning formulation that captures a wide range of static and time-varying graph learning models and develop a first-order method for solving it. By showing that the set of Karush-Kuhn-Tucker points of the formulation possesses a so-called error bound property, we establish the linear convergence of our proposed method. Moreover, through extensive numerical experiments on both synthetic and real data, we show that our method exhibits sharp linear convergence and can be substantially faster than a host of other existing methods. To the best of our knowledge, our work is the first to develop a first-order method that not only is practically efficient but also enjoys a linear convergence guarantee when applied to a large class of graph learning models.
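For context on the problem class the abstract refers to, below is a minimal, self-contained sketch of one widely used "smooth signal" graph learning model (a log-degree formulation) solved with plain projected gradient descent. It is only an illustration under assumed choices: the function name and parameters (alpha, beta, step, iters) are hypothetical, and it does not reproduce the paper's unified formulation or its linearly convergent proximal ADMM method.

```python
import numpy as np

def learn_graph_from_smooth_signals(X, alpha=1.0, beta=0.5, step=1e-3, iters=1000):
    """Learn a symmetric, nonnegative adjacency matrix W (zero diagonal) from
    node signals X of shape (n_nodes, n_observations).

    Minimizes an illustrative log-degree objective (not the paper's formulation):
        sum_ij W_ij * Z_ij  -  alpha * sum_i log(deg_i)  +  beta * ||W||_F^2
    where Z_ij = ||x_i - x_j||^2, via plain projected gradient descent.
    """
    n = X.shape[0]
    # Pairwise squared distances between node signal vectors (rows of X);
    # signals that are smooth on the true graph make Z small across edges.
    sq = np.sum(X ** 2, axis=1)
    Z = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(Z, 0.0)
    Z = Z / (Z.max() + 1e-12)  # normalize scale so one step size works broadly

    W = np.ones((n, n)) - np.eye(n)  # strictly positive, feasible start
    for _ in range(iters):
        deg = np.maximum(W.sum(axis=1), 1e-8)  # node degrees (floored for the log term)
        # Gradient of the smooth objective; the log-degree barrier discourages
        # isolated nodes. Symmetrize since the per-row barrier term is not symmetric.
        grad = Z - alpha / deg[:, None] + 2.0 * beta * W
        grad = 0.5 * (grad + grad.T)
        W = W - step * grad
        # Project back onto symmetric nonnegative matrices with zero diagonal.
        W = np.maximum(W, 0.0)
        np.fill_diagonal(W, 0.0)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 100))  # 20 nodes, 100 signal observations
    W = learn_graph_from_smooth_signals(X)
    print("learned adjacency:", W.shape, "edges:", int((W > 1e-4).sum() // 2))
```

Unlike this naive projected-gradient sketch, the article develops a proximal ADMM-type first-order method for a unified formulation covering static and time-varying models, and proves linear convergence via an error bound property of the KKT point set.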
All Author(s) List: Xiaolu Wang, Chaorui Yao, Anthony Man-Cho So
Journal name: IEEE Transactions on Signal and Information Processing over Networks
Year: 2023
Month: 8
Day: 10
Volume Number: 9
Publisher: IEEE
Pages: 490-504
eISSN: 2373-776X
Languages: English-United States
Keywords: Convergence, Topology, Network topology, Laplace equations, Information processing, Convex functions, Standards, Graph learning, graph signal processing, proximal ADMM, error bound, linear convergence

Last updated on 2023-12-21 at 14:17