Massively Parallel Algorithms for Personalized PageRank
Publication in refereed journal


Times Cited (Web of Science): 0 (as at 12/09/2021)

Other information
Abstract: Personalized PageRank (PPR) has wide applications in search engines, social recommendations, community detection, and so on.
Nowadays, graphs are becoming massive, and many IT companies need to deal with large graphs that cannot fit into the memory of most commodity servers. However, most existing state-of-the-art solutions for PPR computation only work on a single machine and are inefficient in distributed frameworks, since such solutions either {\em (i)} result in an excessively large number of communication rounds, or {\em (ii)} incur high communication costs in each round.

Motivated by this, we present {\em Delta-Push}, an efficient framework for single-source and top-$k$ PPR queries in distributed settings. Our goal is to reduce the number of rounds while guaranteeing that the load, i.e., the maximum number of messages an executor sends or receives in a round, can be bounded by the capacity of each executor. We first present a non-trivial combination of a redesigned parallel push algorithm and the Monte-Carlo method to answer single-source PPR queries. The solution uses pre-sampled random walks to reduce the number of rounds for the push algorithm. Theoretical analysis under the {\em Massively Parallel Computing (MPC)} model shows that our proposed solution bounds the communication rounds by $O(\log{\frac{n^2\log{n}}{\epsilon^2m}})$ under a load of $O(m/p)$, where $m$ is the number of edges of the input graph, $p$ is the number of executors, and $\epsilon$ is a user-defined error parameter. Meanwhile, as the number of executors increases to $p' = \gamma \cdot p$, the load constraint can be relaxed, since each executor can hold $O(\gamma \cdot m/p')$ messages with invariant local memory. In such scenarios, multiple queries can be processed in batches simultaneously. We show that with a load of $O(\gamma \cdot m/p')$, our {\em Delta-Push} can process $\gamma$ queries in a batch with $O(\log{\frac{n^2\log{n}}{\gamma\epsilon^2m}})$ rounds, while baseline solutions keep the same round cost for each batch. We further present a new top-$k$ algorithm that is friendly to the distributed framework and reduces the number of rounds required in practice. Extensive experiments show that our proposed solution is more efficient than alternatives.
All Author(s) List: Guanhao Hou, Xingguang Chen, Sibo Wang, Zhewei Wei
Journal name: Proceedings of the VLDB Endowment
Year: 2021
Month: 5
Volume Number: 14
Issue Number: 9
Pages: 1668 - 1680
ISSN: 2150-8097
Languages: English-United States

Last updated on 2021-09-13 at 00:10