Debiasing samples from online learning using bootstrap
Refereed conference paper presented and published in conference proceedings



Other information
Abstract: It has been shown recently in the literature (Nie et al., 2018; Shin et al., 2019a,b) that sample averages from online learning experiments are biased when used to estimate the mean reward. To correct the bias, off-policy evaluation methods, including importance sampling and doubly robust estimators, typically require the conditional propensity score, which is ill-defined for non-randomized policies such as UCB. This paper provides a procedure to debias the samples using the bootstrap, which does not require knowledge of the reward distribution and can be applied to any adaptive policy. Numerical experiments demonstrate effective bias reduction for samples generated by popular multi-armed bandit algorithms such as Explore-Then-Commit (ETC), UCB, Thompson sampling (TS), and ε-greedy (EG). We analyze and provide theoretical justification for the procedure under the ETC algorithm, including the asymptotic convergence of the bias decay rate in the real and bootstrap worlds.
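The abstract describes debiasing adaptively collected sample means by re-running the bandit algorithm in a "bootstrap world." Below is a minimal illustrative sketch of that idea for ETC on a two-armed Gaussian bandit, not the paper's actual procedure: the function names, the parametric-bootstrap flavor (resampling rewards from plug-in estimates), and all parameter values are assumptions made for illustration.

```python
import numpy as np

def run_etc(rng, means, n_explore=10, horizon=100):
    """Explore-Then-Commit on a two-armed Gaussian bandit with unit variance.
    Pull each arm n_explore times, then commit the remaining budget to the
    empirically better arm. Returns the list of observed rewards per arm."""
    rewards = [[], []]
    for arm in (0, 1):
        rewards[arm].extend(rng.normal(means[arm], 1.0, n_explore))
    best = int(np.mean(rewards[1]) > np.mean(rewards[0]))
    rewards[best].extend(rng.normal(means[best], 1.0, horizon - 2 * n_explore))
    return rewards

def bootstrap_debias(rewards, rng, n_boot=200, **etc_kwargs):
    """Estimate the adaptive-sampling bias of each arm's sample mean by
    re-running ETC in a bootstrap world where the plug-in sample means play
    the role of the true means, then subtract the estimated bias."""
    naive = np.array([np.mean(r) for r in rewards])
    boot_means = np.zeros((n_boot, 2))
    for b in range(n_boot):
        boot = run_etc(rng, naive, **etc_kwargs)
        boot_means[b] = [np.mean(r) for r in boot]
    # Bias in the bootstrap world: average bootstrap sample mean minus the
    # "true" (plug-in) mean that generated the bootstrap data.
    bias = boot_means.mean(axis=0) - naive
    return naive - bias
```

The key design point, consistent with the abstract, is that the correction uses only reruns of the algorithm itself, so no propensity scores are needed and the same recipe applies to deterministic policies such as UCB.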
All Author(s) List: Ningyuan Chen, Xuefeng Gao, Yi Xiong
Name of Conference: The 25th International Conference on Artificial Intelligence and Statistics
Start Date of Conference: 28/03/2022
End Date of Conference: 30/03/2022
Place of Conference: Virtual
Country/Region of Conference: United States of America
Year: 2022
Languages: English-United States
