Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents
Refereed conference paper presented and published in conference proceedings
Other information
Abstract: The optimized certainty equivalent (OCE) is a family of risk measures that covers important examples such as entropic risk, conditional value-at-risk, and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bounds. We derive an upper bound on the regret of the proposed algorithm and also establish a minimax lower bound. Our bounds show that the regret rate achieved by the proposed algorithm has optimal dependence on the number of episodes and the number of actions.
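For context on the risk measures named in the abstract: the OCE family admits the variational form OCE_u(X) = sup_λ { λ + E[u(X − λ)] } for a utility function u, and particular choices of u recover entropic risk and conditional value-at-risk. The sketch below is illustrative only (it is not from the paper; the sample distribution, grid, and parameter values γ and α are assumptions) and approximates the empirical OCE by a grid search over λ:

```python
import numpy as np

def oce(samples, u, lam_grid):
    """Empirical optimized certainty equivalent:
    OCE_u(X) = sup_lambda { lambda + E[u(X - lambda)] },
    approximated by a grid search over lambda."""
    x = np.asarray(samples, dtype=float)
    return max(lam + u(x - lam).mean() for lam in lam_grid)

# Entropic utility u(t) = (1 - e^{-gamma t}) / gamma  ->  entropic risk
gamma = 1.0
u_entropic = lambda t: (1.0 - np.exp(-gamma * t)) / gamma

# Piecewise-linear utility u(t) = min(t, 0) / alpha  ->  CVaR at level alpha
alpha = 0.25
u_cvar = lambda t: np.minimum(t, 0.0) / alpha

# Illustrative reward sample (hypothetical, not from the paper)
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=10_000)
grid = np.linspace(-2.0, 4.0, 4001)

ent = oce(x, u_entropic, grid)
# Closed form of the entropic OCE: -(1/gamma) * log E[exp(-gamma X)]
ent_closed = -np.log(np.mean(np.exp(-gamma * x))) / gamma

cvar = oce(x, u_cvar, grid)
# Empirical CVaR_alpha: average of the worst alpha-fraction of outcomes
cvar_direct = np.sort(x)[: int(alpha * len(x))].mean()
```

The grid-search values should agree closely with the closed-form entropic risk and with the direct tail-average estimate of CVaR, which is a quick sanity check that the variational form specializes as claimed.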
Acceptance Date: 25/04/2023
All Author(s) List: Wenhao Xu, Xuefeng Gao, Xuedong He
Name of Conference: The 40th International Conference on Machine Learning
Start Date of Conference: 23/07/2023
End Date of Conference: 29/07/2023
Place of Conference: Hawaii
Country/Region of Conference: United States of America
Proceedings Title: Proceedings of the 40th International Conference on Machine Learning
Year: 2023
Month: 7
Volume Number: 202
Pages: 38400 - 38427
Languages: English (United States)