Distributionally Robust Recourse Action
Refereed conference paper presented and published in conference proceedings



Other information
Abstract
A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome. Existing recourse generation methods often assume that the machine learning model does not change over time. However, this assumption does not always hold in practice because of data distribution shifts, and in this case the recourse action may become invalid. To redress this shortcoming, we propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts. We formulate the robustified recourse setup as a min-max optimization problem, where the max problem is specified by the Gelbrich distance over an ambiguity set around the distribution of model parameters. We then suggest a projected gradient descent algorithm to find a robust recourse according to the min-max objective. We show that our DiRRAc framework can be extended to hedge against misspecification of the mixture weights. Numerical experiments with synthetic data and three real-world datasets demonstrate the benefits of our proposed framework over state-of-the-art recourse methods.
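
For a rough sense of the projected gradient descent idea mentioned in the abstract, the Python sketch below searches for a recourse for a linear classifier while requiring validity under a worst-case shift of the weights. It is a minimal illustration, not the authors' DiRRAc implementation: the Gelbrich-distance ambiguity set over the parameter distribution is replaced with a simple L2 ball around the nominal weights, and the function name robust_recourse and all parameter values are hypothetical.

import numpy as np

def robust_recourse(x0, w_hat, b_hat, radius=0.2, step=0.02, n_iters=1000,
                    lam=2.0, box=(0.0, 1.0)):
    """Projected gradient descent toward a recourse that stays valid for the
    linear classifier sign(w^T x + b) even when the weights shift within an
    L2 ball of the given radius around w_hat (a simplified stand-in for the
    Gelbrich ambiguity set used in the paper)."""
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        # Worst-case margin over all w with ||w - w_hat||_2 <= radius has the
        # closed form  w_hat^T x + b - radius * ||x||_2.
        worst_margin = w_hat @ x + b_hat - radius * np.linalg.norm(x)
        # Quadratic term keeps the recourse close to the original instance;
        # the penalty is active only while the worst-case margin is negative.
        grad = 2.0 * (x - x0)
        if worst_margin < 0.0:
            grad -= lam * (w_hat - radius * x / (np.linalg.norm(x) + 1e-12))
        x = x - step * grad
        x = np.clip(x, box[0], box[1])  # projection onto the feasible box
    return x

# Toy usage: an instance rejected by the nominal model (negative margin).
w_hat, b_hat = np.array([1.0, 1.0]), -1.5
x0 = np.array([0.2, 0.3])
x_r = robust_recourse(x0, w_hat, b_hat)
print("recourse:", x_r, "nominal margin:", w_hat @ x_r + b_hat)

In this toy run the returned recourse ends near the worst-case decision boundary, so its nominal margin keeps a buffer of roughly radius times the norm of the recourse, which is the intuition behind validity under model shifts.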
All Author(s) List: Duy Nguyen, Ngoc Bui, Viet Anh Nguyen
Name of Conference: International Conference on Learning Representations
Start Date of Conference: 01/05/2023
End Date of Conference: 05/05/2023
Place of Conference: Kigali
Country/Region of Conference: Rwanda
Year: 2023
Languages: English (United States)

Last updated on 2023-09-14 at 12:26