Bayesian bridge-randomized penalized quantile regression
Publication in refereed journal


Abstract: Quantile regression (QR) is an attractive alternative for modeling the conditional quantile functions of a response variable when the assumptions of linear mean regression fail to hold. One advantage of QR over traditional mean regression is that QR estimates are more robust to outliers and to a large class of error distributions. Regularization methods have proved effective in the QR literature for conducting parameter estimation and variable selection simultaneously. This study considers a bridge-randomized penalty on the regression coefficients by incorporating penalty uncertainty into Bayesian bridge QR. The asymmetric Laplace distribution (ALD) and the generalized Gaussian distribution (GGD) are imposed on the model errors and the regression coefficients, respectively, to establish a Bayesian bridge-randomized QR model. In addition, the bridge penalty exponent is treated as an unknown parameter, and a Beta prior is placed on it. By exploiting the normal-exponential and uniform-Gamma mixture representations of the ALD and the GGD, a Bayesian hierarchical model is constructed for fully Bayesian posterior inference. Gibbs sampling and Metropolis-Hastings steps are used to draw Markov chain Monte Carlo samples from the full conditional posterior distributions of all unknown parameters. Finally, the proposed procedures are illustrated in simulation studies and applied to a real-data analysis.
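The normal-exponential mixture of the ALD mentioned in the abstract is the standard device that makes the Gibbs sampler tractable: with skewness term theta = (1-2*tau)/(tau*(1-tau)) and psi^2 = 2/(tau*(1-tau)), a draw Y = mu + theta*V + psi*sqrt(V)*Z with V ~ Exp(1) and Z ~ N(0,1) follows ALD(mu, 1, tau). The sketch below (an illustrative example with unit scale, not the authors' code; the function name is my own) simulates from this mixture and checks the defining property that the tau-th quantile of the ALD equals mu.

```python
import numpy as np

def sample_ald(mu, tau, size, rng):
    """Draw from ALD(mu, sigma=1, tau) via its normal-exponential mixture:
    Y = mu + theta*V + psi*sqrt(V)*Z, with V ~ Exp(1) and Z ~ N(0, 1)."""
    theta = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))  # skewness term
    psi = np.sqrt(2.0 / (tau * (1.0 - tau)))         # scale of the normal part
    v = rng.exponential(1.0, size)                   # latent exponential weights
    z = rng.standard_normal(size)
    return mu + theta * v + psi * np.sqrt(v) * z

# Defining property of the ALD: P(Y <= mu) = tau, i.e. mu is the
# tau-th quantile. Check it empirically at tau = 0.3, mu = 0.
rng = np.random.default_rng(42)
draws = sample_ald(mu=0.0, tau=0.3, size=500_000, rng=rng)
print(np.mean(draws <= 0.0))  # should be close to 0.3
```

Conditionally on the latent V, the response is Gaussian, which is exactly what yields closed-form full conditionals for the regression coefficients in the hierarchical model described above.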
All Author(s) List: Tian Y. Z., Song X. Y.
Journal name: Computational Statistics and Data Analysis
Volume Number: 144
Article number: 106876
Languages: English-United States

Last updated on 2020-10-19 at 01:06