Discriminative Sparse Neighbor Approximation for Imbalanced Learning
Publication in refereed journal


Abstract: Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue, conventional methods tend to be biased toward the majority class, with poor predictive accuracy for the minority class. These methods deteriorate further on small, imbalanced data with a large degree of class overlap. In this paper, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class imbalance during prediction. Specifically, given a test sample, we first traverse it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. Then, we generate from this subset several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue, but also allows the prediction to extrapolate to unseen data. The latter capability is crucial for achieving accurate prediction on small data sets with limited samples. The proposed imbalanced learning method can be applied to both classification and regression tasks at a wide range of imbalance levels. It significantly outperforms state-of-the-art methods that lack an imbalance-handling mechanism, and it performs comparably to or even better than recent deep learning methods while using hand-crafted features only.
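The core prediction step described in the abstract, approximating a test sample by affine subspaces fitted to local clusters and choosing the best-fitting class, can be illustrated with a much-simplified sketch. This is not the DSNA algorithm itself (it omits the cost-sensitive forest, the overlapping cluster generation, and the iterative sparse approximation); all function names here (`affine_subspace`, `residual`, `predict`) are hypothetical, and each cluster is modeled by its mean plus a low-dimensional PCA basis:

```python
import numpy as np

def affine_subspace(points, dim):
    """Fit an affine subspace (mean + orthonormal basis) to a point cluster."""
    mu = points.mean(axis=0)
    # SVD of the centered cluster gives its principal directions
    _, _, vt = np.linalg.svd(points - mu, full_matrices=False)
    return mu, vt[:dim].T  # basis vectors as columns

def residual(x, mu, basis):
    """Distance from x to the affine subspace through mu spanned by basis."""
    d = x - mu
    return np.linalg.norm(d - basis @ (basis.T @ d))

def predict(x, clusters, dim=1):
    """Assign x to the class whose affine subspace approximates it best."""
    labels = sorted(clusters)
    res = [residual(x, *affine_subspace(clusters[c], dim)) for c in labels]
    return labels[int(np.argmin(res))]

# Toy example: two 2-D clusters lying near different lines
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, (20, 1))
clusters = {
    0: np.hstack([t, 0.5 * t]),                    # line y = x/2 through the origin
    1: np.hstack([t, -t]) + np.array([3.0, 3.0]),  # line of slope -1 through (3, 3)
}
print(predict(np.array([0.4, 0.2]), clusters))  # lies on the class-0 line -> 0
```

A nearest-subspace rule like this extrapolates along each cluster's affine hull rather than interpolating between stored neighbors, which hints at why the subspace view helps on small data; the full method additionally reweights and combines subspaces to counter class imbalance.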
All Author(s) List: Chen Huang, Chen Change Loy, Xiaoou Tang
Journal name: IEEE Transactions on Neural Networks and Learning Systems
Year: 2017
Month: 3
Volume Number: PP
Issue Number: 99
Pages: 1-11
ISSN: 2162-237X
eISSN: 2162-2388
Languages: English-United States

Last updated on 2020-06-07 at 02:21