TY - GEN
T1 - Unifying subspace and distance metric learning with Bhattacharyya coefficient for image classification
AU - Liu, Qingshan
AU - Metaxas, Dimitris N.
PY - 2009
Y1 - 2009
N2 - In this paper, we propose a unified scheme of subspace and distance metric learning under the Bayesian framework for image classification. According to the local distribution of the data, we divide the k-nearest neighbors of each sample into an intra-class set and an inter-class set, and we aim to learn a distance metric in the embedding subspace that makes the distances between the sample and its intra-class set smaller than the distances between it and its inter-class set. To reach this goal, we consider the intra-class distances and the inter-class distances as drawn from two different probability distributions, and we model the goal as minimizing the overlap between the two distributions. Inspired by Bayesian classification error estimation, we formulate the objective function as minimizing the Bhattacharyya coefficient between the two distributions. We further extend it with the kernel trick to learn a nonlinear distance metric. The power and generality of the proposed approach are demonstrated by a series of experiments on the CMU-PIE face database, the extended YALE face database, and the COREL-5000 nature image database.
AB - In this paper, we propose a unified scheme of subspace and distance metric learning under the Bayesian framework for image classification. According to the local distribution of the data, we divide the k-nearest neighbors of each sample into an intra-class set and an inter-class set, and we aim to learn a distance metric in the embedding subspace that makes the distances between the sample and its intra-class set smaller than the distances between it and its inter-class set. To reach this goal, we consider the intra-class distances and the inter-class distances as drawn from two different probability distributions, and we model the goal as minimizing the overlap between the two distributions. Inspired by Bayesian classification error estimation, we formulate the objective function as minimizing the Bhattacharyya coefficient between the two distributions. We further extend it with the kernel trick to learn a nonlinear distance metric. The power and generality of the proposed approach are demonstrated by a series of experiments on the CMU-PIE face database, the extended YALE face database, and the COREL-5000 nature image database.
UR - http://www.scopus.com/inward/record.url?scp=67649998143&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67649998143&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-00826-9_11
DO - 10.1007/978-3-642-00826-9_11
M3 - Conference contribution
AN - SCOPUS:67649998143
SN - 9783642008252
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 254
EP - 267
BT - Emerging Trends in Visual Computing - LIX Fall Colloquium, ETVC 2008, Revised Invited Papers
T2 - LIX Fall Colloquium on Emerging Trends in Visual Computing, ETVC 2008
Y2 - 18 November 2008 through 20 November 2008
ER -
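
For orientation, the following minimal Python sketch illustrates the objective described in the abstract: split each sample's k nearest neighbors into intra-class and inter-class sets, treat the two resulting sets of distances as samples from two distributions, and score their overlap with the Bhattacharyya coefficient. This is not the authors' implementation; in particular, fitting a univariate Gaussian to each distance set is an assumption made here to obtain a closed form, and the names W, overlap_objective, and k are illustrative. A learner would minimize this score over the projection W.

# Illustrative sketch (not the authors' code); assumes each distance
# distribution is modeled as a 1-D Gaussian for a closed-form coefficient.
import numpy as np

def bhattacharyya_coefficient(mu1, var1, mu2, var2):
    # Closed-form coefficient exp(-D_B) between two univariate Gaussians,
    # where D_B is the Bhattacharyya distance.
    d = 0.25 * np.log(0.25 * (var1 / var2 + var2 / var1 + 2.0)) \
        + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    return np.exp(-d)

def overlap_objective(X, labels, W, k=5):
    # Project the data into the candidate subspace, split each sample's
    # k nearest neighbors into intra-class and inter-class distances, and
    # score the overlap of the two distance distributions (smaller is better).
    Z = X @ W.T                      # embedding subspace, W is (p, D)
    intra, inter = [], []
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                # exclude the sample itself
        for j in np.argsort(d)[:k]:  # k nearest neighbors of sample i
            (intra if labels[j] == labels[i] else inter).append(d[j])
    intra, inter = np.asarray(intra), np.asarray(inter)
    return bhattacharyya_coefficient(intra.mean(), intra.var(),
                                     inter.mean(), inter.var())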