TY - GEN
T1 - Similarity features for facial event analysis
AU - Yang, Peng
AU - Liu, Qingshan
AU - Metaxas, Dimitris
PY - 2008
Y1 - 2008
N2 - Each facial event gives rise to complex variation in facial appearance. In this paper, we propose similarity features to describe facial appearance for video-based facial event analysis. Inspired by kernel features, we compare each sample with a reference set using a similarity function and take the log-weighted sum of the similarities as its similarity feature. Because the apex images of facial events are distinctive, we use their cluster centers as the references. To capture the temporal dynamics, we apply the K-means algorithm to divide the similarity features into several clusters in the temporal domain, and we model each cluster with a Gaussian distribution. Based on these Gaussian models, we further map the similarity features into dynamic binary patterns to handle the issue of time resolution, which implicitly embeds the time-warping operation. Haar-like descriptors are used to extract the visual features of facial appearance, and AdaBoost is applied to learn the final classifiers. Extensive experiments carried out on the Cohn-Kanade database demonstrate the promising performance of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=56749091802&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=56749091802&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-88682-2_52
DO - 10.1007/978-3-540-88682-2_52
M3 - Conference contribution
AN - SCOPUS:56749091802
SN - 3540886818
SN - 9783540886815
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 685
EP - 696
BT - Computer Vision - ECCV 2008 - 10th European Conference on Computer Vision, Proceedings
PB - Springer Verlag
T2 - 10th European Conference on Computer Vision, ECCV 2008
Y2 - 12 October 2008 through 18 October 2008
ER -