Landmark-based Fisher vector representation for video-based face verification

Jun-Cheng Chen, Vishal M. Patel, Rama Chellappa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

Unconstrained video-based face verification is a challenging problem because of dramatic variations in pose, illumination, and image quality across the faces in a video. In this paper, we propose a landmark-based Fisher vector representation for video-to-video face verification. The proposed representation encodes dense multi-scale SIFT features extracted from patches centered at detected facial landmarks, and face similarity is computed with a distance measure learned through joint Bayesian metric learning. Experimental results demonstrate that our approach achieves significantly better performance than other competitive video-based face verification algorithms on two challenging unconstrained video face datasets, Multiple Biometric Grand Challenge (MBGC) and Face and Ocular Challenge Series (FOCS).
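
The sketch below is a minimal, illustrative reading of the pipeline the abstract describes (dense multi-scale SIFT around facial landmarks, Fisher vector encoding with a GMM, and a learned-metric similarity); it is not the authors' implementation. The landmark coordinates, patch scales, GMM size, and the simplified Mahalanobis-style similarity standing in for joint Bayesian metric learning are all assumptions.

# Illustrative sketch (assumptions noted in comments), not the paper's code.
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def landmark_sift(gray, landmarks, scales=(16, 24, 32)):
    """Multi-scale SIFT descriptors from patches centered at facial landmarks.
    'landmarks' is assumed to come from any external landmark detector."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), float(s))
           for (x, y) in landmarks for s in scales]
    _, desc = sift.compute(gray, kps)
    return desc  # shape: (num_landmarks * num_scales, 128)

def fisher_vector(desc, gmm):
    """Improved Fisher vector: gradients w.r.t. GMM means and variances,
    followed by power (signed square-root) and L2 normalization.
    Assumes a GMM fit with covariance_type='diag'."""
    N, _ = desc.shape
    gamma = gmm.predict_proba(desc)                      # (N, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (desc[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = (gamma[:, :, None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (gamma[:, :, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))               # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)             # L2 normalization

def video_fv(frame_descriptors, gmm):
    """Pool per-frame descriptors across the video, then encode once."""
    return fisher_vector(np.vstack(frame_descriptors), gmm)

def similarity(fv1, fv2, M=None):
    """Negative squared distance between two video Fisher vectors.
    M is a placeholder for the matrix a metric-learning step (e.g. the
    paper's joint Bayesian formulation) would supply; identity otherwise."""
    d = fv1 - fv2
    return -(d @ d) if M is None else -(d @ M @ d)

In this sketch the GMM would be fit offline on pooled training descriptors, e.g. GaussianMixture(n_components=64, covariance_type="diag").fit(all_train_desc); the diagonal covariance matches the Fisher vector gradient formulas used above, and the component count of 64 is only an assumed value.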

Original language: English (US)
Title of host publication: 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings
Publisher: IEEE Computer Society
Pages: 2705-2709
Number of pages: 5
ISBN (Electronic): 9781479983391
DOIs
State: Published - Dec 9 2015
Event: IEEE International Conference on Image Processing, ICIP 2015 - Quebec City, Canada
Duration: Sep 27 2015 - Sep 30 2015

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2015-December
ISSN (Print): 1522-4880

Other

Other: IEEE International Conference on Image Processing, ICIP 2015
Country: Canada
City: Quebec City
Period: 9/27/15 - 9/30/15

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

Keywords

  • Fisher vector
  • face verification
  • facial landmarks

