Robust visual tracking using local sparse appearance model and k-selection

Baiyang Liu, Junzhou Huang, Casimir Kulikowski, Lin Yang

Research output: Contribution to journal › Article › peer-review

130 Scopus citations

Abstract

Online learned tracking is widely used for its adaptive ability to handle appearance changes. However, it introduces potential drifting problems due to the accumulation of errors during self-updating, especially in occluded scenarios. The recent literature demonstrates that appropriate combinations of trackers can help balance the stability and flexibility requirements. We have developed a robust tracking algorithm using a local sparse appearance model (SPT) and K-Selection. A static sparse dictionary and a dynamically updated online dictionary basis distribution are used to model the target appearance. A novel sparse representation-based voting map and a sparse constraint regularized mean shift are proposed to track the object robustly. Besides these contributions, we also introduce a new selection-based dictionary learning algorithm with a locally constrained sparse representation, called K-Selection. Based on a set of comprehensive experiments, our algorithm has demonstrated better performance than alternatives reported in the recent literature.
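As a rough illustration of the selection-based dictionary idea described in the abstract, the sketch below greedily picks K sample patches as dictionary atoms by minimizing the least-squares reconstruction error over all patches. The function names, the greedy criterion, and the use of plain least-squares codes (instead of the paper's locally constrained sparse representation) are illustrative assumptions, not the authors' K-Selection algorithm.

```python
# Hypothetical sketch of selection-based dictionary construction.
# NOT the paper's exact K-Selection procedure; the greedy criterion and
# plain least-squares coding are stand-in assumptions for illustration.
import numpy as np

def select_dictionary(samples: np.ndarray, k: int) -> np.ndarray:
    """Greedily pick k sample columns that best reconstruct all samples.

    samples: (d, n) matrix whose columns are vectorized local patches.
    Returns the (d, k) dictionary formed by the selected columns.
    """
    d, n = samples.shape
    selected: list[int] = []
    for _ in range(k):
        best_idx, best_err = None, np.inf
        for j in range(n):
            if j in selected:
                continue
            D = samples[:, selected + [j]]              # candidate dictionary
            # Least-squares codes for every sample under this dictionary.
            codes, *_ = np.linalg.lstsq(D, samples, rcond=None)
            err = np.linalg.norm(samples - D @ codes)   # total reconstruction error
            if err < best_err:
                best_idx, best_err = j, err
        selected.append(best_idx)
    return samples[:, selected]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.standard_normal((64, 200))            # 200 vectorized 8x8 patches
    dictionary = select_dictionary(patches, k=10)
    print(dictionary.shape)                             # (64, 10)
```

Because the atoms are chosen from the samples themselves rather than learned by gradient updates, the resulting dictionary stays interpretable as a set of representative target patches, which is the appeal of a selection-based scheme in a tracking appearance model.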

Original language: English (US)
Article number: 6319318
Pages (from-to): 2968-2981
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 35
Issue number: 12
DOIs
State: Published - 2013

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

Keywords

  • K-selection
  • Sparse representation
  • appearance model
  • dictionary learning
  • tracking
