Low-Rank and Joint Sparse Representations for Multi-Modal Recognition

Heng Zhang, Vishal Patel, Rama Chellappa

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
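The abstract describes imposing low-rank and joint sparse constraints on a multivariate regression and solving the result with the alternating direction method of multipliers (ADMM). As an illustration only, here is a minimal sketch of one such formulation, min over C of ½‖Y − DC‖²_F + λ₁‖C‖_* + λ₂‖C‖₁,₂, solved by ADMM with two auxiliary splits. The dictionary D, penalty weights, splitting scheme, and all function names are assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_l12(X, tau):
    """Row-wise shrinkage: proximal operator of tau * l_{1,2} (joint-sparsity) norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def admm_lowrank_jointsparse(D, Y, lam1=0.1, lam2=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Y - D C||_F^2 + lam1*||C||_* + lam2*||C||_{1,2}
    via ADMM with splits Z1 = C (low rank) and Z2 = C (joint sparse)."""
    n, t = D.shape[1], Y.shape[1]
    C = np.zeros((n, t))
    Z1 = np.zeros_like(C); Z2 = np.zeros_like(C)
    U1 = np.zeros_like(C); U2 = np.zeros_like(C)  # scaled dual variables
    DtY = D.T @ Y
    # C-update solves (D^T D + 2*rho*I) C = D^T Y + rho*(Z1 - U1 + Z2 - U2)
    A = np.linalg.inv(D.T @ D + 2 * rho * np.eye(n))
    for _ in range(iters):
        C = A @ (DtY + rho * (Z1 - U1 + Z2 - U2))
        Z1 = svt(C + U1, lam1 / rho)       # enforce low rank
        Z2 = prox_l12(C + U2, lam2 / rho)  # enforce joint (row) sparsity
        U1 += C - Z1
        U2 += C - Z2
    return C
```

In a multi-modal setting, each modality contributes a block of the dictionary D and of the observations Y, so the shared coefficient matrix C couples information across modalities; the sparse occlusion term mentioned in the abstract would add a third split handled with an elementwise soft-threshold.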

Original language: English (US)
Article number: 7962177
Pages (from-to): 4741-4752
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 26
Issue number: 10
DOI: 10.1109/TIP.2017.2721838
State: Published - Oct 1 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Keywords

  • Multi-modal recognition
  • feature-level fusion
  • joint-sparse representation
  • low-rank representation

Cite this

Zhang, Heng; Patel, Vishal; Chellappa, Rama. Low-Rank and Joint Sparse Representations for Multi-Modal Recognition. In: IEEE Transactions on Image Processing. 2017; Vol. 26, No. 10, pp. 4741-4752.
@article{ca8c49f37ebe43eb9eca44c1712479d0,
title = "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition",
abstract = "We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.",
keywords = "Multi-modal recognition, feature-level fusion, joint-sparse representation, low-rank representation",
author = "Heng Zhang and Vishal Patel and Rama Chellappa",
year = "2017",
month = "10",
day = "1",
doi = "10.1109/TIP.2017.2721838",
language = "English (US)",
volume = "26",
pages = "4741--4752",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "10",

}


TY - JOUR
T1 - Low-Rank and Joint Sparse Representations for Multi-Modal Recognition
AU - Zhang, Heng
AU - Patel, Vishal
AU - Chellappa, Rama
PY - 2017/10/1
Y1 - 2017/10/1
N2 - We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
AB - We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
KW - Multi-modal recognition
KW - feature-level fusion
KW - joint-sparse representation
KW - low-rank representation
UR - http://www.scopus.com/inward/record.url?scp=85021915825&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021915825&partnerID=8YFLogxK
U2 - 10.1109/TIP.2017.2721838
DO - 10.1109/TIP.2017.2721838
M3 - Article
AN - SCOPUS:85021915825
VL - 26
SP - 4741
EP - 4752
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
SN - 1057-7149
IS - 10
M1 - 7962177
ER -