Nonlinear autoassociation is not equivalent to PCA

Nathalie Japkowicz, Stephen José Hanson, Mark A. Gluck

Research output: Contribution to journal › Article › peer-review

139 Scopus citations

Abstract

A common misperception within the neural network community is that, even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
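
The contrast drawn in the abstract can be illustrated with a small experiment. The following is a minimal sketch, not the authors' code: it trains a single-sigmoid-hidden-unit autoassociator by backpropagation on a toy bimodal dataset and compares its reconstruction error with a rank-1 PCA reconstruction. The dataset, network size, learning rate, and number of epochs are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): compare a rank-1 PCA
# reconstruction with a 2 -> 1 (sigmoid) -> 2 autoassociator trained by
# backpropagation on a toy two-cluster (bimodal) dataset.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data drawn from two separated clusters.
X = np.vstack([
    rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[ 2.0,  2.0], scale=0.3, size=(100, 2)),
])
X = X - X.mean(axis=0)  # center the data

# --- PCA: reconstruct from the leading principal component only ---
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]
X_pca = np.outer(X @ pc1, pc1)  # rank-1 linear reconstruction

# --- Nonlinear autoassociator trained with backpropagation ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 1)); b1 = np.zeros(1)
W2 = rng.normal(scale=0.5, size=(1, 2)); b2 = np.zeros(2)
lr = 0.1

for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)      # nonlinear hidden layer
    X_hat = H @ W2 + b2           # linear output layer
    err = X_hat - X               # reconstruction error
    # Backpropagate the mean squared reconstruction error.
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# PCA is confined to a single linear subspace, while the saturating hidden
# unit lets the autoassociator pull reconstructions toward the cluster modes.
print("PCA (rank-1) reconstruction MSE:       ", np.mean((X - X_pca) ** 2))
print("Nonlinear autoassociator reconstruction MSE:", np.mean((X - X_hat) ** 2))
```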

Original language: English (US)
Pages (from-to): 531-545
Number of pages: 15
Journal: Neural Computation
Volume: 12
Issue number: 3
DOIs
State: Published - Mar 2000

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
