Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks?

Cengiz Pehlevan, Anirvan M. Sengupta, Dmitri B. Chklovskii

Research output: Contribution to journal › Article › peer-review



Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
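To make the objective concrete: in the offline setting, similarity matching for dimensionality reduction seeks k-dimensional outputs Y whose Gram (similarity) matrix best matches that of the n-dimensional inputs X, i.e. it minimizes ||XᵀX − YᵀY||²_F. A minimal NumPy sketch (not the paper's derivation; step size, data, and iteration count are illustrative choices) optimizes Y by plain gradient descent and checks that the resulting output Gram matrix recovers the best rank-k approximation of the input Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n-dimensional inputs, T samples, with inflated variance
# along the first coordinate so a dominant subspace exists.
n, T, k = 5, 50, 2
X = rng.normal(size=(n, T))
X[0] *= 5.0

Gx = X.T @ X  # T x T input similarity (Gram) matrix

# Offline similarity matching: minimize ||X^T X - Y^T Y||_F^2 over
# k x T outputs Y by gradient descent (hand-tuned step size).
Y = 0.1 * rng.normal(size=(k, T))
eta = 0.5 / np.linalg.norm(Gx, 2)
for _ in range(2000):
    Y += eta * Y @ (Gx - Y.T @ Y)   # gradient step, up to a factor of 4

# The optimum's Gram matrix equals the best rank-k approximation of Gx
# (top-k eigenpairs of the input Gram matrix).
evals, evecs = np.linalg.eigh(Gx)
Gx_k = (evecs[:, -k:] * evals[-k:]) @ evecs[:, -k:].T
err = np.linalg.norm(Y.T @ Y - Gx_k) / np.linalg.norm(Gx_k)
print(f"relative Gram mismatch: {err:.2e}")
```

Note that only YᵀY is determined by the objective; Y itself is fixed only up to a k×k rotation, which is why the check compares Gram matrices rather than the outputs directly. The paper's contribution is that, after suitable variable substitutions, this global objective decomposes so that each synapse can follow a purely local (Hebbian or anti-Hebbian) update in the online setting.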

Original language: English (US)
Pages (from-to): 84-124
Number of pages: 41
Journal: Neural Computation
Issue number: 1
State: Published - Jan 1 2018

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
