Social learning and distributed hypothesis testing

Anusha Lalitha, Tara Javidi, Anand D. Sarwate

Research output: Contribution to journal › Article › peer-review

45 Scopus citations

Abstract

This paper considers a problem of distributed hypothesis testing over a network. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The marginals of the joint observation distribution conditioned on each hypothesis are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) on each hypothesis based on their local observations, communicate these updates to their neighbors, and then perform a 'non-Bayesian' linear consensus using the log-beliefs of their neighbors. Under mild assumptions, we show that the belief of any node in any wrong hypothesis converges to zero exponentially fast. We characterize the exponential rate of learning, which we call the network divergence, in terms of the nodes' influence in the network and the divergences between the observation distributions. For a broad class of observation statistics, which includes distributions with unbounded support such as Gaussian mixtures, we show that the rate of rejection of a wrong hypothesis satisfies a large deviation principle: the probability of sample paths on which this rate deviates from the mean rate vanishes exponentially fast, and we characterize the rate function in terms of the nodes' influence in the network and the local observation models.
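As a concrete illustration of the two-stage update rule described above, the following is a minimal sketch in Python/NumPy, not the paper's reference implementation. It assumes the 'linear consensus using the log-beliefs' amounts to a weighted geometric average of neighbors' posteriors under a row-stochastic weight matrix; the function and variable names (social_learning_step, W, etc.) are illustrative.

    import numpy as np

    def social_learning_step(beliefs, likelihoods, W):
        """One round of the two-stage update sketched in the abstract.

        beliefs:     (n_nodes, n_hypotheses) current beliefs; rows sum to 1.
        likelihoods: (n_nodes, n_hypotheses) each node's local likelihood
                     f_i(X_i | theta) evaluated at its fresh private observation.
        W:           (n_nodes, n_nodes) row-stochastic weights; W[i, j] > 0
                     iff node j is a neighbor of node i.
        """
        # Stage 1: local Bayesian update on the private observation.
        posterior = beliefs * likelihoods
        posterior /= posterior.sum(axis=1, keepdims=True)

        # Stage 2: 'non-Bayesian' linear consensus on neighbors' log-beliefs,
        # i.e., a weighted geometric average of the neighbors' posteriors.
        new_beliefs = np.exp(W @ np.log(posterior))
        new_beliefs /= new_beliefs.sum(axis=1, keepdims=True)
        return new_beliefs

    # Toy run: 3 nodes, 2 hypotheses, evidence mildly favoring hypothesis 0.
    rng = np.random.default_rng(0)
    W = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])
    beliefs = np.full((3, 2), 0.5)
    for _ in range(50):
        likelihoods = rng.uniform(0.4, 0.6, size=(3, 2))
        likelihoods[:, 0] *= 1.2  # tilt every node's evidence toward theta_0
        beliefs = social_learning_step(beliefs, likelihoods, W)
    print(beliefs)  # all rows concentrate on hypothesis 0

Consistent with the abstract, the belief each node places on the wrong hypothesis in such a run decays exponentially; the paper's network divergence expresses the decay rate through each node's influence in the network together with the Kullback-Leibler divergences between its local observation distributions (in a sketch like this one, a natural proxy for a node's influence is its entry in the left Perron eigenvector of W).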

Original language: English (US)
Article number: 8359193
Pages (from-to): 6161-6179
Number of pages: 19
Journal: IEEE Transactions on Information Theory
Volume: 64
Issue number: 9
DOIs
State: Published - Sep 2018

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Keywords

  • Distributed algorithms
  • large deviation principle
  • message passing
  • rate of learning
