Fast ADMM algorithm for distributed optimization with adaptive penalty

Changkyu Song, Sejong Yoon, Vladimir Pavlovic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Scopus citations

Abstract

We propose new methods to speed up convergence of the Alternating Direction Method of Multipliers (ADMM), a common optimization tool in the context of large-scale and distributed learning. The proposed method accelerates convergence by automatically deciding the constraint penalty needed for parameter consensus in each iteration. In addition, we propose an extension of the method that adaptively determines the maximum number of iterations to update the penalty. We show that this approach effectively leads to an adaptive, dynamic network topology underlying the distributed optimization. The utility of the new penalty update schemes is demonstrated on both synthetic and real data, including an instance of the probabilistic matrix factorization task known as the structure-from-motion problem.
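For orientation, the sketch below shows consensus ADMM with an adaptive penalty on a toy distributed least-squares problem. It uses the standard residual-balancing heuristic (Boyd et al., 2011) as a stand-in, not the penalty update proposed in this paper, and the problem setup and parameter names (rho, mu, tau, n_workers) are illustrative assumptions.

```python
# Minimal sketch: consensus ADMM with a residual-balancing adaptive penalty.
# This is a generic illustration, not the paper's specific penalty-update scheme.
import numpy as np

def consensus_admm_ls(As, bs, rho=1.0, mu=10.0, tau=2.0, iters=200, tol=1e-6):
    """Distributed least squares: minimize sum_i 0.5*||A_i x - b_i||^2 via consensus ADMM."""
    n = As[0].shape[1]
    m = len(As)
    xs = [np.zeros(n) for _ in range(m)]   # local primal variables
    us = [np.zeros(n) for _ in range(m)]   # scaled dual variables
    z = np.zeros(n)                        # consensus variable
    for _ in range(iters):
        # Local x-updates (each worker could solve its own subproblem in parallel).
        xs = [np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * (z - u))
              for A, b, u in zip(As, bs, us)]
        # Consensus z-update (averaging), then dual ascent on the scaled duals.
        z_old = z
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        us = [u + x - z for u, x in zip(us, xs)]
        # Primal and dual residual norms.
        r = np.sqrt(sum(np.sum((x - z) ** 2) for x in xs))
        s = rho * np.sqrt(m) * np.linalg.norm(z - z_old)
        if r < tol and s < tol:
            break
        # Residual balancing: adapt rho to keep r and s within a factor mu of each other.
        if r > mu * s:
            rho *= tau
            us = [u / tau for u in us]     # rescale scaled duals when rho changes
        elif s > mu * r:
            rho /= tau
            us = [u * tau for u in us]
    return z, rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.normal(size=5)
    As = [rng.normal(size=(20, 5)) for _ in range(4)]
    bs = [A @ x_true + 0.01 * rng.normal(size=20) for A in As]
    z, final_rho = consensus_admm_ls(As, bs)
    print("recovered x:", np.round(z, 3), "final rho:", final_rho)
```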

Original language: English (US)
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Publisher: AAAI Press
Pages: 753-759
Number of pages: 7
ISBN (Electronic): 9781577357605
State: Published - 2016
Event: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States
Duration: Feb 12 2016 - Feb 17 2016

Publication series

Name: 30th AAAI Conference on Artificial Intelligence, AAAI 2016

Other

Other: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Country/Territory: United States
City: Phoenix
Period: 2/12/16 - 2/17/16

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
