A globally convergent incremental Newton method

M. Gürbüzbalaban, A. Ozdaglar, P. Parrilo

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

Motivated by machine learning problems over large data sets and distributed optimization over networks, we develop and analyze a new method, the incremental Newton method, for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent for a variable stepsize rule. We further show that, under a gradient growth condition, the convergence rate is linear for both variable and constant stepsize rules. By means of an example, we show that without the gradient growth condition the incremental Newton method cannot achieve linear convergence. Our analysis can be extended to study other incremental methods: in particular, we obtain a linear convergence rate result for the incremental Gauss–Newton algorithm under a variable stepsize rule.
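
As a rough illustration of the kind of update an incremental Newton scheme is built on, the sketch below runs cyclic Newton-type passes over a sum of strongly convex quadratic components with a diminishing stepsize. The component data, the 1/k stepsize, and the within-cycle Hessian accumulation are illustrative assumptions, not the exact update or stepsize rule analyzed in the paper.

    # Minimal sketch of an incremental Newton-type pass, assuming strongly
    # convex quadratic components f_i(x) = 0.5 * (x - a_i)^T Q_i (x - a_i).
    # The 1/k stepsize and the Hessian accumulation are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 5, 20

    # Build m strongly convex components: each Q_i is symmetric positive definite.
    Qs, As = [], []
    for _ in range(m):
        B = rng.standard_normal((d, d))
        Qs.append(np.eye(d) + 0.1 * B @ B.T)
        As.append(rng.standard_normal(d))

    x = np.zeros(d)
    for k in range(1, 51):                # outer cycles
        alpha = 1.0 / k                   # diminishing (variable) stepsize rule
        H = np.zeros((d, d))              # Hessian information gathered in this cycle
        for Q, a in zip(Qs, As):          # one incremental pass over the components
            grad_i = Q @ (x - a)          # gradient of the current component
            H += Q                        # add its Hessian to the running sum
            x -= alpha * np.linalg.solve(H, grad_i)  # Newton-type step on partial sums

    # Exact minimizer of the full sum, for comparison.
    x_star = np.linalg.solve(sum(Qs), sum(Q @ a for Q, a in zip(Qs, As)))
    print("distance to minimizer:", np.linalg.norm(x - x_star))

For quadratic components the accumulated matrix H is exactly the Hessian of the partial sums, which is what makes a single pass with a Newton-type step reasonable; for general strongly convex functions the method instead uses component Hessians evaluated along the trajectory.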

Original language: English (US)
Article number: 11
Pages (from-to): 283-313
Number of pages: 31
Journal: Mathematical Programming
Volume: 151
Issue number: 1
DOIs
State: Published - Jun 22 2015
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Mathematics (all)

Keywords

  • Convex optimization
  • EKF algorithm
  • Gauss–Newton method
  • Incremental methods
  • Newton method
  • Strong convexity
