### Abstract

We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations n is much larger than the number of predictors p, i.e., n ≫ p ≫ 1. We show that in GLMs with random (not necessarily Gaussian) design, the GLM coefficients are approximately proportional to the corresponding ordinary least squares (OLS) coefficients. Using this relation, we design an algorithm that achieves the same accuracy as the maximum likelihood estimator (MLE) through iterations that attain up to a cubic convergence rate and that are at least a factor of O(p) cheaper per iteration than any batch optimization algorithm. We provide theoretical guarantees for our algorithm and analyze its convergence behavior in terms of the data dimensions. Finally, we demonstrate the performance of our algorithm through extensive numerical studies on large-scale real and synthetic datasets, showing that it outperforms several other widely used optimization algorithms.
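The proportionality claim can be made concrete for canonical-link GLMs: under a zero-mean Gaussian-like design, Stein's lemma gives β_OLS ≈ E[ψ''(⟨x, β_GLM⟩)] · β_GLM, so β_GLM = c · β_OLS where the scalar c solves (c/n) Σᵢ ψ''(c⟨xᵢ, β̂_OLS⟩) = 1. Below is a minimal sketch of that two-step recipe for logistic regression (ψ(z) = log(1 + eᶻ)), solving the scalar equation with a plain Newton iteration. The function name `sls_logistic` and all numerical choices (starting point, iteration budget, tolerance) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sls_logistic(X, y, n_iter=50, tol=1e-10):
    """Sketch of a scaled least squares estimator for logistic regression.

    Step 1: fit OLS once (O(n p^2)).
    Step 2: find the scalar c solving (c/n) * sum_i psi''(c * eta_i) = 1,
    where eta_i = <x_i, beta_ols>, via Newton's method; each step is O(n),
    a factor O(p) cheaper than a batch gradient step.
    Assumes (roughly) zero-mean covariates and a canonical link.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # Step 1: OLS direction
    eta = X @ beta_ols                                # projections, fixed hereafter

    c = 1.0                                           # Step 2: scalar Newton iteration
    for _ in range(n_iter):
        s = sigmoid(c * eta)
        psi2 = s * (1.0 - s)                          # psi''(c * eta) for the logistic link
        psi3 = psi2 * (1.0 - 2.0 * s)                 # psi'''(c * eta)
        g = c * psi2.mean() - 1.0                     # root-finding objective g(c)
        dg = psi2.mean() + c * (psi3 * eta).mean()    # g'(c)
        step = g / dg
        c -= step
        if abs(step) < tol:
            break
    return c * beta_ols
```

A quick synthetic check (hypothetical data, not from the paper): with zero-mean Gaussian covariates and moderate signal strength, the scaled OLS estimate approaches the true coefficients as n grows.

```python
rng = np.random.default_rng(0)
n, p = 200_000, 20
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)   # keep the signal moderate
y = rng.binomial(1, sigmoid(X @ beta)).astype(float)
beta_hat = sls_logistic(X, y)                # close to beta for large n
```

In practice a safeguarded scalar root-finder (e.g., Newton with a bisection fallback) would be a more robust replacement for the bare Newton loop.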

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3332-3340 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| State | Published - Jan 1 2016 |
| Event | 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain. Duration: Dec 5 2016 → Dec 10 2016 |

### All Science Journal Classification (ASJC) codes

- Computer Networks and Communications
- Information Systems
- Signal Processing


## Cite this

Scaled least squares estimator for GLMs in large-scale problems. (2016). *Advances in Neural Information Processing Systems*, 3332-3340.