Finite-Sample Risk Bounds for Maximum Likelihood Estimation with Arbitrary Penalties

W. D. Brinda, Jason M. Klusowski

Research output: Contribution to journal › Article

Abstract

The minimum description length two-part coding index of resolvability provides a finite-sample upper bound on the statistical risk of penalized likelihood estimators over countable models. However, the bound does not apply to unpenalized maximum likelihood estimation or procedures with exceedingly small penalties. In this paper, we point out a more general inequality that holds for arbitrary penalties. In addition, this approach makes it possible to derive exact risk bounds of order 1/n for iid parametric models, which improves on the order (log n)/n resolvability bounds. We conclude by discussing implications for adaptive estimation.
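For context, the two-part MDL resolvability bound that the abstract builds on (in the style of Barron and Cover) can be sketched as follows. This is a standard form from the literature, not taken from the paper itself; the divergence used and the absolute constant vary by reference.

```latex
% Sketch of the classical two-part MDL resolvability bound (standard form;
% constants and the exact divergence vary across the literature).
% Setting: X_1,\dots,X_n i.i.d.\ from p_{\theta^*}, countable model \Theta,
% penalty satisfying the Kraft inequality
%   \sum_{\theta \in \Theta} e^{-\mathrm{pen}(\theta)} \le 1.
% Penalized maximum likelihood estimator:
\[
  \hat\theta = \argmin_{\theta \in \Theta}
    \Bigl\{ -\log p_\theta(X_1,\dots,X_n) + \mathrm{pen}(\theta) \Bigr\}.
\]
% Index of resolvability:
\[
  R_n(\theta^*) = \min_{\theta \in \Theta}
    \Bigl\{ D\bigl(p_{\theta^*} \,\|\, p_\theta\bigr)
      + \frac{\mathrm{pen}(\theta)}{n} \Bigr\}.
\]
% Risk bound, with d^2 a Hellinger-type divergence and c an absolute constant:
\[
  \mathbb{E}\, d^2\bigl(p_{\theta^*}, p_{\hat\theta}\bigr) \le c\, R_n(\theta^*).
\]
```

The abstract's point is that the Kraft-type requirement on pen forces penalties of at least codelength size, so the bound above excludes unpenalized or lightly penalized maximum likelihood; the paper's generalization removes that restriction.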

Original language: English (US)
Pages (from-to): 2727-2741
Number of pages: 15
Journal: IEEE Transactions on Information Theory
Volume: 64
Issue number: 4
DOI: 10.1109/TIT.2017.2789214
State: Published - Apr 2018


All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Keywords

  • Penalized likelihood estimation
  • codelength
  • minimum description length
  • redundancy
  • statistical risk

Cite this

@article{91ef26d543b14091ba2706e61112efb3,
title = "Finite-Sample Risk Bounds for Maximum Likelihood Estimation with Arbitrary Penalties",
abstract = "The minimum description length two-part coding index of resolvability provides a finite-sample upper bound on the statistical risk of penalized likelihood estimators over countable models. However, the bound does not apply to unpenalized maximum likelihood estimation or procedures with exceedingly small penalties. In this paper, we point out a more general inequality that holds for arbitrary penalties. In addition, this approach makes it possible to derive exact risk bounds of order 1/n for iid parametric models, which improves on the order (log n)/n resolvability bounds. We conclude by discussing implications for adaptive estimation.",
keywords = "Penalized likelihood estimation, codelength, minimum description length, redundancy, statistical risk",
author = "Brinda, {W. D.} and Klusowski, {Jason M.}",
year = "2018",
month = apr,
doi = "10.1109/TIT.2017.2789214",
language = "English (US)",
volume = "64",
pages = "2727--2741",
journal = "IEEE Transactions on Information Theory",
issn = "0018-9448",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "4"
}

