TY - JOUR

T1 - Shrinking the Covariance Matrix Using Convex Penalties on the Matrix-Log Transformation

AU - Yi, Mengxi

AU - Tyler, David E.

N1 - Funding Information:
Research for both authors was supported in part by the National Science Foundation grants DMS-1407751 and DMS-1812198. Mengxi Yi’s research was also supported in part by the Austrian Science Fund (FWF) under grant P31881-N32 and in part by the Scientific Research Starting Foundation of UIBE.
Publisher Copyright:
© 2020 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

PY - 2020

Y1 - 2020

N2 - For q-dimensional data, penalized versions of the sample covariance matrix are important when the sample size is small or modest relative to q. Since the negative log-likelihood under multivariate normal sampling is convex in Σ⁻¹, the inverse of the covariance matrix, it is common to consider additive penalties which are also convex in Σ⁻¹. More recently, Deng and Tsui, and Yu et al., have proposed penalties which are strictly functions of the roots of Σ and are convex in log Σ, but not in Σ⁻¹. The resulting penalized optimization problems, though, are neither convex in Σ nor in Σ⁻¹. In this article, however, we show these penalized optimization problems to be geodesically convex in Σ. This allows us to establish the existence and uniqueness of the corresponding penalized covariance matrices. More generally, we show that geodesic convexity in Σ is equivalent to convexity in log Σ for penalties which are functions of the roots of Σ. In addition, when using such penalties, the resulting penalized optimization problem reduces to a q-dimensional convex optimization problem on the logs of the roots of Σ, which can then be readily solved via Newton’s algorithm. Supplementary materials for this article are available online.

KW - Geodesic convexity

KW - Newton–Raphson algorithm

KW - Penalized likelihood

UR - http://www.scopus.com/inward/record.url?scp=85092385427&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85092385427&partnerID=8YFLogxK

U2 - 10.1080/10618600.2020.1814788

DO - 10.1080/10618600.2020.1814788

M3 - Article

AN - SCOPUS:85092385427

VL - 30

SP - 442

EP - 451

JO - Journal of Computational and Graphical Statistics

JF - Journal of Computational and Graphical Statistics

SN - 1061-8600

IS - 2

ER -