## Abstract

For q-dimensional data, penalized versions of the sample covariance matrix are important when the sample size is small or modest relative to q. Since the negative log-likelihood under multivariate normal sampling is convex in Σ⁻¹, the inverse of the covariance matrix, it is common to consider additive penalties which are also convex in Σ⁻¹. More recently, Deng and Tsui, and Yu et al., have proposed penalties which are strictly functions of the roots of Σ and are convex in log Σ, but not in Σ⁻¹. The resulting penalized optimization problems, though, are neither convex in Σ nor in Σ⁻¹. In this article, however, we show these penalized optimization problems to be geodesically convex in Σ. This allows us to establish the existence and uniqueness of the corresponding penalized covariance matrices. More generally, we show that geodesic convexity in Σ is equivalent to convexity in log Σ for penalties which are functions of the roots of Σ. In addition, when using such penalties, the resulting penalized optimization problem reduces to a q-dimensional convex optimization problem on the logs of the roots of Σ, which can then be readily solved via Newton’s algorithm. Supplementary materials for this article are available online.
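The reduction described in the abstract can be illustrated with a small sketch. It assumes a ridge-type penalty η·Σᵢ(log dᵢ)² on the log-roots of Σ (in the spirit of Deng and Tsui's proposal, though not necessarily the paper's exact penalty) and uses the fact that, for a penalty depending only on the roots of Σ, the penalized estimate can be taken to share eigenvectors with the sample covariance S. The q-dimensional problem then separates coordinate-wise in xᵢ = log dᵢ, each coordinate being a smooth strictly convex function handled by Newton's method. The function name `penalized_cov` and the parameter `eta` are illustrative, not from the paper.

```python
import numpy as np

def penalized_cov(S, eta=0.5, tol=1e-10, max_iter=100):
    """Illustrative sketch of a penalized covariance estimate with
    penalty eta * sum_i (log d_i)^2 on the log-roots of Sigma.

    Writing S = V diag(l) V^T and restricting Sigma = V diag(d) V^T,
    the objective sum_i [x_i + l_i * exp(-x_i) + eta * x_i^2] with
    x_i = log d_i is convex and separable, so Newton steps are
    coordinate-wise divisions (diagonal Hessian).
    """
    l, V = np.linalg.eigh(S)   # sample roots l_i and eigenvectors
    x = np.log(l)              # initialize at the MLE, d_i = l_i
    for _ in range(max_iter):
        g = 1.0 - l * np.exp(-x) + 2.0 * eta * x   # gradient in x
        h = l * np.exp(-x) + 2.0 * eta             # Hessian diagonal, > 0
        step = g / h
        x -= step                                  # Newton update
        if np.max(np.abs(step)) < tol:
            break
    d = np.exp(x)              # penalized roots
    return V @ np.diag(d) @ V.T
```

With η = 0 the update is a no-op and the sample covariance is returned; as η grows, the log-roots shrink toward 0, i.e., the roots of the estimate shrink toward 1, which is the qualitative behavior of penalties on the logs of the roots.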

| Original language | English (US) |
|---|---|
| Pages (from-to) | 442-451 |
| Number of pages | 10 |
| Journal | Journal of Computational and Graphical Statistics |
| Volume | 30 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2020 |

## All Science Journal Classification (ASJC) codes

- Statistics and Probability
- Statistics, Probability and Uncertainty
- Discrete Mathematics and Combinatorics

## Keywords

- Geodesic convexity
- Newton–Raphson algorithm
- Penalized likelihood