TY - GEN

T1 - The cost-free nature of optimally tuning tikhonov regularizers and other ordered smoothers

AU - Bellec, Pierre C.

AU - Yang, Dana

N1 - Funding Information:
P. C. B.’s research was partially supported by NSF Grants DMS-1811976 and DMS-1945428. D. Y. is supported by NSF Grant CCF-1850743.
Publisher Copyright:
© ICML 2020. All rights reserved.

PY - 2020

Y1 - 2020

N2 - We consider the problem of selecting the best estimator among a family of Tikhonov regularized estimators, or, alternatively, to select a linear combination of these regularizers that is as good as the best regularizer in the family. Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on Q-Aggregation achieves the mean square error of the best estimator, up to a small error term no larger than Cσ², where σ² is the noise level and C > 0 is an absolute constant. Remarkably, the error term does not depend on the penalty matrix or the number of estimators as long as they share the same penalty matrix, i.e., it applies to any grid of tuning parameters, no matter how large the cardinality of the grid is. This reveals the surprising "cost-free" nature of optimally tuning Tikhonov regularizers, in striking contrast with the existing literature on aggregation of estimators where one typically has to pay a cost of σ² log(M), where M is the number of estimators in the family. The result holds, more generally, for any family of ordered linear smoothers; this encompasses Ridge regression as well as Principal Component Regression. The result is extended to the problem of tuning Tikhonov regularizers with different penalty matrices.

AB - We consider the problem of selecting the best estimator among a family of Tikhonov regularized estimators, or, alternatively, to select a linear combination of these regularizers that is as good as the best regularizer in the family. Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on Q-Aggregation achieves the mean square error of the best estimator, up to a small error term no larger than Cσ², where σ² is the noise level and C > 0 is an absolute constant. Remarkably, the error term does not depend on the penalty matrix or the number of estimators as long as they share the same penalty matrix, i.e., it applies to any grid of tuning parameters, no matter how large the cardinality of the grid is. This reveals the surprising "cost-free" nature of optimally tuning Tikhonov regularizers, in striking contrast with the existing literature on aggregation of estimators where one typically has to pay a cost of σ² log(M), where M is the number of estimators in the family. The result holds, more generally, for any family of ordered linear smoothers; this encompasses Ridge regression as well as Principal Component Regression. The result is extended to the problem of tuning Tikhonov regularizers with different penalty matrices.

UR - http://www.scopus.com/inward/record.url?scp=85105241018&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85105241018&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85105241018

T3 - 37th International Conference on Machine Learning, ICML 2020

SP - 723

EP - 732

BT - 37th International Conference on Machine Learning, ICML 2020

A2 - Daumé, Hal

A2 - Singh, Aarti

PB - International Machine Learning Society (IMLS)

T2 - 37th International Conference on Machine Learning, ICML 2020

Y2 - 13 July 2020 through 18 July 2020

ER -