TY - CHAP
T1 - Forecast Evaluation
AU - Cheng, Mingmian
AU - Swanson, Norman R.
AU - Yao, Chun
PY - 2020/1/1
Y1 - 2020/1/1
N2 - The development of new tests and methods used in the evaluation of time series forecasts and forecasting models remains as important today as it has been for the last 50 years. Paraphrasing what Sir Clive W.J. Granger (arguably the father of modern-day time series forecasting) said in the 1990s at a conference in Svinkloev, Denmark, “OK, the model looks like an interesting extension, but can it forecast better than existing models?” Indeed, the forecast evaluation literature continues to expand, with interesting new tests and methods being developed at a rapid pace. In this chapter, we discuss a selected group of predictive accuracy tests and model selection methods that have been developed in recent years, and that are now widely used in the forecasting literature. We begin by reviewing several tests for comparing the relative forecast accuracy of different models in the case of point forecasts. We then broaden the scope of our discussion by introducing density-based predictive accuracy tests. We note that predictive accuracy is typically assessed in terms of a given loss function, such as mean squared forecast error or mean absolute forecast error. Most tests, including those discussed here, are consequently loss function dependent, and the relative forecast superiority of predictive models therefore also depends on the specification of a loss function. In light of this fact, we conclude the chapter by discussing loss-function-robust predictive density accuracy tests that have recently been developed using principles of stochastic dominance.
AB - The development of new tests and methods used in the evaluation of time series forecasts and forecasting models remains as important today as it has been for the last 50 years. Paraphrasing what Sir Clive W.J. Granger (arguably the father of modern-day time series forecasting) said in the 1990s at a conference in Svinkloev, Denmark, “OK, the model looks like an interesting extension, but can it forecast better than existing models?” Indeed, the forecast evaluation literature continues to expand, with interesting new tests and methods being developed at a rapid pace. In this chapter, we discuss a selected group of predictive accuracy tests and model selection methods that have been developed in recent years, and that are now widely used in the forecasting literature. We begin by reviewing several tests for comparing the relative forecast accuracy of different models in the case of point forecasts. We then broaden the scope of our discussion by introducing density-based predictive accuracy tests. We note that predictive accuracy is typically assessed in terms of a given loss function, such as mean squared forecast error or mean absolute forecast error. Most tests, including those discussed here, are consequently loss function dependent, and the relative forecast superiority of predictive models therefore also depends on the specification of a loss function. In light of this fact, we conclude the chapter by discussing loss-function-robust predictive density accuracy tests that have recently been developed using principles of stochastic dominance.
UR - http://www.scopus.com/inward/record.url?scp=85076729583&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076729583&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-31150-6_16
DO - 10.1007/978-3-030-31150-6_16
M3 - Chapter
AN - SCOPUS:85076729583
T3 - Advanced Studies in Theoretical and Applied Econometrics
SP - 495
EP - 537
BT - Advanced Studies in Theoretical and Applied Econometrics
PB - Springer
ER -