
Forecast Tests in MetrixND

January 19, 2017

Out-of-sample tests are a useful tool for seeing how well a model performs with data it hasn't seen before (i.e., data that were not used to estimate the model's coefficients). This is important because a model's performance on out-of-sample observations is a helpful indicator of how well it will forecast. However, truly testing the forecasting power of a dynamic model (e.g., one with an AR(1) error term, a lagged dependent variable, or smoothing) is a bit trickier than testing a static model.

For a static model, testing is simple. You just need to identify the observations to withhold from estimation and make sure that the model residuals for these observations are not used. Estimation proceeds by minimizing the sum of the squared errors for the remaining observations. The resulting coefficients can then be used to compute residuals and summary statistics for the test observations. That’s how it works in MetrixND when you drop a binary variable into the Test box on the model design form. In periods when the binary value is 1.00, the residuals are weighted to zero. The residuals for the test periods are computed, but they are not included in the sum of squared errors. Statistics for these residuals are reported under Forecast Statistics on the MStat tab of the model object.
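To make the mechanics concrete, here is a minimal sketch of a static out-of-sample test in Python. It is purely illustrative and is not MetrixND's implementation; the data, coefficients, and variable names are made up. The model is estimated only on the non-test observations, and the withheld observations are used only to compute out-of-sample statistics.

```python
import numpy as np

# Minimal sketch of a static out-of-sample test (illustrative only).
# Observations flagged by `test_flag` are excluded from estimation and
# used only to compute out-of-sample error statistics.

rng = np.random.default_rng(0)
n = 112
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one driver
y = X @ np.array([10.0, 2.5]) + rng.normal(scale=0.5, size=n)

test_flag = np.zeros(n, dtype=bool)
test_flag[-12:] = True                                   # withhold the last 12 periods

# Estimate coefficients on the non-test observations only.
beta, *_ = np.linalg.lstsq(X[~test_flag], y[~test_flag], rcond=None)

# Residuals for the withheld periods, computed with the estimated coefficients.
resid_test = y[test_flag] - X[test_flag] @ beta
print("Out-of-sample MAPE: %.2f%%" % (100 * np.mean(np.abs(resid_test / y[test_flag]))))
```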

For a dynamic model, life is more complicated. If we simply ignore the test-period residuals during estimation, the resulting test statistics are one-period-ahead statistics. This distinction is clearest when we withhold data in blocks within the estimation range or at the end of estimation (a forecast test). For example, suppose we estimate a model with a lagged dependent variable (Y_(t-1)) using 100 observations, and then test the model on the next 12 observations. In period 101, the first test period, it is fine to use the actual value of the lagged dependent variable (Y_100); that is a one-period-ahead forecast. But in period 102, if we were really forecasting, we would not know the value of Y_101. We need to hide Y_101 and instead use the predicted value Ŷ_101, which makes the result a two-period-ahead forecast. Similarly, in period 103 we would need to hide Y_101 and Y_102, and so on.
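The tiny sketch below illustrates the difference for a model of the form Y_t = a + b*Y_(t-1). The coefficients and values are hypothetical; the point is only that the one-period-ahead test sees the actual lag, while a true multi-period forecast must feed its own prediction back in.

```python
# One-step-ahead vs. true multi-step-ahead forecast for Y_t = a + b * Y_{t-1}.
# Coefficients a, b are assumed already estimated on periods 1..100;
# all numbers here are illustrative.

a, b = 5.0, 0.6
y_actual = {100: 12.0, 101: 12.8, 102: 13.1}   # hypothetical actuals

# One-step-ahead test: period 102 sees the actual lagged value Y_101.
one_step_102 = a + b * y_actual[101]

# True multi-step forecast from origin 100: actuals after period 100 are hidden,
# so the forecast for 102 must use the predicted value for 101.
y_hat_101 = a + b * y_actual[100]
multi_step_102 = a + b * y_hat_101

print(one_step_102, multi_step_102)
```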

In the model objects (regression, neural networks, ARIMA, and smoothing), MetrixND does not hide the Y-data in the case of multi-period test blocks. As a result, the statistics are one-period-ahead test statistics and give no indication of how accuracy degrades for multi-period forecasts.

One of the nice things about the latest release of MetrixND (version 4.7) is that the Forecast Test object has been reworked to allow a true forecast test of dynamic models. The Forecast Test now hides the Y-data from dynamic terms, so you get a real sense of how a dynamic model will forecast. Simply drag and drop the model into the Model box of the Forecast Test object, set the Testing Ends date to the last observation of the series, and set the Testing Begins date to some point before the end of the data series (e.g., 24 months earlier).

The tail end of the Y-data then becomes the test set and is hidden from the model. Using a rolling origin, MetrixND generates a forecast using the selected start and end dates, then adds one observation, re-estimates the model, and generates a new forecast beginning in the period after the newly added observation. It repeats this process until it reaches the last available observation.
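The following Python sketch mimics that rolling-origin scheme for a simple model with a lagged dependent variable. It is an illustration of the idea, not MetrixND's code; the simulated series and model form are assumptions. At each origin the model is re-estimated, the remaining periods are forecast recursively (so predictions stand in for the hidden actuals), and errors are collected by forecast horizon.

```python
import numpy as np

# Rolling-origin forecast test for an illustrative model Y_t = a + b * Y_{t-1}.
# Simulated data; this only demonstrates the evaluation scheme described above.

rng = np.random.default_rng(1)
n, test_len = 124, 24
y = np.empty(n)
y[0] = 20.0
for t in range(1, n):
    y[t] = 5.0 + 0.75 * y[t - 1] + rng.normal(scale=0.5)

errors_by_horizon = {}                        # horizon -> list of forecast errors
for origin in range(n - test_len, n):         # number of observations used to estimate
    # Re-estimate a and b on data through the origin.
    X = np.column_stack([np.ones(origin - 1), y[:origin - 1]])
    a, b = np.linalg.lstsq(X, y[1:origin], rcond=None)[0]

    # Recursive forecast from the origin to the end of the series.
    y_prev = y[origin - 1]
    for h, t in enumerate(range(origin, n), start=1):
        y_prev = a + b * y_prev               # prediction replaces the hidden actual
        errors_by_horizon.setdefault(h, []).append(y[t] - y_prev)

# Out-of-sample accuracy typically degrades as the horizon grows.
for h in sorted(errors_by_horizon):
    print("horizon %2d: MAE = %.3f" % (h, np.mean(np.abs(errors_by_horizon[h]))))
```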

For a dynamic model, this means the model forecasts using the actual Y-data through the last estimation period and must use the predicted values (Ŷ) thereafter. As a result, we get a series of forecast tests that yield one-period-ahead statistics all the way out to n-period-ahead statistics, giving us a real sense of the model's forecasting power. Generally speaking, we would expect the out-of-sample statistics to degrade for longer forecast horizons (e.g., 12 periods ahead vs. 1 period ahead).

In contrast, for a robust static model, we would expect the out-of-sample statistics to remain fairly stable across forecast horizons.

In conclusion, if you want to do an out-of-sample test on a static model, then any of the testing options in MetrixND will fit your needs. But if you want to do a true out-of-sample test on a dynamic model, you should use the Forecast Test object.

By David Simons

Senior Forecast Consultant

David Simons is a Forecast Consultant with Itron’s Forecasting Division. Since joining Itron in 2013, Simons has assisted in the support and implementation of Itron’s short-term load forecasting solutions for GRTgaz, Hydro Tasmania, IESO, New York ISO, California ISO, Midwest ISO, Potomac Electric Power Company, Old Dominion Electric Cooperative, Bonneville Power Administration and Hydro-Québec. He has also assisted Itron’s Forecasting Division in research and development of forecasting methods and end-use analysis. Prior to joining Itron, Simons conducted empirical research, performed operations analysis and data management for a nonprofit, and lectured in economics at San Diego State University while pursuing his master’s degree. Some of his empirical research includes examining the behavioral factors that influence educational attainment in adolescents and the environmental implications of cross-border integration. Simons received a B.A. in Business Economics from the University of California, Santa Barbara and an M.A. in Economics from San Diego State University.
