Using Mean Absolute Error for Forecast Accuracy
Using mean absolute error, CAN helps clients who want to determine the accuracy of industry forecasts. They want to know whether they can trust these forecasts, and to get recommendations on how to apply them to improve their strategic planning process. This post explains how CAN assesses the accuracy of industry forecasts when we don’t have access to the original model used to produce them.
First, without access to the original model, the only way we can evaluate an industry forecast’s accuracy is by comparing the forecasted values to the actual economic activity. This is a backward-looking evaluation, and unfortunately it does not tell us how accurate the forecast will be in the future, which there is no way to test. Thus it is important to understand that we have to assume a forecast will be as accurate as it has been in the past, and that future accuracy can never be guaranteed.
As consumers of industry forecasts, we can test their accuracy over time by comparing the forecasted values to the actual values using three different measures. The simplest measure of forecast accuracy is called Mean Absolute Error (MAE). MAE is simply, as the name suggests, the mean of the absolute errors. The absolute error is the absolute value of the difference between the forecasted value and the actual value. MAE tells us how big an error we can expect from the forecast on average.
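As a minimal sketch of the calculation, here is MAE computed in Python. The forecast and actual values are made-up numbers for illustration only, not figures from a CAN report:

```python
# Illustrative example: Mean Absolute Error (MAE) for a forecast.
# The forecast and actual values are invented purely for demonstration.
forecasts = [102.0, 98.5, 107.3, 110.0]
actuals = [100.0, 101.0, 105.0, 112.5]

# Absolute error for each period: |forecast - actual|
absolute_errors = [abs(f - a) for f, a in zip(forecasts, actuals)]

# MAE is the mean of the absolute errors, in the same units as the series
mae = sum(absolute_errors) / len(absolute_errors)
print(f"MAE: {mae:.2f}")
```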
One problem with MAE is that it is expressed in the units of the series, so the relative size of an error is not always obvious. Sometimes it is hard to tell a big error from a small error. To deal with this problem, we can express the mean absolute error in percentage terms. Mean Absolute Percentage Error (MAPE) allows us to compare forecasts of different series on different scales. For example, we could compare the accuracy of a forecast of the DJIA with a forecast of the S&P 500, even though these indexes are at different levels.
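A sketch of the same idea in percentage terms, again with invented numbers rather than real index data:

```python
# Illustrative example: Mean Absolute Percentage Error (MAPE),
# using the same made-up forecast and actual values as above.
forecasts = [102.0, 98.5, 107.3, 110.0]
actuals = [100.0, 101.0, 105.0, 112.5]

# Each period's error as a fraction of the actual value
percentage_errors = [abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)]

# MAPE is the mean of those fractions, expressed as a percentage,
# which makes forecasts of differently scaled series comparable
mape = 100 * sum(percentage_errors) / len(percentage_errors)
print(f"MAPE: {mape:.1f}%")
```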
Since both of these methods are based on the mean error, they may understate the impact of big but infrequent errors. If we focus too much on the mean, we will be caught off guard by the occasional large error. To adjust for large, rare errors, we calculate the Root Mean Square Error (RMSE). By squaring the errors before taking their mean, and then taking the square root of that mean, we arrive at a measure of error size that gives more weight to large but infrequent errors than the mean does. We can also compare RMSE and MAE to determine whether the forecast contains large but infrequent errors: the larger the difference between RMSE and MAE, the more inconsistent the error size.
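A rough sketch of that comparison, with a deliberately large miss in the last period so the gap between RMSE and MAE shows up (the numbers are invented, not taken from a CAN report):

```python
# Illustrative example: Root Mean Square Error (RMSE) compared with MAE.
# Squaring the errors gives extra weight to the occasional large miss.
import math

forecasts = [102.0, 98.5, 107.3, 110.0, 95.0]
actuals = [100.0, 101.0, 105.0, 112.5, 115.0]  # last point is a large miss

errors = [f - a for f, a in zip(forecasts, actuals)]

mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))

print(f"MAE:  {mae:.2f}")   # about 5.86 for these numbers
print(f"RMSE: {rmse:.2f}")  # about 9.18; the gap signals an inconsistent error size
```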
While these methods have their limitations, they are simple tools for evaluating forecast accuracy that can be used without knowing anything about the forecast except its past values and the corresponding actual values.
Finally, even if you know the accuracy of the forecast, you should be mindful of the assumption we discussed at the beginning of the post: just because a forecast has been accurate in the past does not mean it will be accurate in the future. Professional forecasters update their methods to try to correct for past errors, but these corrections may make the forecast less accurate. There is also always the possibility of an event occurring that the model behind the forecast cannot anticipate, a black swan event. When this happens, you don’t know how big the error will be. Errors associated with these events are not the typical errors that RMSE, MAPE, and MAE try to measure. So, while forecast accuracy can tell us a lot about the past, remember these limitations when using forecasts to predict the future.
To learn more about forecasting, download our eBook, Predictive Analytics: The Future of Business Intelligence.
Interested in seeing how we can help you with forecasting?