So far I have looked into understanding time series and then forecasting with Prophet, SARIMA, Greykite, and Auto-ML on AWS and GCP. This final part looks at how to evaluate and explain the accuracy of forecasts.
As with all predictions, explaining and establishing trust in forecasts is important. Covering this in depth is beyond my scope here, but just as we use adjusted R-squared for regression or AUC for classification, what metric can we use for forecasts?
An intuitive way to think about this is to see how closely predicted values (ŷ) match the actual observed values (y) from a test set. This is exactly what I eyeballed and commented on in all the previous posts. The difference between y and ŷ is the "residual", and RMSE (root mean square error) is by far the easiest metric to explain and apply (see more details here). RMSE is the square root of the average of the squared residuals: RMSE = √( ∑ (ŷ − y)² / n ). Because we square and then take the square root, the result is on the same scale as your values, and the larger the number, the less accurate the predictions. Of course, as with any average of squared terms, outliers get amplified. So an RMSE of 84.79 indicates that, on average, the forecast values are off by roughly that much from the observed values. In my sample data, values range from 200 to 1400, so an average error of about 85 seems acceptable.
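As a quick sketch of how this computation looks in practice, here is RMSE in plain NumPy. The y and ŷ arrays below are made-up illustrative numbers, not my actual sample data:

```python
import numpy as np

# Hypothetical observed values (y) and forecasts (y_hat) from a hold-out test set.
y = np.array([210.0, 480.0, 950.0, 1400.0, 730.0])
y_hat = np.array([260.0, 420.0, 1010.0, 1310.0, 800.0])

# RMSE = sqrt( mean( (y_hat - y)^2 ) )
residuals = y_hat - y
rmse = np.sqrt(np.mean(residuals ** 2))
print(round(rmse, 2))
```

Because the residuals are squared before averaging, a single large miss (say, one forecast off by 500) would dominate the result, which is the outlier amplification mentioned above.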
That is it for now. As I get opportunities to apply this to real problems in production, I will share more meaningful learnings!
If these topics interest you, reach out to me; I would appreciate any feedback. If you would like to work on such problems, you will generally find open roles as well! Please refer to LinkedIn.