Here’s a question that we get all the time: what forecast accuracy should we target? The answer varies widely across industries and organizations; there is no single “optimal” forecast accuracy (other than 100%, which is not attainable in practice). The one universal answer is that forecast accuracy can be considered “good” when it is an improvement over the accuracy of the last forecast cycle, and the only way to determine this is by tracking error over time using consistent metrics.
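To make “tracking error with consistent metrics” concrete, here is a minimal sketch of one common metric, mean absolute percent error (MAPE), applied to two consecutive forecast cycles. Forecast Pro TRAC computes its accuracy measures internally; the function and all numbers below are invented purely for illustration.

```python
# Illustrative only: MAPE is one common, consistent metric for
# comparing forecast accuracy across cycles. The data are made up.

def mape(actuals, forecasts):
    """Mean absolute percent error, expressed in percent."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

actuals = [120, 135, 128, 140]
last_cycle = [110, 150, 120, 130]   # forecasts from the previous cycle
this_cycle = [118, 138, 125, 136]   # forecasts from the current cycle

# Accuracy is "good" when error improves cycle over cycle.
print(f"last cycle MAPE: {mape(actuals, last_cycle):.1f}%")
print(f"this cycle MAPE: {mape(actuals, this_cycle):.1f}%")
```

The absolute numbers matter less than the trend: as long as the same metric is computed the same way every cycle, a falling error means the process is improving.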
The Forecast Archive
When you save a forecast project, Forecast Pro TRAC stores both the statistical forecasts and the final (i.e., potentially adjusted) forecasts in the project’s database. This means that if you have been updating your project and forecasts every planning period (for example, every month), then in addition to the current month’s forecast, your project’s database will contain forecasts that were generated in previous months. This record of previously generated forecasts is referred to as the forecast archive.
Maintaining a forecast archive allows you to track your forecast accuracy by comparing your forecasts to what actually happened.
To illustrate forecast accuracy measurement and tracking in Forecast Pro TRAC, we will use an example drawn from the Forecast Pro TRAC User’s Guide, which provides detailed tutorial exercises using sample data included with the program. The example comes from a company called ABC Bakery. The screenshot below shows a 12-month forecast for an individual item sold to a specific customer.
Notice that for the current forecast period, the historic data begins in mid-2008 and ends in June 2012, and that the first forecast period is July 2012. Forecast Pro TRAC refers to the most recent historic data period used to generate a forecast as the forecast origin. Thus, for the current forecast, the forecast origin is June 2012.
In addition to the current forecast, the forecast project for ABC Bakery contains archived forecasts for the past 12 forecast periods—that is, every forecast generated and stored since June 2011. This archive allows us to measure accuracy at horizons of up to 12 months.
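Conceptually, an archive like ABC Bakery’s pairs each saved forecast with the origin it was generated from. The sketch below is not Forecast Pro TRAC’s actual storage format; the dictionary layout, helper function, and values are all hypothetical, shown only to make the origin/lead-time relationship concrete.

```python
# A conceptual sketch of a forecast archive (not the product's actual
# storage format): each archived forecast is keyed by its forecast
# origin, the most recent historic period used to generate it.
archive = {
    "2012-05": [510, 495, 520],   # forecasts for Jun, Jul, Aug 2012
    "2012-04": [500, 505, 490],   # forecasts for May, Jun, Jul 2012
}

def archived_forecast(origin, lead_time):
    """Return the forecast made at `origin` for `lead_time` periods ahead."""
    return archive[origin][lead_time - 1]

# The one-month-ahead forecast made with a May 2012 origin targets June 2012.
print(archived_forecast("2012-05", 1))   # 510
```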
The screenshot below shows one of the primary accuracy tracking reports available in Forecast Pro TRAC. Due to its cascading appearance, this style of report is often referred to as a waterfall report. The report compares what we forecasted to what actually happened, and is therefore based on two key elements—actual demand history and archived forecasts for the periods being analyzed.
The actual demand history is shown across the top, in the green-shaded row. Note that the date labels indicate that the actual history shown spans the periods January 2012 through the most recent historic period, June 2012 (although the project’s archive contains 12 months of archived forecasts, the report shown has been customized to focus on the last 6 months).
The next six rows in the report display the forecasts generated for these periods from different forecast origins. Thus, the row labeled 2011-Dec displays the forecasts generated six months ago when the forecast origin was December 2011 and the first forecast period was January 2012. The row labeled 2012-May displays the forecast generated last month when the forecast origin was May 2012 and the first forecast period was June 2012, etc.
The waterfall report allows you to highlight different lead times. A lead time refers to the number of periods ahead of the forecast origin for which the forecast was made. Thus a one-month-ahead forecast has a lead time of 1, a two-month-ahead forecast has a lead time of 2, etc.
Notice that in our example the forecasts with a lead time of 1 are all shaded in blue, the forecasts with a lead time of 3 are all shaded in maroon, and the forecasts with a lead time of 6 are all shaded in yellow. Focusing on the diagonals in the waterfall report allows you to look at your overall performance for a specific lead time. The bottom portion of the report displays cumulative statistics for different lead times.
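The diagonal logic can be sketched in a few lines: collect every archived forecast made at a given lead time, match each to its actual, and average the errors. This mirrors what the waterfall’s highlighted diagonals and cumulative statistics show; the data structures, function name, and numbers below are all invented for the example, not taken from the product.

```python
# Illustrative sketch of the waterfall's lead-time diagonals: for a
# given lead time, gather every matching forecast/actual pair from a
# (hypothetical) archive and average the absolute percent errors.
actuals = {"2012-04": 500, "2012-05": 512, "2012-06": 498}
archive = {
    "2012-03": {"2012-04": 505, "2012-05": 520, "2012-06": 510},
    "2012-04": {"2012-05": 508, "2012-06": 502},
    "2012-05": {"2012-06": 495},
}
months = ["2012-03", "2012-04", "2012-05", "2012-06"]

def lead_time_mape(lead):
    """Average absolute percent error across all lead-`lead` forecasts."""
    errs = []
    for origin, fcsts in archive.items():
        idx = months.index(origin) + lead       # period `lead` ahead of origin
        if idx < len(months):
            target = months[idx]
            if target in fcsts and target in actuals:
                errs.append(abs(actuals[target] - fcsts[target]) / actuals[target])
    return 100 * sum(errs) / len(errs)

for lead in (1, 2):
    print(f"lead time {lead}: MAPE {lead_time_mape(lead):.1f}%")
```

Separating accuracy by lead time matters because near-term forecasts are usually more accurate than long-range ones; lumping all lead times together can hide a problem at the horizon your planning actually depends on.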
The screenshot above shows the Tracking Report Settings dialog box. This dialog box is accessed via the tracking report context menu (invoked by right-clicking in the tracking report window). Notice that the report can be designed to display accuracy measures for either the statistical forecast or the adjusted forecast. This allows you to determine if your adjustments are adding value. In addition, the report can be designed to display the forecast values, the unit errors, or the percent errors for each forecast vs. actual comparison. The waterfall report can be further customized by adjusting the number of periods to measure against (“Periods to display” field) and by adjusting which lead times are highlighted (“Lead time(s)” list).
Waterfall reports are generated by Forecast Pro TRAC for every forecast level in your hierarchy (e.g., total, group, sub-group, item, etc.). When reviewing tracking reports, you’ll often want to concentrate on those areas where forecast accuracy has fallen outside of an acceptable range. Forecast Pro TRAC offers exception reporting, allowing you to easily and automatically identify problem areas. Exception reporting will be the subject of Tips & Tricks in the next installment of the series.
If you are interested in hands-on training, Forecast Pro product training workshops are available both via the Web and on-site.
For a deeper understanding of various forecasting methodologies and how they are implemented in Forecast Pro, attend our comprehensive forecasting seminar in Boston in May.