This blog is the third piece of a three-blog series. The first two can be found here:
Many who read the second blog commented that it looked closely related to the concept of Forecast Value Add (FVA). They were right, so let us say a few words about that here.
When talking about forecast value add, it is important to give credit to Michael Gilliland, who originated the concept and has worked diligently to popularize it among practitioners.
Mr. Gilliland uses this concept in the context of a multi-level forecasting process. He advocates using this analysis to evaluate how much each step improves forecast accuracy, or in other words, how much value each step adds to the forecast. As an example, consider the following three-step process.
- Naïve forecast
  - A forecast from simple methods such as same-as-last-period, same-period-last-year, 3-period average, 12-period average, etc.
- Statistical forecast
  - A forecast calculated by a demand planning system using advanced forecasting algorithms, run in a batch job or at the click of a button, without much effort from the planner.
- Collaborative forecast overrides
  - A forecast generated from human input, which overrides the previously calculated statistical forecast.
His main hypothesis (I am paraphrasing here) is that if a subsequent step does not add value by increasing forecast accuracy, then perhaps that step is not worth doing.
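The step-by-step evaluation above can be sketched in a few lines of code. This is a minimal illustration, not a tool from the blog: the demand history and the three forecasts are invented numbers, and MAPE is used as the accuracy measure purely as one common choice.

```python
# Minimal sketch of a Forecast Value Add (FVA) comparison across the three
# process steps. All figures below are made-up illustration data.

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals       = [100, 120, 110, 130, 125, 115]
naive         = [ 95, 100, 120, 110, 130, 125]   # e.g. same-as-last-period
statistical   = [102, 115, 108, 124, 128, 118]   # from the planning system
collaborative = [101, 118, 109, 131, 126, 114]   # after human overrides

errors = {name: mape(actuals, f) for name, f in
          [("naive", naive), ("statistical", statistical),
           ("collaborative", collaborative)]}

# FVA of a step = error of the previous step minus error of this step;
# a negative number means the step made the forecast worse.
fva_statistical   = errors["naive"] - errors["statistical"]
fva_collaborative = errors["statistical"] - errors["collaborative"]

print(f"Naive MAPE:         {errors['naive']:.1f}%")
print(f"Statistical MAPE:   {errors['statistical']:.1f}%  (FVA {fva_statistical:+.1f} pts)")
print(f"Collaborative MAPE: {errors['collaborative']:.1f}%  (FVA {fva_collaborative:+.1f} pts)")
```

With these numbers both steps show a positive FVA; a step with a negative FVA is the candidate for removal under Mr. Gilliland's hypothesis.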
Our focus in this blog series has been to establish forecast accuracy targets based on this line of thinking. In very general terms, the goal should be to add value to the business through the forecasting process. A good forecasting process creates value in several ways: an increase in forecast accuracy (or the corresponding decrease in forecast error), gains in data management, better supply-demand balancing, and the ability to avoid really bad surprises through fast collaborative action. In the previous blog, however, we focused on forecast value add and used it to create a minimum acceptable forecast accuracy target. Now we will take that a step further and talk about ways of improving it.
It could be that your data does not show any repeatable patterns or trends that can be projected into the future. Or maybe it does, but it also tracks well with some external factor such as the price of oil, the GDP of the USA or China, or the number of housing starts. If so, you have a brand-new way to improve your forecast accuracy: study these external, possibly macro-economic, hopefully leading indicators and use the relationship to improve the forecast results. Tools such as LIFe from our partner Solventure can do the heavy lifting for you and show which leading indicators can help with your data.
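The idea can be sketched as a simple lagged regression. This is only an illustration under invented assumptions: the indicator series, the 2-period lag, and the demand figures are all made up, and a plain least-squares fit stands in for the kind of analysis a tool like LIFe would automate.

```python
# Sketch: forecasting demand from a lagged leading indicator.
# All data and the 2-period lag are invented for illustration.

def ols(x, y):
    """Ordinary least-squares fit y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

indicator = [50, 52, 55, 53, 58, 60, 62, 61]    # e.g. housing starts, thousands
demand    = [101, 105, 110, 107, 115, 119]      # starts 2 periods after indicator

LAG = 2
x = indicator[:-LAG]   # indicator values LAG periods before each demand point
a, b = ols(x, demand)

# Because the indicator leads demand, its most recent values give a
# forecast for the next LAG periods.
forecast = [a + b * xi for xi in indicator[-LAG:]]
print(f"fit: demand ~ {a:.1f} + {b:.2f} * indicator(t-{LAG})")
print("forecast for next 2 periods:", [round(f, 1) for f in forecast])
```

In practice one would test many candidate indicators and lags and keep only those with a genuine, stable relationship to the demand history.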
One can also lean on human forecasters to help improve accuracy. They can often provide very good information to the forecasting process because of what they have seen or heard in the marketplace. Perhaps a new law is about to stop a product from being used altogether. Perhaps the competition is taking a one-month shutdown, which will move demand to your firm. The forecasting engine has no access to this type of information, but a human forecaster can add to forecast accuracy by using it to update the forecast accordingly. The net result should be a much better forecast.
Another technique to add to accuracy (or at least to catch howling errors) is to cross-check against a top-down forecast. Let us use an example. Say one is trying to forecast demand for fertilizers. One could produce a bottom-up number, be it based on historical trends and patterns, external factors, or human input, and arrive at a result. The top-down approach here would be to look at the total land area under cultivation, multiply it by the percentage market share, and check the reasonableness of all the other forecasts against it.
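The fertilizer cross-check amounts to a few lines of arithmetic. Every figure below is invented for illustration, and an assumed fertilizer application rate per hectare is added (not mentioned in the example above) to convert cultivated area into tonnage.

```python
# Sketch of the top-down cross-check for the fertilizer example.
# All figures, including the application rate, are invented assumptions.

land_area_ha     = 2_000_000   # total land under cultivation, hectares
application_rate = 0.15        # tonnes of fertilizer per hectare (assumed)
market_share     = 0.20        # our share of the fertilizer market

top_down_tonnes = land_area_ha * application_rate * market_share

# Bottom-up forecast: sum of per-product forecasts from trends,
# external factors, or human input.
bottom_up_tonnes = sum([22_000, 18_500, 12_000, 8_200])

ratio = bottom_up_tonnes / top_down_tonnes
print(f"top-down:  {top_down_tonnes:,.0f} t")
print(f"bottom-up: {bottom_up_tonnes:,.0f} t  ({ratio:.0%} of top-down)")
if not 0.8 <= ratio <= 1.2:
    print("Bottom-up deviates more than 20% from top-down; investigate.")
```

The 20% tolerance is arbitrary; the point is simply that a bottom-up number far outside the top-down envelope deserves a second look before it drives supply decisions.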
So, let us come full circle: how forecastable is your data? It can depend on the patterns available in the historical information. It can also depend on how well the data relates to external leading factors, and on your access to business-knowledgeable users. If one can use techniques that a computer can run automatically, then you can get to a good forecast very efficiently. But efficient and effective are two different things. By deploying some of the less efficient (but effective) techniques, such as human input, one can still improve the forecast.
We will cover a lot of these in our upcoming webinar…