In a recent discussion on LinkedIn, a question was asked: Is there a disconnect between academia and the business world when it comes to forecasting? As you can imagine, this is a matter of checking the record and reporting on what one finds in the data. Since I have neither the data nor the resources to conduct this research, it is somewhat of an overreach for me to comment on it. However, I will try to present what I see in the marketplace to the best of my abilities in this blog post.
There are three types of research on forecasting that come to my mind. (There are probably more types of research out there on the subject that I do not know about.) At a high level, these are:
- Statistics- or mathematics-focused research
  - This in itself has many subfields.
  - For the most part, this applies to statistical forecasting done using computer software.
- Behavior-focused research
  - For the most part, this applies to collaborative forecasting done by people.
- Business-focused research
  - This is where researchers study the impact of better (or worse) forecasting practices on the performance of the business.
For the most part, statistical research is way ahead of actual, everyday practice in business. There is usually a wide gap between research and the first software offering that uses that research.
Behavioral research fares only slightly better, especially these days, as there is much more awareness of this way of thinking. It seems to have reached a critical mass whereby every other book seems to touch on how the human brain works and how it affects the way we do things. With this awareness, I see a lot of discussion and some accommodation in the forecasting process. I would say the gap here is just a bit narrower than the one above.
The third type of research is the most relevant to this discussion. It seems that this type of research ought to make the connection between theory and practice, and clearly point out the benefits. I think for the most part, this has been done over the years by way of measuring forecast accuracy improvements.
Forecast accuracy improvements are fine as far as they go. However, the business world is always interested in the Return on Investment (ROI). It is here that I see the biggest gap. The types of questions I see practitioners asking include:
- What is a 1% improvement in forecast accuracy worth to a business?
- At what level is this 1% accuracy improvement (meaning 1% improvement at the overall business level versus 1% improvement at the SKU-Customer level)?
- At what time lag is this 1% accuracy improvement (meaning a forecast created 3 months in advance versus 1 month in advance)? This matters especially for companies with long lead times.
- Is the value the same if the starting forecast accuracy is 20%? 40, 60, 80, 90%? I imagine the improvement is easier to achieve if the starting forecast accuracy is very low.
- What is the average/max improvement one should expect using a commercial software package?
- Can the above be correlated to a forecastability score? Theoretically, more forecastable data should be easier to improve on.
- What is the expected improvement from collaborative input from people in the field such as Sales reps?
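To make the second question above concrete, here is a minimal sketch of why "1% improvement" is ambiguous without a level. The data, and the choice of WMAPE (weighted mean absolute percentage error) as the metric, are my own illustrative assumptions, not something from the discussion.

```python
def wmape(actuals, forecasts):
    """Weighted MAPE: sum of absolute errors divided by sum of actuals."""
    abs_err = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return abs_err / sum(actuals)

# Hypothetical monthly actuals and forecasts for two SKUs
actual_a = [100, 120, 80, 110]
fcst_a   = [110, 100, 95, 100]
actual_b = [50, 40, 70, 60]
fcst_b   = [40, 55, 60, 70]

# SKU-level error: measured per SKU-month, then pooled
sku_wmape = wmape(actual_a + actual_b, fcst_a + fcst_b)

# Aggregate-level error: opposite SKU errors cancel after summing
total_actual = [a + b for a, b in zip(actual_a, actual_b)]
total_fcst   = [a + b for a, b in zip(fcst_a, fcst_b)]
agg_wmape = wmape(total_actual, total_fcst)

print(f"SKU-level WMAPE:       {sku_wmape:.1%}")   # ~15.9%
print(f"Aggregate-level WMAPE: {agg_wmape:.1%}")   # ~1.6%
```

Because SKU-level errors partly cancel when summed, the aggregate number looks far better than the detailed one, so a claimed "1% improvement" is worth very different amounts depending on which level it was measured at.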
These are the types of questions where I see interest from business folks. Even when a practitioner has an interest in the details behind the science, they still need to know the answers to these types of questions so they can get project approval from their management.
I see very limited work along these lines coming out of academia. A lot of the work in this area is done by software vendors, which is not always considered a trusted source, for obvious reasons. In my next posts, I will try to pull together some of this material. In the meantime, I hope someone in academia will read this post and take up this type of research. I know many people who would be very interested.
What do you think? Let us know via comments.