You have a favorite forecast accuracy metric (or metrics) that you have been using within the organization for a while, and now you think you are ready to bring it to the Sales and Operations Planning (S&OP) meeting as a Key Performance Indicator (KPI) of your demand planning process. But you are not sure exactly how to go about reporting forecast accuracy to the attendees. In this post, I will address this question.
I saw this news article on CNN (here) about planet Earth’s bigger, older cousin. Quite an interesting discovery if you ask me. However, it got me thinking about the family tree of Mean Absolute Percent Error (MAPE), a subject that I am a little bit familiar with. A few weeks ago, I wrote about the two sides of the MAPE coin. After the post, I got a lot of feedback about the varying ways practitioners and researchers have modified the MAPE. For me, these variations represent the family tree of MAPE 🙂
I spent some time discussing MAPE and WMAPE in prior posts. In this post, I will discuss Forecast BIAS. Forecast BIAS can be loosely described as a tendency to either over-forecast (meaning, more often than not, the forecast is more than the actual) or under-forecast (meaning, more often than not, the forecast is less than the actual). Now, there are many reasons why such bias exists, including systemic ones. In this blog, I will not focus on those reasons. Instead, I will talk about how to measure these biases so that one can identify whether they exist in the data.
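One simple way to measure this tendency is the mean signed error: over several periods, average the difference between forecast and actual. A minimal sketch (my own illustration, not a specific Arkieva implementation; the `forecast` and `actual` lists are made-up example data):

```python
def forecast_bias(forecast, actual):
    """Mean signed error across periods.

    Positive => tendency to over-forecast; negative => tendency to under-forecast.
    """
    errors = [f - a for f, a in zip(forecast, actual)]
    return sum(errors) / len(errors)

# Example: three of four periods are over-forecast, so the bias is positive.
forecast = [110, 105, 120, 98]
actual = [100, 100, 100, 100]
print(forecast_bias(forecast, actual))  # 8.25
```

Because positive and negative errors offset each other here (unlike in MAPE, which takes absolute values), a result near zero suggests an unbiased forecast even if individual periods miss badly.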
In our line of work at Arkieva, when we ask business folks, “What is your forecast accuracy?”, we can get, depending on whom we ask in the same business, a full range of answers from 50% (or lower) to 95% (or higher). How is this possible? Imagine a management team being given this range of numbers for the same metric. I am sure they will not be happy. In this blog post, we will consider this question and suggest ways to report accuracy so management gets a realistic picture of this important metric.
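One common reason for such divergent answers is the level of aggregation at which accuracy is measured: item-level errors partly cancel when demand is rolled up. A hypothetical two-item illustration (the numbers are invented for the example):

```python
# Two items, each forecast at 50 units; actuals split 30 / 70.
items_forecast = [50, 50]
items_actual = [30, 70]

# Item-level MAPE: each item misses by 20 units, so accuracy looks poor.
item_mape = 100 * sum(
    abs(f - a) / a for f, a in zip(items_forecast, items_actual)
) / len(items_actual)

# Aggregate MAPE: 100 forecast vs. 100 actual, so accuracy looks perfect.
total_mape = 100 * abs(sum(items_forecast) - sum(items_actual)) / sum(items_actual)

print(round(item_mape, 1), total_mape)  # 47.6 0.0
```

Both numbers are “the” forecast accuracy; they simply answer different questions, which is why reporting should always state the level (item, family, total) and the basis of the calculation.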
Key Points on MAPE: Mean Absolute Percent Error (MAPE) is a useful measure of forecast accuracy when used appropriately. Because of its limitations, one should use it in conjunction with other metrics. While a point value of the metric is good, the focus should be on the trend line to ensure that the metric is improving over time.
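For reference, a minimal sketch of the MAPE calculation itself (one of the limitations mentioned above is visible right in the formula: it is undefined whenever an actual is zero, which this sketch does not attempt to handle):

```python
def mape(forecast, actual):
    """Mean Absolute Percent Error, returned as a percentage.

    Assumes every actual is nonzero; division by a zero actual is a
    well-known weakness of this metric.
    """
    n = len(actual)
    return 100 * sum(abs(f - a) / abs(a) for f, a in zip(forecast, actual)) / n

# Example: misses of +10%, -10%, and +5% against actuals of 100.
print(round(mape([110, 90, 105], [100, 100, 100]), 2))  # 8.33
```

Tracking this value period over period, rather than quoting a single point value, is what lets you see whether the process is actually improving.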
Sales is a fast-paced business; people who aren’t focused on the here and now usually lose out on sales. As mentioned in Jelle’s blog, salespeople would rather be out in the field selling, which is why sitting in the office forecasting feels to them like a waste of their time. Yet forecast inputs from sales teams, when gathered appropriately, can increase the accuracy of the forecast by 10% – 20%. Here are 8 ways to encourage sales teams to provide forecast input.
In our last post, Sujit discussed the importance of gathering input from participants as a way to mitigate biases. In this post, I will explain why the sales team should be required participants and what their influence is on the forecasting process.
Recently, someone forwarded a Live Science web link to me. The web page had information on how human brains are able to read words and make sense of them even when the letters are jumbled up. The website suggested that it was possible for a human brain to make sense of the words as long as the first and last letters were correct.
Developing as accurate a forecast as possible is very important in running a business. Demand planners spend countless hours trying to create a better forecast so that they can help their company be more efficient. A key ingredient in the creation of the final forecast is the forecast generated by a computer program, which is based on statistics. In this blog post, we will focus on creating a better statistical forecast.