Can leveraging Deep Learning for defining Inventory control systems be lucrative ($$$$)?

Important note: I have done my best to translate everything into a “business-friendly” format, but there is some technical Machine Learning and Deep Learning jargon towards the end.

The dataset used was from a very large organization with millions of Location-SKU combinations, so the efficiency gains realized here are not a typical representation of what others might see.

Before we go deeper: A quick summary of my actual experiment

At one point in my career, I worked on a project developing an Inventory control model for Grocery distribution. I still had that dataset with me and had some bandwidth once the corona crisis escalated. So I decided to use the Deep Learning expertise I have accumulated over the last 2 years to experiment with the same dataset.

I had the original baseline (pre-model) data and the efficiency % from the conventional model. That gave me the opportunity to compare the traditional policy-determination approach to my Deep Learning approach. BTW, I tried a clustering approach as well, but it did not work the way I expected 😁

The results I obtained were fascinating. For organizations that are good candidates, this could mean an Inventory reduction worth millions ($$$). I am sharing an overview of that approach in this article.

So how exactly do organizations define Inventory policies?

Back in 2016, I wrote this article on this blog, describing how organizations generally define Inventory policies.

Defining Inventory Policies for your ABC Product Groups-A Cheat Sheet

As the article suggests, and as you would know from your own experience, the starting point for defining these policies is generally ABC classification. Other classifications exist, but ABC is the most popular. The fact is that a significant percentage of companies still use ABC classification to categorize, and subsequently optimize, inventory.

However, from my perspective, this very ABC classification approach to defining policies could be the source of suboptimal Inventory policies.

ABC analysis was never optimal; it was a compromise

When ABC analysis was initially introduced, technology and computing power were nowhere near where they are now. Large companies had hundreds of thousands, or even millions, of SKU combinations, making it impossible to identify a suitable policy for every individual SKU-Location (which is still neither practical nor advisable to do, by the way).

So ABC classification was introduced as a workaround. The fact is, it was an effective approach for its time.

ABC classification provides a way to simplify an SKU portfolio and make safety stock calculations more manageable. The methodology for double ABC classification, done using a 3×3 matrix, is shown in the illustration below; a small code sketch of the mechanics follows the bullet list.

[Illustration: double ABC classification, 3×3 matrix]

  • AA are the most profitable products which are sold frequently.
  • CC are the least profitable products which are rarely sold.
  • AC are profitable products which are rarely sold (and often irregularly sold too).
  • CA are products which are sold all the time, but do not generate a lot of money.
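For readers who want to see the mechanics, here is a minimal sketch of double ABC classification in Python. The 80/15/5 cutoffs, the two ranking dimensions (revenue and order frequency), and the four toy SKUs are my illustrative assumptions; with only a handful of items the Pareto cutoffs behave more crudely than they would on a portfolio of thousands.

```python
# Minimal sketch of double ABC classification. Cutoffs (80/15/5) and
# dimensions (revenue, order frequency) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SKU:
    name: str
    annual_revenue: float   # value dimension
    annual_orders: int      # frequency dimension

def abc_class(values: dict[str, float]) -> dict[str, str]:
    """Assign A/B/C by cumulative share of the total, descending."""
    total = sum(values.values())
    cumulative = 0.0
    classes = {}
    for name, v in sorted(values.items(), key=lambda kv: -kv[1]):
        cumulative += v / total
        classes[name] = "A" if cumulative <= 0.80 else ("B" if cumulative <= 0.95 else "C")
    return classes

skus = [SKU("milk", 120_000, 5_000), SKU("saffron", 90_000, 40),
        SKU("napkins", 8_000, 3_000), SKU("party hats", 1_500, 25)]

by_value = abc_class({s.name: s.annual_revenue for s in skus})
by_freq  = abc_class({s.name: float(s.annual_orders) for s in skus})

for s in skus:
    # First letter = value class, second = frequency class,
    # e.g. a profitable but rarely sold item lands in "AC".
    print(s.name, by_value[s.name] + by_freq[s.name])
```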

The biggest drawback of ABC classification, in my opinion, is that the corresponding policies and service levels are determined through a ‘trial and error’ process that cannot possibly identify the truly optimal stocking level and service level for each SKU-Location combination, given the complexity of today’s multi-echelon inventory networks.
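To see why the service level choice matters so much, consider the textbook safety stock formula under normally distributed demand, SS = z × σ_d × √L. This is standard theory rather than anything specific to my experiment, and the figures below are made up, but it shows how sharply safety stock moves with the service level a class happens to be assigned:

```python
# Textbook safety stock for normally distributed demand:
#   SS = z * sigma_d * sqrt(L)
# where z is the service-level factor, sigma_d the std. dev. of daily
# demand, and L the lead time in days. All figures below are made up.
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float, sigma_daily: float, lead_time_days: float) -> float:
    z = NormalDist().inv_cdf(service_level)  # e.g. 0.95 -> z ~ 1.645
    return z * sigma_daily * sqrt(lead_time_days)

# The same SKU under three service-level guesses: small policy changes
# move safety stock a lot, which is why class-level trial and error is blunt.
for sl in (0.90, 0.95, 0.99):
    print(f"{sl:.0%} service level -> safety stock {safety_stock(sl, 12.0, 9):.0f} units")
```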

Companies have tried modifications of ABC classification

A few traditional inventory management tools try to address the complexity of today’s multi-echelon inventory networks by providing an 8×8 ABC matrix per location. But the fact is, there is no end to it: you can think of so many criteria that the size of the matrix will keep on increasing. The biggest drawback of such a manual approach to defining these matrices, however, is that there may be criteria embedded in the data that you cannot discover while doing this exercise by hand.

The unfortunate truth is that planners today are relying on last century’s solutions, especially given today’s long-tail demand complexity. Hence, they stand little chance of being able to meet both service-level and financial goals in a sustainable way.

Traditional ABC classification may be leading to suboptimal Inventory in your network.

Deep learning can come to your rescue

Gartner predicts that, “by 2020, 95% of SCP vendors will be using machine learning or Deep Learning somewhere in their SCP solutions.”

Machine learning’s ability to find patterns in huge data sets, and to get smarter over time, makes it the perfect complement to human inventory planning efforts.

In my effort to evaluate whether Machine Learning methodologies would yield better results, I applied the following two methodologies to the data:

  • Clustering
  • Deep Learning

Unfortunately, I did not get enough success with clustering (my homogeneous T_best number turned out weird), so I will not go into the details of my clustering approach and results here. Instead, we will focus on the Deep Learning methodology and the results obtained.

Leveraging Deep Learning on Grocery Demand data

Before we get into the details of my experiment, ponder this. What is the best-case methodology for Inventory planning or control system determination? It would be to simulate the Inventory system of every SKU, at the single-item level, to determine the optimal control system and the best re-order policy per item, thus achieving the optimal classification without resorting to any multi-criteria classification method.

Unfortunately, this is not realistic, since it would be extremely time-consuming in real settings, where a large number of items need to be managed simultaneously. So here is my approach, as illustrated in the diagram below (a toy sketch of the simulation step follows it):

[Diagram: my approach, per-item policy simulation on in-sample items followed by supervised classification of out-of-sample items]
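To make the simulation phase concrete, here is a toy sketch of the core idea: grid-search candidate re-order policies per SKU against its demand history and keep the cheapest one. The (s, S) policy form, the zero lead time, the cost parameters, and the synthetic intermittent demand series are all simplifying assumptions for illustration, not my actual implementation:

```python
# Toy sketch: find the cheapest (s, S) re-order policy for one SKU by
# simulating each candidate against its demand history. Policy form,
# costs, zero lead time, and demand series are illustrative assumptions.
import itertools
import random

def simulate_sS(demand, s, S, hold_cost=0.1, order_cost=25.0, stockout_cost=2.0):
    """Total cost of an (s, S) policy over a demand series (zero lead time)."""
    stock, cost = S, 0.0
    for d in demand:
        if stock < s:                 # review: below re-order point?
            cost += order_cost
            stock = S                 # order up to S
        shortfall = max(d - stock, 0)
        stock = max(stock - d, 0)
        cost += stock * hold_cost + shortfall * stockout_cost
    return cost

random.seed(7)
# Intermittent demand: mostly zeros with occasional spikes.
demand = [random.choice([0, 0, 0, 2, 5, 12]) for _ in range(365)]

best = min(
    ((s, S) for s, S in itertools.product(range(0, 30, 2), range(10, 80, 5)) if s < S),
    key=lambda p: simulate_sS(demand, *p),
)
print("cheapest policy for this SKU: s=%d, S=%d" % best)
```

In the full exercise, this simulation runs per in-sample item, and each item’s cheapest policy becomes its class label for the classifiers described below.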

Results

Let me first throw some jargon your way 😉

First, during the simulation phase, the lowest-cost classification of the in-sample items was achieved by finding the best re-order policy per item. The supervised classifier algorithms used were support vector machines with a Gaussian kernel and deep neural networks. These algorithms were trained on the in-sample items to learn to classify the out-of-sample items solely on the basis of the values they show on the features (i.e. classification criteria like the ones shown below; not an exhaustive list). A sketch of this classification step follows the list.

  • Demand of item
  • Standard deviation of the positive demand size of item
  • Mean square error of the positive demand size of item
  • Replenishment lead time of item
  • Target cycle service level of item
  • Forecasted demand of item
  • Order-up-to level of item
  • Unitary purchasing cost of item
  • Unitary holding cost of item
  • Unitary ordering cost of item
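Here is a hedged sketch of that classification step using scikit-learn stand-ins, an RBF-kernel SVM and a multi-layer perceptron. The synthetic features and labels below are purely illustrative; in the real exercise, the labels come from the lowest-cost policy each in-sample item received during simulation:

```python
# Sketch of the classification step: train on in-sample SKUs labeled by
# their best simulated policy, then label out-of-sample SKUs from their
# features alone. Data, labels, and feature choice are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),      # mean demand
    rng.gamma(2.0, 3.0, n),      # demand std. dev.
    rng.integers(1, 15, n),      # replenishment lead time (days)
    rng.uniform(0.85, 0.99, n),  # target cycle service level
    rng.gamma(2.0, 4.0, n),      # unit purchasing cost
])
# Stand-in labels: pretend the simulation assigned each SKU one of three
# policy classes; in reality these come from the lowest-cost policy found.
y = (X[:, 0] * X[:, 3] + X[:, 1] > np.median(X[:, 0] * X[:, 3] + X[:, 1])).astype(int) + (X[:, 2] > 7)

X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [
    ("SVM (Gaussian kernel)", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("Neural network", make_pipeline(StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))),
]:
    model.fit(X_in, y_in)
    print(name, "out-of-sample accuracy:", round(model.score(X_out, y_out), 3))
```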

The inventory system adopted here is suitable for intermittent demands, but it may also suit non-intermittent demands, thus providing great flexibility.
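In case you are wondering what counts as “intermittent”: a standard rule of thumb from the literature (the Syntetos-Boylan cutoffs, not something specific to my experiment) classifies a demand series by its average inter-demand interval (ADI) and the squared coefficient of variation (CV²) of its nonzero demand sizes:

```python
# A common way to flag intermittent demand: Syntetos-Boylan cutoffs
# (ADI > 1.32, CV^2 < 0.49). Standard in the literature, used here only
# as an illustration of what "intermittent" means.
from statistics import mean, pstdev

def demand_pattern(series: list[float]) -> str:
    nonzero = [d for d in series if d > 0]
    adi = len(series) / len(nonzero)              # avg. periods between demands
    cv2 = (pstdev(nonzero) / mean(nonzero)) ** 2  # variability of demand sizes
    if adi > 1.32:
        return "intermittent" if cv2 < 0.49 else "lumpy"
    return "smooth" if cv2 < 0.49 else "erratic"

print(demand_pattern([0, 0, 4, 0, 0, 0, 5, 0, 4, 0, 0, 6]))  # -> intermittent
```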

Now, the big part: the one-sentence “business sense” summary of the entire exercise was:

A whopping 30% reduction in Safety stock inventory from the base case.

But here are some aspects to dampen your enthusiasm

Like every other model, this model is NEVER a 100% realistic representation of the real world. I can see planners, buyers, and their directors challenging hundreds of the re-order policies defined by my solution. There could be some non-quantifiable aspects that cannot be captured in the model, and there could be some parameters that are not in my dataset (or for which I have made assumptions).

But here is the exciting part: even one-tenth of the reduction identified can be huge in $$$ terms for organizations with massive operations that hold inventory across multiple DCs.

Conclusion: One size does NOT fit all

Remember, I am not trying to sell Deep Learning as a panacea. As I consistently mention in my posts:

Deep Learning is not a good candidate for many “conventional” analytics scenarios. In fact, in many cases, it may perform worse or only marginally better than methods like Regression analysis.

So in this case too, there is no guarantee that you will see the significant reduction I saw with my dataset. It depends on various aspects like your SKU mix, the number of unique Location-SKU combinations, the percentage of items with intermittent demand in your total portfolio, etc. But a pilot project can help you determine whether Deep Learning can help you obtain significant value ($$$).
