Caution…this is not the Sci-Fi version
If you came here hoping to read about algorithms that will take full control of your Supply Chain planning processes, you can stop here and save your precious time.
What will a realistic structure look like?
Remember that a true AI solution should help you make optimal operations decisions in real time. That is the best (and honestly the only) way to leverage AI in a real-time operating environment.
Though these solutions will recommend the best course of action, in most circumstances the final call will still need to be taken by humans.
Let’s dig further into how we can build models that can power Supply Chain planning in real time. As mentioned earlier, AI-powered Supply Chain planning solutions must enable you to make the best decision possible in the midst of complex, highly variable processes. Such dynamic processes are everywhere in your business and involve interactions between five key elements. The illustration below shows such a process (Inventory Management).
Note that to keep the example simple, I have used the Inventory Management process only. In reality, the neural network will have to encompass all the processes at once in order to effectively capture the inter-process relationships as well.
- Nodes: elements you cannot control, but know in advance, like DC locations
- Input factors: elements you cannot control, and don’t know in advance, such as customer demand, inventory levels, or actual receipts
- Fixed planning parameters: elements you control and set in advance, such as safety stock or planned receipts
- Variables: elements that evolve as the process runs, such as inventory on hand or actual receipts
- Target KPIs: outcomes you want to optimize, such as customer service level
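To make the decomposition concrete, here is a minimal sketch that groups these element types into one structure for the Inventory Management example. All names and values are illustrative, not a real planning API:

```python
from dataclasses import dataclass

@dataclass
class InventoryProcess:
    """Illustrative grouping of the element types of a planning process."""
    nodes: list[str]                       # known in advance, not controllable (e.g. DC locations)
    input_factors: dict[str, list[float]]  # not controllable, not known in advance (e.g. demand)
    planning_params: dict[str, float]      # fixed and set in advance (safety stock, planned receipts)
    variables: dict[str, float]            # evolve as the process runs (inventory on hand)
    target_kpis: dict[str, float]          # outcomes to optimize (customer service level)

process = InventoryProcess(
    nodes=["DC-East", "DC-West"],
    input_factors={"customer_demand": [120.0, 95.0, 140.0]},
    planning_params={"safety_stock": 50.0, "planned_receipts": 100.0},
    variables={"inventory_on_hand": 230.0, "actual_receipts": 100.0},
    target_kpis={"service_level": 0.95},
)
print(process.target_kpis["service_level"])
```

Separating what is controllable from what is merely observable is exactly the split the model needs: the uncontrollable, unknown factors are what the neural network must learn to simulate.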
At a high level, a Supply Chain planning process, for each planning aspect (Inventory, Transportation, Manufacturing, etc.), will have three phases, as shown in the illustration below:
It will be a Neural Network-based simulation algorithm….
Using algorithms that leverage neural networks, you can build deep probabilistic models of dynamic enterprise processes (basically, simulate what’s actually happening in your supply chain) for predictive simulations. This lets you optimize the decision-making policy based on those simulations. This stage has two phases:
Phase 1: Given the past sequence of components like lead times, actual customer demand, manufacturing locations, DC locations, etc., the neural network is trained to learn something called a latent state representation. This compresses the vast amount of historical information you have into a much smaller one, retaining only the most critical pieces of information from the data.
Beyond compressing the information, the model also maps it to a probability distribution, which is then used as part of the predictive simulation in phase 2.
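As a rough illustration of Phase 1, the sketch below compresses a long history window into a small latent vector and maps it to a Gaussian distribution, as a variational-style encoder would. A fixed random linear map stands in for the trained neural network; every name and dimension here is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

HISTORY_DIM = 64   # e.g. 16 weeks x 4 signals (demand, lead time, receipts, stock)
LATENT_DIM = 8     # compressed latent state

# Stand-ins for learned weights; a real model would train these.
W_mu = rng.normal(size=(LATENT_DIM, HISTORY_DIM)) / np.sqrt(HISTORY_DIM)
W_logvar = rng.normal(size=(LATENT_DIM, HISTORY_DIM)) / np.sqrt(HISTORY_DIM)

def encode(history: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map a flattened history window to a latent Gaussian (mu, sigma)."""
    mu = W_mu @ history
    sigma = np.exp(0.5 * (W_logvar @ history))  # exp keeps std dev positive
    return mu, sigma

def sample_latent(mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Draw one latent sample z = mu + sigma * eps, used in Phase 2 rollouts."""
    return mu + sigma * rng.normal(size=mu.shape)

history = rng.normal(size=HISTORY_DIM)  # stand-in for real process history
mu, sigma = encode(history)
z = sample_latent(mu, sigma)
print(z.shape)  # (8,): 64 history numbers compressed to 8
```

The key point is the output type: not a single compressed vector but a distribution over latent states, which is what makes probabilistic simulation in the next phase possible.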
Phase 2: With this latent state probability distribution, given a set of actions and conditions across a future time horizon, you can accurately simulate measurement outcomes such as inventory positions or fill rate, and target outcomes such as lost sales or value at risk. In other words, you can use the model to get an accurate simulation of the future unknowns (measurement and target outcomes), given what you know in advance (actions and conditions).
……married with an optimization algorithm
This part of the process utilizes the latent state representation model’s simulation of the future to try out different actions and select the actions that optimize your KPIs.
Given a set of known conditions and simulated unknown outcomes across a future time horizon, the optimization part of the algorithm tries out different action strategies in the simulated environment and rewards the strategies that optimize the key performance indicators associated with the desired targets. In this manner, the optimizer acts as a type of reinforcement learning controller, creating the flexibility to optimize the target outcome based on the cost considerations of your business.
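A toy sketch of this optimization stage: candidate action strategies (here, a constant order quantity) are scored in the simulated environment by a reward that trades lost sales against holding cost, and the best one is kept. A simple grid search stands in for a full reinforcement-learning policy search; all costs and quantities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout_reward(order_qty, horizon=12, n_paths=500,
                   demand_mu=100.0, demand_sigma=20.0,
                   lost_sale_cost=5.0, holding_cost=0.1):
    """Score one action strategy by simulating it over the horizon."""
    inv = np.zeros(n_paths)
    reward = np.zeros(n_paths)
    for _ in range(horizon):
        demand = np.maximum(0.0, rng.normal(demand_mu, demand_sigma, n_paths))
        inv += order_qty
        shipped = np.minimum(inv, demand)
        reward -= lost_sale_cost * (demand - shipped)  # penalize lost sales
        inv -= shipped
        reward -= holding_cost * inv                   # penalize inventory carried
    return reward.mean()

# Try each candidate strategy in the simulated future; keep the best.
candidates = [60, 80, 100, 120, 140]
best = max(candidates, key=rollout_reward)
print(best)
```

Changing the two cost parameters shifts which strategy wins, which is the point of the paragraph above: the optimizer adapts the recommended action to your business's cost trade-offs rather than to a fixed rule.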
In the context of an enterprise with complex dynamic interactions and process variability, this model is unique in its ability to recommend the optimal actions in any scenario, and to continue learning based on the current actions being taken.
In the example above, you see how we apply the model framework to the real world: Let’s say you’re a manufacturer interested in trying new operating strategies, but high costs of failure preclude you from trying anything on your factory floor. This simulation model learns to simulate future outcomes like your inventory levels (a measurement outcome) or fill rates (a target KPI) given things you know in advance, like your production schedule. Then, the optimizer tries different action strategies in the hypothetical, simulated future, assesses what would happen based on different actions you might choose, and selects the best strategy for you.
As mentioned at the beginning, a human then needs to review those strategies before using them for decision making.
Views my own.