Supply Chains may never become fully autonomous, but…
Supply chains of the future will be increasingly driven by algorithms. Key planning processes will be significantly automated and managed by algorithms. In a nutshell, Supply Chains of the future will be part of Mathematical Corporations: corporations that are driven by data, with data feeding ingenious algorithms.
To successfully lead Supply Chains in Mathematical Corporations, you need to understand certain important aspects, such as how to invest in the infrastructure, software and Machine Intelligence expertise that form the foundation for taking organizations to the next level of performance.
Four key areas you need to develop a good overview of
The Machine Intelligence of the future will not be like the automation of the past, in which bots did the rote, mundane, repetitive work. Machines will perform select elements of knowledge work long held to be the sweet spot for people in some of the most revered professions, such as law, medicine and engineering.
What kinds of Analytics technology should Supply Chain leaders understand, and buy? I have divided the critical technologies of Machine Intelligence into four areas. This is not an exhaustive categorization, but if you invest in learning selected topics across all four areas, you will position your organization to profit from galaxies of new information.
- Data Collection, storage and preparation methodologies and Technologies
- Applications that help visualize and interpret
- Algorithms applied to data
- Foundation infrastructure technologies
Note that the topics listed under each of the four areas are intended as a checklist for your own exploration. Given the intent of this article, detailed explanations are not included here, but I plan to write separate articles explaining some of these areas.
Data Collection, Storage, Preparation and Foundational Infrastructure
The following are some aspects of this area that you should research:
- Types of Data (ex: Structured, Unstructured)
- What is a Data Lake?
- Types of databases prominently in use (ex: Relational, Distributed)
- How many different types of existing systems currently hold relevant data, how does the data format differ for each of them, and what type of database supports each system?
- Basics of Cloud technology and Cloud computing
- Data Security fundamentals
- What technologies are available to automate data extraction, cleaning and processing (ex: Alteryx)?
- What computing architecture is best suited to the size of my organization's data and its computing requirements?
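To make the data preparation point concrete, here is a minimal sketch using only Python's standard library. The inventory extract below is entirely hypothetical; the sketch shows the kind of routine cleanup (trimming whitespace, normalizing case, filling a missing value, dropping a duplicate) that tools in this space automate at scale:

```python
import csv
import io

# Hypothetical raw extract: inconsistent casing, stray whitespace,
# a missing quantity and a duplicate row.
raw = """sku,qty,warehouse
 A-100 ,25, Dallas
a-100,25,dallas
B-200,,Chicago
C-300,40, Chicago"""

cleaned, seen = [], set()
for row in csv.DictReader(io.StringIO(raw)):
    sku = row["sku"].strip().upper()                    # normalize SKU codes
    qty = int(row["qty"]) if row["qty"].strip() else 0  # fill missing quantities with 0
    wh = row["warehouse"].strip().title()               # normalize warehouse names
    key = (sku, qty, wh)
    if key not in seen:                                 # drop exact duplicates
        seen.add(key)
        cleaned.append({"sku": sku, "qty": qty, "warehouse": wh})

print(cleaned)
```

After cleaning, the duplicate "a-100" row collapses into the first "A-100" row, leaving three consistent records ready for loading into a database or dashboard.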
Applications that Visualize and Interpret
- Basics of descriptive statistics, to interpret statistical graphs and charts and perform data discovery in a dashboard environment
- Ability to decipher key chart types such as (not an exhaustive list):
- Pie charts
- Line charts
- Heat Maps
- Tree Maps
- The different dashboard software options available and their high-level offerings (ex: Tableau, Qlik)
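As a small illustration of the descriptive statistics mentioned above, the sketch below computes mean, median and standard deviation for a hypothetical series of weekly order volumes. The point is the interpretation: a large gap between mean and median is a quick signal of an outlier worth drilling into on a dashboard:

```python
import statistics

# Hypothetical weekly order volumes exported from a dashboard.
weekly_orders = [120, 135, 128, 410, 131, 126, 140, 133]

mean = statistics.mean(weekly_orders)
median = statistics.median(weekly_orders)
stdev = statistics.stdev(weekly_orders)

# The mean (about 165) sits far above the median (132) because of the
# single 410-order week, so an analyst would investigate that spike.
print(f"mean={mean:.1f} median={median:.1f} stdev={stdev:.1f}")
```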
Algorithms applied to Data
What are the different types of discovery tools available in the market? These tools generally uncover patterns, associations and anomalies. Today, Data Scientists typically perform these analyses, but this is changing rapidly. Soon there will be easy click-and-query tools so that non-technical executives can apply analytics themselves.
In the mathematical corporation, machine learning is how you move from simply programming computers to carry out tasks to enabling them to learn from the world around them and provide recommendations.
Remember that Machine Learning is not just a learning tool. It is also a tool for approximation, prediction and creating original understanding that enhances a Supply Chain leader's ability to imagine the future. There are many approaches to leveraging Machine Learning for predictions, and they can be classified into two key categories:
Supervised Learning: In supervised learning, you show the model the input and output data for known examples of a pattern. Two common examples are regression and classification.
- Regression: Finds the priority factors that influence an output. Essentially, it defines the relationship between certain input variables and an output. Ex: Which of the following variables is a stronger predictor of a product getting damaged during transportation: transit time, product weight, or carrier?
- Classification: Categorizes data by type. You determine the parameters for classification. A good application example is an algorithm that classifies a new SKU as an A, B or C Inventory Management SKU based on certain product characteristics.
Essentially, in supervised learning the computer learns the pattern revealed by many variables and uses it to predict the results of new cases of the same phenomenon.
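The regression example above can be sketched in a few lines of plain Python. The data below is purely illustrative (made-up transit times and damage rates); the sketch fits an ordinary least squares line and then uses it to predict the damage rate for an unseen transit time, which is exactly the "learn from known examples, predict new cases" idea:

```python
# Hypothetical training data: transit time (days) vs. observed damage rate (%).
transit_days = [1, 2, 3, 4, 5, 6]
damage_rate = [0.5, 0.9, 1.6, 2.1, 2.4, 3.0]

n = len(transit_days)
mean_x = sum(transit_days) / n
mean_y = sum(damage_rate) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(transit_days, damage_rate)) \
    / sum((x - mean_x) ** 2 for x in transit_days)
intercept = mean_y - slope * mean_x

def predict(days):
    """Predicted damage rate (%) for a new, unseen transit time."""
    return intercept + slope * days

print(f"damage_rate = {intercept:.2f} + {slope:.2f} * transit_days")
```

In practice you would fit many candidate variables (transit time, product weight, carrier) and compare their explanatory power, but the mechanism is the same.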
Unsupervised Learning: This often starts with no knowledge of the data or the relationships within it. Instead of a training set, Data Scientists tell the model to begin dividing the data in many different ways to learn what groupings and summarizations can emerge. These algorithms essentially identify groups of data that exhibit similar traits. An example of an unsupervised learning algorithm:
- Clustering: Puts data into groups where each group contains data with similar characteristics, as determined by the algorithm.
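A minimal clustering sketch, using a simple k-means loop written in plain Python, shows the idea: no labels are given, yet the algorithm separates the (hypothetical) SKUs into a low-demand group and a high-demand group on its own:

```python
import random

# Hypothetical SKU data: (average weekly demand, demand variability).
skus = [(10, 1), (12, 2), (11, 1.5), (80, 20), (85, 25), (90, 22)]

def kmeans(points, k, iterations=10, seed=42):
    """Very small k-means: assign points to nearest centroid, recompute, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k starting centroids at random
    for _ in range(iterations):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

groups = kmeans(skus, k=2)
print(groups)
```

No one told the algorithm which SKUs belong together; the grouping emerges from the data, which is the essence of unsupervised learning.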
Simulation and Optimization
- A high-level understanding of Linear Programming and Mixed Integer Programming (how the problem is structured, etc.) and the leading off-the-shelf tools available (ex: Llamasoft, JDA)
- The types of simulation modeling and the leading off-the-shelf tools available (ex: AnyLogic, FlexSim)
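To give a feel for what simulation modeling does, here is a tiny Monte Carlo sketch of a two-stage replenishment lead time. The distributions and parameters are illustrative assumptions, not calibrated to any real network; commercial tools add rich modeling and visualization on top of this same core idea:

```python
import random

rng = random.Random(7)  # fixed seed so the run is reproducible

def simulated_lead_time():
    # Assumed supplier production time: triangular, min 3, max 10, mode 5 days.
    supplier = rng.triangular(3, 10, 5)
    # Assumed transport time: roughly normal, mean 4 days, sd 1 day.
    transit = rng.gauss(4, 1)
    return supplier + max(transit, 0)

runs = sorted(simulated_lead_time() for _ in range(10_000))
avg = sum(runs) / len(runs)
p95 = runs[int(0.95 * len(runs))]  # 95th-percentile lead time

print(f"average lead time: {avg:.1f} days, 95th percentile: {p95:.1f} days")
```

The 95th-percentile figure is the useful output here: average lead time alone understates the buffer stock needed to cover bad weeks.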
Last but not least… Deep Learning
This is a type of Machine Learning that tries to mimic the human brain in terms of architecture. These algorithms can process a wider range of data, typically require less data pre-processing by humans and generally deliver more accurate results than traditional machine learning models.
In terms of architecture, there are multiple interconnected layers of software-based calculator nodes known as neurons (taking a cue from human brain architecture, as mentioned earlier). The network can take in large amounts of data, which are processed through increasingly complex computations as they travel through the network.
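A bare-bones sketch of those "layers of calculator nodes" follows. Each neuron computes a weighted sum of its inputs plus a bias, passed through an activation function (here a sigmoid). The weights below are arbitrary illustrative values, not trained, and the input/output labels are hypothetical; real networks learn their weights from data across many more layers:

```python
import math

def layer(inputs, weights, biases):
    # Each neuron: sigmoid(weighted sum of inputs + bias).
    return [
        1 / (1 + math.exp(-(sum(w + 0 for w in []) + sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Hypothetical normalized inputs, e.g. transit time and product weight.
x = [0.5, 0.8]

# Two-neuron hidden layer feeding a single output neuron.
hidden = layer(x, weights=[[0.4, -0.6], [0.9, 0.1]], biases=[0.0, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])  # e.g. a damage-risk score

print(f"hidden activations: {hidden}, output score: {output[0]:.3f}")
```

Stacking many such layers, and letting an optimizer adjust the weights from examples, is what turns this arithmetic into the pattern-finding power described above.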
As mentioned earlier, for those interested in exploring further, I will be covering each of these areas in detail in separate articles.
Views are my own. This is not an endorsement of any of the tools mentioned in the article.