We now live in a business world where there will be two groups of big companies:
(1) Companies that continue working this quarter to improve business results for the next quarterly call. These businesses will typically survive until a major disruption happens, or until a new entrant or an existing competitor launches innovative products and/or services, making these quarter-to-quarter companies obsolete.
(2) Companies that bifurcate their strategy: keeping focus on building long-term capabilities (3-5 years), while making sure that shareholders understand the impact of these investments in innovation. These investments in building long-term capabilities will also help develop “stepwise” capabilities or benefits, the impacts of which can be shared in profit calls and annual reports.
A relevant example in building AI capabilities: Predictive Maintenance
And there is no better place to apply this long-term strategy than developing one of the most essential capabilities of today’s world: building Business AI capabilities. Here too, Amazon leads by example. Some of the capabilities it has built took 5-10 years to reach production. But the key to their success has been understanding that if they chased the quarterly calls like every other company, they would remain in survival mode, like most other companies. Growth (in the right areas) and innovation are two key ingredients of successful companies today. While the pace of growth can be accelerated, innovation takes time and needs to happen in parallel.
To drive home the point that AI capabilities, especially some of the most touted ones, are not “quick deployments” if you want to actually use them in a production environment, I will use the example of one “Hot Area”: Predictive Maintenance. I will use this advanced analytics tool to explain the need for patience, as well as what I mean by the term “Algorithm Trust”.
Predictive Maintenance capabilities: not an easy journey
You have been hearing about Predictive Maintenance a lot recently. If you survey companies that have maintenance as a significant line item on their P&L, the percentage that is dabbling in building predictive maintenance capabilities is high. Now, if you change the survey question to “What percentage of companies are actively using Predictive Maintenance for actual, live business operations?”, the number you get is really, really low. Why?
The answer to this “Why?” will drive home the message.
There are two aspects to developing and then eventually embedding the capability within the operations of your organization (in my opinion):
Aspect 1: Technical
Predictive Maintenance capabilities, or true AI capabilities in general, are much more than algorithms.
To demonstrate that Predictive Maintenance will work in your organization, given the raw data dump from your sensors and other required data points, a qualified and proficient data scientist can build a prototype model within weeks. But once you know that there is an opportunity, the “real fun” begins. The technical aspect has several sub-aspects of its own. Predictive algorithms, especially those as critical as a Predictive Maintenance tool, first of all need a high level of expertise to “continuously train” them after the initial build.
If you have led or managed data science initiatives, you know that no matter how large the initial test and validation data sets were, as soon as you start running that algorithm on streaming or recent data, its results tank. One key reason is often a poorly designed algorithm build process, but even so, with fairly intricate algorithms like predictive analytics models, you will need a comprehensive process of “tuning” before the accuracy comes within acceptable levels. This requires access to qualified expertise for months (sometimes years). The accepted accuracy level for critical AI algorithms like Predictive Maintenance can be (and should be) very high, unlike demand forecasting models, for example. Hence, it may often take years to build one that is so production-ready that you are at ease letting your technicians use it.
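To make this concrete, here is a minimal Python sketch of the idea: watch the rolling accuracy of a deployed model on live data, and flag when it drops below the accepted level so another round of tuning can begin. The threshold, window size, and class names here are hypothetical illustrations, not anything prescribed in this article.

```python
from collections import deque

# Hypothetical values for illustration only: a critical application
# like Predictive Maintenance would demand a high bar.
ACCEPTABLE_ACCURACY = 0.95
WINDOW = 200  # number of recent predictions to track

class DriftMonitor:
    """Track rolling accuracy of a deployed model on live data and
    signal when it falls below the accepted level, i.e. when the
    algorithm needs another round of tuning."""

    def __init__(self, threshold=ACCEPTABLE_ACCURACY, window=WINDOW):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, predicted, actual):
        self.outcomes.append(1 if predicted == actual else 0)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retuning(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.threshold

monitor = DriftMonitor()
# Offline validation looked fine, but on live data the results tank:
for predicted, actual in [("ok", "ok"), ("ok", "fail"),
                          ("ok", "fail"), ("fail", "fail")]:
    monitor.record(predicted, actual)
print(monitor.rolling_accuracy())  # 0.5
print(monitor.needs_retuning())    # True
```

The point of the sketch is that the tuning loop never really ends: the monitor keeps running in production, and each breach of the threshold sends the team back to retraining.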
And since we mentioned technicians, that brings me to the second sub-aspect of the technical aspect: predictive maintenance of key assets and equipment also requires a higher level of expertise from technicians. Extensive training is necessary as technicians start to use instruments and diagnostic tools to determine when equipment is not functioning as expected. They need to build an intuition for how the predictive tool makes recommendations. That may sound simple, but it is a challenge. This is another reason your true capability-building process may run into years.
Aspect 2: Behavioral
This, in my mind, is a much more challenging aspect. The first key element here is obviously “change”. Technicians, and education in the technician world, have been following schedule-based maintenance for more than three decades. There is a routine associated with it. So obviously, when you ask them to ditch that routine and transition to a process where they don’t know when maintenance needs to be done, it is very uncomfortable for them.
And that brings me to the two key sub-aspects here: Uncertainty and Trust.
We, as humans, love certainty. I thought I was good at working under uncertainty until this pandemic hit. The uncertainty of the pandemic, combined with some ongoing uncertainties, threw me into chaos and put a crack in my “self-perceived” ability to thrive under any type of uncertainty. Now think about a predictive maintenance tool from a technician’s point of view: they are waiting for a tool to tell them when to perform maintenance. So if they were doing two maintenance runs per week, they may now have a week where they are doing none. As a technician, you will have a twofold anxiety: “Is this ‘capability building’ going to cost me my job?” and “How do I plan my days in advance if I don’t know when duty will call?”
And then there is the aspect of Algorithm Trust
The most important aspect of this uncertainty that technicians will experience is: can I trust the algorithm? And this is where we come back to the “Technical” requirements, where you HAVE TO make sure that your technicians understand the underlying logic. The benefits are twofold: if they know the underlying logic, they feel more confident in the results, and they can flag any odd behavior or recommendations from the algorithm based on their expert knowledge.
And this is where Algorithm Trust kicks in. In simple terms, Algorithm Trust is essentially how much the end user trusts the recommendations/outputs from the algorithm. I believe that for each type of algorithm, you can design an “Algorithm Trust Score” that you can calculate based on certain inputs from end users. This article will not go into detail on that, but the key here is:
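As a thought experiment, one way such a score could be computed is as a weighted aggregate of end-user survey ratings. The dimensions, weights, and 0-100 scale below are entirely hypothetical; the article deliberately leaves the actual design open.

```python
def algorithm_trust_score(ratings, weights=None):
    """Aggregate end-user survey ratings (each on a 0-5 scale) into a
    single 0-100 'Algorithm Trust Score'. Dimensions and weights are
    illustrative assumptions, not a prescribed methodology."""
    default_weights = {
        "accuracy_perceived": 0.4,  # do outputs match what users observe?
        "explainability": 0.3,      # do users follow the underlying logic?
        "actionability": 0.3,       # can users act on the recommendations?
    }
    weights = weights or default_weights
    weighted = sum(weights[k] * ratings[k] for k in weights)  # still on 0-5
    return round(weighted / 5 * 100, 1)                       # scale to 0-100

# Example: technicians rate perceived accuracy 4, explainability 2,
# and actionability 3 -- low explainability drags the score down.
print(algorithm_trust_score(
    {"accuracy_perceived": 4, "explainability": 2, "actionability": 3}
))  # 62.0
```

Tracking a number like this over time would let you see whether confidence in the tool is actually growing as tuning progresses.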
The best of tools, developed after investing millions of dollars, can fail if you cannot develop end-user confidence in the tool.
There are multiple aspects of Algorithm Trust, and model accuracy is obviously one of them. And how can you improve this accuracy? By making sure that you keep “tuning” the algorithm until it delivers results/recommendations/outputs that are high quality, feasible, and practical. Users need to “identify” and “associate” with those outputs in order to develop a high level of “Algorithm Trust”. And that, again, will take a long time to develop for critical applications like Predictive Maintenance.
Views expressed are my own.