Smart factories will transform the world of manufacturing, but what will be the primary driver behind that transformation? Automation and digital twins built on sensor-generated data will obviously lay the foundation for Manufacturing 4.0, but the real game changer will be how you actually leverage the data collected.
With rapid advances in technology, capabilities like the Digital Twin will soon become commoditized, so how you use the technology and the data it captures will differentiate you from your competition. Before applying advanced approaches such as machine learning or deep learning to this data to create new efficiencies and develop new capabilities, you need to start your journey with some basic analytics.
This article highlights some of the basic analytical approaches you can use in asset management, leveraging the classic analytics buckets of descriptive, diagnostic, predictive, and prescriptive analytics.
Descriptive analytics is the most basic class of analytics. Consider an example where you have huge volumes of data generated by sensors on your manufacturing equipment. You can analyze this data to provide a broad view of key parameters of your manufacturing operations. In other words, descriptive analytics tries to answer the question "What happened?" These analytics use data mining, aggregation, or visual intelligence techniques to understand the status of the assets.
In the I-IoT context, the most common technique used for descriptive analytics is KPI monitoring. Other applications are also discussed below.
KPI monitoring and health monitoring
KPI monitoring is the simplest way to monitor the health of a fleet, by aggregating raw data or calculating new indicators from it. Moreover, we can implement a condition-monitoring mechanism using simple rules. Almost every asset used in your operations has defined operating thresholds. Condition monitoring uses these thresholds, or fuzzy rules, to extract potential issues that occurred in the past.
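As an illustration, a threshold-based condition-monitoring rule can be sketched in a few lines of Python. The sensor names, thresholds, and the reading below are hypothetical, not taken from a real asset:

```python
# Hypothetical operating thresholds per sensor: (min, max) allowed range.
OPERATING_THRESHOLDS = {
    "bearing_temp_c": (10.0, 85.0),
    "vibration_mm_s": (0.0, 7.1),
    "oil_pressure_bar": (1.5, 6.0),
}

def check_conditions(reading, thresholds=OPERATING_THRESHOLDS):
    """Return a list of (sensor, value, limits) violations for one reading."""
    violations = []
    for sensor, (low, high) in thresholds.items():
        value = reading.get(sensor)
        if value is None:
            continue  # sensor not present in this reading
        if value < low or value > high:
            violations.append((sensor, value, (low, high)))
    return violations

reading = {"bearing_temp_c": 92.3, "vibration_mm_s": 4.0, "oil_pressure_bar": 2.2}
print(check_conditions(reading))
# bearing_temp_c exceeds its 85 C limit, so one violation is reported
```

In practice, such rules would run continuously over the sensor stream and feed a dashboard or alerting system.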
Anomaly detection algorithms have been applied to the I-IoT since its inception. Anomaly detection analytics is a special class of descriptive analytics that catches, and sometimes anticipates, anomalies in a piece of equipment.
We can distinguish between three types of anomaly detection:
- Contextual anomalies: The anomalies are context-specific
- Point anomalies: A single instance of data is anomalous if it is too far from the expected behavior
- Collective anomalies: A group of data points help to detect the anomalies
These three types are shown in the following diagram:
Picture courtesy: Society of Mechanical Engineers
Generally speaking, these analytics build a data-driven model of the standard operating behavior of the equipment by monitoring it. This model is then compared with the current data or features. If the difference is too high, an alert is raised to notify the user of a possible malfunction. These analytics can be developed using simple rules, clustering algorithms, simple moving averages, or more advanced techniques such as the Kalman filter.
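The moving-average approach mentioned above can be sketched as a simple point-anomaly detector: flag a sample when it deviates from the trailing mean by more than k standard deviations. The window size, the factor k, and the signal below are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=5, k=3.0):
    """Return indices of samples too far from the trailing moving average."""
    history = deque(maxlen=window)   # rolling window of recent normal-ish data
    anomalies = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # flag the point if it deviates more than k standard deviations
            if sigma > 0 and abs(x - mu) > k * sigma:
                anomalies.append(i)
        history.append(x)
    return anomalies

signal = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.2, 9.8]
print(detect_anomalies(signal))
# the spike at index 6 is flagged as a point anomaly
```

A real deployment would tune the window and threshold per sensor and often exclude flagged points from the history so that a single spike does not inflate the estimated spread.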
Diagnostic analytics are the most common analytics in the I-IoT context. These analytics use advanced modeling techniques to analyze failure modes and to extract the root cause of an issue. Diagnostic analytics try to answer the question "Why did it happen?"
The three steps of diagnostic analytics are as follows:
- Detecting anomalies
- Discovering the failure mode
- Determining the root cause of anomalies
Normally, the last two steps of diagnostic analytics are done by human investigation. Diagnostic analytics can use feature extraction or anomaly reasoners to provide indicators about why an anomaly happened. Anomaly detection is, on the contrary, an automatic step performed by (more or less) sophisticated analytics.
After anomaly detection, we need to discover the failure mode and identify the cause of the issue. Normally, these activities require human knowledge of failure modes and effect analysis or a large dataset of past failures. For instance, we can implement a set of rules (which can be deterministic, fuzzy, Bayesian, or machine learning-based), codifying the cause and effect of the fault.
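Such a set of rules can be sketched, in its simplest deterministic form, as a lookup from observed symptoms to candidate failure modes. The symptoms and failure modes below are hypothetical, not drawn from a real failure modes and effects analysis:

```python
# Each rule maps a required set of symptoms to a candidate failure mode.
FAILURE_MODE_RULES = [
    ({"high_vibration", "high_bearing_temp"}, "bearing wear"),
    ({"low_oil_pressure", "high_bearing_temp"}, "lubrication failure"),
    ({"high_vibration"}, "rotor imbalance"),
]

def diagnose(symptoms):
    """Return failure modes whose symptoms all match, most specific first."""
    matches = [(len(required), mode)
               for required, mode in FAILURE_MODE_RULES
               if required <= symptoms]          # rule fires only if fully matched
    return [mode for _, mode in sorted(matches, reverse=True)]

print(diagnose({"high_vibration", "high_bearing_temp"}))
# 'bearing wear' matches on two symptoms, 'rotor imbalance' on one
```

Fuzzy, Bayesian, or machine learning-based variants replace the hard set matching with degrees of belief, but the cause-and-effect codification is the same idea.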
Predictive analytics looks ahead and attempts to answer the question "What could happen in the future?" These analytics use regression models or anomaly detection models to anticipate potential issues. In the I-IoT sector, the most interesting subclass of predictive analytics is prognostic analytics.
Prognostic analytics estimates the time to failure and the risk for one or more existing and future failure modes. It predicts future degradation or damage and the Remaining Useful Life (RUL) of an asset or part, based on the measured data.
The goal of prognostics is to predict the number of cycles remaining before the damage grows beyond the failure threshold. In order to predict the RUL, prognostics use the measured damage levels, or an estimation of the damage, up to the current cycle.
Unfortunately, these measures are often affected by uncertainty, which might stem from noise, bad measurements, or a lack of knowledge. When we estimate the RUL, we therefore also have to quantify the probability density function (PDF) of the prediction, which expresses how accurate it is. The process of estimating these uncertainties is called Uncertainty Quantification (UQ). The following diagram summarizes these concepts:
Picture Courtesy: Institute of Industrial Engineers
Note: Although calculating the UQ seems like an academic exercise, it is really useful for determining how accurate the prediction is. A good prognostic analysis should always calculate both the RUL and the UQ.
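Under the (strong) assumption of a linear degradation trend, an RUL estimate and a crude uncertainty band can be sketched as follows. The damage data, the failure threshold, and the linear model are all illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return slope, ym - slope * xm

cycles = [0, 100, 200, 300, 400, 500]
damage = [0.02, 0.11, 0.19, 0.32, 0.41, 0.52]   # measured damage index
FAILURE_THRESHOLD = 1.0                          # damage level deemed failure

slope, intercept = fit_line(cycles, damage)
failure_cycle = (FAILURE_THRESHOLD - intercept) / slope
rul = failure_cycle - cycles[-1]                 # cycles left from the last measurement

# Crude uncertainty proxy for UQ: the residual scatter around the fitted
# trend, converted into a +/- band (in cycles) on the failure prediction.
residuals = [d - (slope * c + intercept) for c, d in zip(cycles, damage)]
spread = max(abs(r) for r in residuals) / slope

print(f"RUL ~ {rul:.0f} cycles (+/- {spread:.0f})")
```

A proper prognostic analysis would replace this residual-based band with a full PDF over the failure time, but even this toy version shows why the RUL is only meaningful together with its uncertainty.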
Prescriptive analytics represents the next step in this prediction process and asks the question "How should we respond to these potential future events?" Prescriptive analytics anticipates what will happen, when it will happen, and why, and suggests different decision options. These analytics forecast future opportunities and/or mitigate future risks. The most common application of prescriptive analytics in the I-IoT is condition-based maintenance (CBM).
CBM is a maintenance strategy for repairing or replacing damaged or degraded parts before they reduce the life of a machine. It monitors, detects, isolates, and predicts equipment performance and degradation without shutting down daily production. In CBM, the maintenance of systems and components is based on the actual health of the equipment, rather than on waiting for a breakdown or following a fixed maintenance schedule. The following diagram shows the basic idea of CBM compared to more standard maintenance approaches:
Picture Courtesy: Smart Things solutions Inc.
CBM tries to answer the question: should we change this part or piece of equipment in accordance with our maintenance plan, or shall we take the risk and continue operating?
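That question can be sketched as a minimal decision rule that compares a conservative RUL estimate against the lead time needed to plan maintenance. The numbers and the "RUL minus uncertainty" margin are assumptions, not a validated policy:

```python
def cbm_decision(rul_cycles, uncertainty_cycles, planning_lead_cycles):
    """Recommend an action based on a conservative RUL estimate."""
    # Be pessimistic: assume the asset fails at the early edge of the band.
    conservative_rul = rul_cycles - uncertainty_cycles
    if conservative_rul <= 0:
        return "stop and replace now"
    if conservative_rul <= planning_lead_cycles:
        return "schedule maintenance at next opportunity"
    return "continue operating, keep monitoring"

print(cbm_decision(rul_cycles=480, uncertainty_cycles=25, planning_lead_cycles=100))
# with ~455 conservative cycles left and a 100-cycle lead time, keep running
```

Real CBM systems weigh this against spare-part availability, production schedules, and the cost of unplanned downtime, but the core trade-off is the same.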
Production optimization analytics
Optimization analytics are an interesting class of analytics related to the I-IoT. They try to maximize efficiency by increasing production and reducing costs. These analytics work on the collected data and are normally either triggered proactively by a user or run daily to provide automatic insights to the user.
The industrial sector has used these analytics on-premises since the 1990s to optimize production directly from the controller. These analytics are not strictly part of the I-IoT, but they should be considered in this discussion.
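As a toy example of production optimization, the following sketch searches for a throughput setpoint that maximizes profit under assumed revenue and cost curves. All coefficients are illustrative, not plant data:

```python
def profit(rate):
    """Profit per hour at a given production rate (units/hour)."""
    revenue = 50.0 * rate           # income grows linearly with output
    energy_cost = 0.02 * rate ** 2  # energy cost grows quadratically with rate
    wear_cost = 5.0 * rate          # maintenance wear per unit produced
    return revenue - energy_cost - wear_cost

# Exhaustive search over feasible integer setpoints (units/hour).
candidates = range(0, 2001)
best_rate = max(candidates, key=profit)
print(best_rate, round(profit(best_rate), 2))
```

Industrial implementations use far richer models and solvers (linear or nonlinear programming against the plant model), but the structure, an objective balancing production against cost over a feasible operating range, is the same.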