This article is part two of a four-part article series on helping SMEs chart the course of digital transformation. We will now understand the journey of transitioning from manual to autonomous processes and getting data to yield.
Like it or not, data is everywhere, and you are generating, collecting or consuming it, consciously or subconsciously, in every living moment. For example, the average human brain is bombarded with over 11 million bits of information per second, a phenomenon that occurs without our having an inkling of it.
In the context of a machine, the number of data points required to monitor or diagnose its health or to predict failure drops dramatically, to single digits or tens. For a process in a process industry, depending on the complexity, the data points might go up by orders of magnitude.
It is only the frequency at which one needs to collect and assimilate the data that is much higher for processes than for manufacturing or operating products, which makes the data set humongous.
One of my professors said that laziness is a key virtue that led humanity towards the technological advancements we enjoy today. A few centuries ago, if a human being dreamt, it was of making a machine that would take away most mechanical or mundane tasks; now, the dream is of completely automating them without any human intervention.
Now that we’ve mostly achieved mechanisation in manufacturing, let us see what it takes to reach a completely autonomous level of operation.
A machine gains autonomy only when it can learn to adjust to the situations it is assigned; it has to be flexible. Note that this is a departure from the automation we have known, which is rule-based and rigid. So, if we want the machine to learn by itself, we need to pump in a certain level of intelligence.
Intelligence cannot be imaginative in nature; it has to be derived from facts relevant to the context. If intelligence has to be acquired from facts, we need to tap into a data system. The more data, and especially the more failure data, the more learned the machine becomes and, subsequently, the higher the probability of an accurate decision.
This is very similar to how an experienced professional who has gone through the ups and downs of life makes better decisions than an inexperienced one.
Since we are creating that intelligence and pushing it into the machine, we call it Artificial Intelligence, unlike humans who naturally acquire it.
So, if our goal (let’s call it the TO-BE state) is a fully autonomous system, whether a machine, a group of machines or processes for that matter, we should start by looking at where we stand, the ‘baseline’ (let’s call it AS-IS), and the steps that need to be taken to reach the goal. Obviously, we would like to (i) keep the spend* as low as possible and (ii) acquire as robust a system as possible, one that is scalable, idiot-proof** and future-proof.
*Tip: Spend is not only the initial investment. It includes the running costs, unscheduled breakdowns, maintenance, spares, replacements and upgrades, personnel training, etc. (TCO – Total Cost of Ownership).
**Tip: Remember not every person on the shop floor is skilled.
As they say, ‘What is not measured cannot be managed.’ Similarly, what is not managed cannot be improved. Hence, it is important to measure at the beginning of the digitalisation journey.
Tip: The best part of this digital journey is that you realise a return on investment (RoI) at every stage and can stop or postpone at any stage you want.
Let’s say we are at a stage where we record and collect some of the data manually. Some of them are collected and stored in standalone systems like Excel or ERP, or MRP tools. The first thing we need to do is to find ways to (a) digitalise and (b) automate (as much as possible) the data collection.
Automation of data collection would eliminate costly human errors and free up person-hours which could be put to better use. For example, if you are manually recording the “material in” into the production space from the store, we could (a) digitalise the same by adding simple processes like bar-coding (an inexpensive solution) and (b) channel all material through a single gateway (or a conveyor, if it already exists in your factory) with a bar-code reader (could be the handphone***) which is connected to a central server – this is done for automating the data collection.
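As a minimal sketch of such automated capture, the snippet below logs a scanned “material in” barcode to a central database. The database file, table and column names are illustrative assumptions, and a real gateway would likely post over the network rather than write to a local file:

```python
import sqlite3
from datetime import datetime, timezone

def record_material_in(db_path, barcode, gateway_id):
    """Log one scanned 'material in' event to the central database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS material_in "
        "(barcode TEXT, gateway TEXT, scanned_at TEXT)"
    )
    conn.execute(
        "INSERT INTO material_in VALUES (?, ?, ?)",
        (barcode, gateway_id, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    conn.close()

# A scan at the single gateway: no manual transcription, no human error.
record_material_in("factory.db", "8901234567890", "gate-1")
```

The point is not the tooling but the pattern: every scan becomes a timestamped record the moment it happens.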
Network connectivity between different assets to one centralised or a few servers is necessary to achieve digitalisation.
***Tip: Always look for simpler and robust solutions. Try to use common devices and be software-centric as much as possible to avoid maintenance costs and costs due to unscheduled breakdowns.
Let’s call this the first stage of digitalisation.
Once you have the data captured and sent to your server or database, you can visualise or monitor it and, by setting some simple rules, check for cleanliness and outliers. You may check the data in their native formats or bring all the relevant data onto a dashboard.
Let’s call this stage visualisation.
In this stage, you find out if a particular machine is down or a material is in short supply. If you were gathering the data manually, you might not catch the short supply well in advance, as manual data compilation takes time.
You might already be saving some money when you reach this stage by plugging in leakages.
In the case of processes, the cost savings could be much more pronounced when we get to this stage.
In this stage, we could go a step further and automate the notification of an outlier (something abnormal). This could mean a particular stakeholder getting an email notification about, say, a breakdown or depleting inventory.
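A simple rule of this kind might look like the sketch below, which flags any item whose stock has fallen below its reorder level. The items, thresholds and the `notify` callback are illustrative assumptions; in practice `notify` would send an email (for example via `smtplib`) to the stakeholder:

```python
# Hypothetical reorder levels per stock item.
REORDER_LEVELS = {"steel_rod": 50, "bearing_6204": 200}

def check_inventory(stock, notify):
    """Compare current stock against reorder levels; call notify() on each breach."""
    alerts = []
    for item, level in REORDER_LEVELS.items():
        qty = stock.get(item, 0)
        if qty < level:
            msg = f"{item}: {qty} on hand, below reorder level {level}"
            notify(msg)  # e.g. fire off an email to the purchasing manager
            alerts.append(msg)
    return alerts

alerts = check_inventory({"steel_rod": 30, "bearing_6204": 500}, print)
```

The rule runs the moment the data lands, so the warning arrives days before a manual compilation would have surfaced it.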
Once we have clean data from various sources stored in different silos^^, the next step is to gather them all together and create connections between the different data tables through common parameters.
Inexpensive tools such as Monarch do a fabulous job connecting to innumerable data sources, reading them in native formats, collecting and arranging them in tables that could be manipulated for further analyses, which we’ll talk about later.
Let us call this stage data aggregation.
Tip: Like it or not, you’ll be collecting different kinds of data on multiple databases which are disconnected from each other.
For example, the payroll data might reside in simple Excel files, while the MRP data might be in MS SQL, the inventory in a proprietary database, and POs and invoices in PDFs. Reading the native formats and bringing them all into a unified database is no easy task.
The aggregated data can be used for various diagnoses, including Root Cause Analysis (RCA). Various what-if scenarios can be simulated to find the appropriate corrective measure, and the impact of different parameters on cost and quality can be studied through simulations using in-built algorithms.
Some of these advanced analyses could also be used for prognosis, or predictive analytics. Using predictive analytics, machine breakdowns can be predicted hours or days before the failure. OEE (Overall Equipment Effectiveness) and similar metrics help in understanding quality and capacity-enhancement possibilities.
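OEE itself is straightforward to compute from the aggregated data: it is the product of availability, performance and quality. The shift figures below are illustrative:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = availability x performance x quality."""
    availability = run_time / planned_time                      # uptime share
    performance = (ideal_cycle_time * total_count) / run_time   # speed vs ideal
    quality = good_count / total_count                          # good-part yield
    return availability * performance * quality

# An 8-hour shift (480 min): 400 min actually running, 0.5 min ideal
# cycle time, 700 parts produced, 680 of them good.
score = oee(480, 400, 0.5, 700, 680)  # roughly 0.71, i.e. ~71% effective
```

A world-class benchmark is often quoted around 85%, so a score like this immediately shows where capacity is being left on the table.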
Let us call this stage diagnosis and prognosis.
When we know what is going wrong or what could be better, we want to tweak the process for better output and profitability. Advanced AutoML algorithms can be taught to self-learn and adjust appropriately to situations.
In some situations, finding a solution might not be that easy, or we would like to try out multiple options and adopt the most optimal one.
If we stop production to try out different options, we waste machine and operator cycles. Instead, experimenting on a digital replica (model) of your shop floor might prove effective.
The digital replica is referred to as Digital Twin. The Digital Twin could be either very sophisticated and expensive or simplistic yet functional. Regardless, deep insights could be obtained, which expensive tryouts won’t be able to provide.
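Even a simplistic twin can be useful. The toy model below pretends to predict parts per hour for different machine settings, so the options can be ranked on the model instead of on the real line; the formula and the penalty factor are entirely illustrative assumptions, not a real shop-floor model:

```python
def twin_throughput(cycle_time_s, buffer_size):
    """Toy digital-twin model: estimated parts/hour for a given setting."""
    base = 3600 / cycle_time_s                       # ideal hourly rate
    stall_penalty = 0.04 * max(0, 5 - buffer_size)   # starvation with small buffers
    return base * (1 - stall_penalty)

# Candidate settings: (cycle time in seconds, buffer size).
options = [(12, 2), (12, 5), (15, 5)]
best = max(options, key=lambda o: twin_throughput(*o))
# Every option is "tried" on the model; only the winner goes to the floor.
```

A sophisticated twin would be a physics- or data-driven simulation, but the workflow is the same: evaluate many options cheaply, deploy one.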
Whichever way we find the solution, the next step is to control or take corrective measures.
Since we have digitalised and interconnected our enterprise, or at least a key part of it, we might as well create a digital interface and remotely control the process or the assets (machines and other equipment).
Nevertheless, remotely controlling all the assets can be expensive. Where remote control is too costly, instructions can instead be given to the shop floor to effect the change based on the findings.
If the situation warrants and we have the Digital Twin to simulate and find the optimal solutions, the corrective measures could be applied in near real-time, thus saving costs. With a small incremental effort, a self-learning framework may be set up.
Let us name this stage control and correction.
When we reach the stage where we can diagnose, prognose, find optimal corrective measures, and communicate with or control the machine remotely, all it takes is to automate the whole process.
The decision-making framework using AI and ML could take direct action based on the insights obtained from the above steps. At this level, when the system has become self-diagnostic, a self-healing framework could be applied.
This is when the system becomes autonomous.
Below is a quick representation of the above steps:
Stay tuned for the third article in this four-part series. We will dive into digital transformation opportunities across the enterprise and understand the role of data analytics in helping SMEs make better decisions.
Image credit: stnazkul
The post Digital transformation for SMEs, Part 2: Understanding its maturity cycle appeared first on e27.