Large manufacturers producing high volumes of “widgets” are running into the same issue -- ensuring both quantity and quality goals are met. Manufacturers need to know when a part coming off the production line doesn’t meet the required quality level. The longer this issue remains undetected, the more challenging and expensive it is to fix, because…
1) At the factory itself, more components with faults can be created.
2) If the end customer discovers the faulty components, the supplier relationship suffers, in addition to the costs and time delays of replacing the faulty items.
3) If the components are assembled into end equipment that then fails in deployed systems, the ramifications are even larger: recalls and negative market visibility for everyone involved.
Combating the issue
One approach that has been deployed for years is batch testing. Its results are traced back (somewhat manually) to the manufacturing environment and workers to understand root causes and make changes to processes.
Many manufacturers are exploring how to improve quality in a proactive and more effective manner. The arrival of competent, cost-effective machine learning has made it possible to review a camera feed in real time and have failing widgets identified and tagged, either physically or virtually. Our experience indicates there are some false positives, but this is far better than failing parts making their way to end customers. As the algorithms improve, the rate of false positives will fall.
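The inspection loop described above can be sketched in a few lines. This is a minimal illustration, not a production system: the `defect_score` function and its threshold are placeholder assumptions standing in for a trained vision model.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    widget_id: str
    pixels: list  # grayscale intensities in [0.0, 1.0]

def defect_score(frame: Frame) -> float:
    """Placeholder for a trained vision model: the fraction of dark
    pixels serves as a crude stand-in for a defect probability."""
    dark = sum(1 for p in frame.pixels if p < 0.2)
    return dark / len(frame.pixels)

def inspect(frames, threshold: float = 0.3):
    """Score each frame as it arrives and tag failing widgets.

    Returns the IDs of widgets flagged for review. A lower threshold
    catches more defects at the cost of more false positives -- the
    trade-off noted in the text.
    """
    return [f.widget_id for f in frames if defect_score(f) >= threshold]

# Two synthetic frames: one mostly bright (pass), one mostly dark (flag).
good = Frame("W-001", [0.9] * 90 + [0.1] * 10)
bad = Frame("W-002", [0.9] * 40 + [0.1] * 60)
print(inspect([good, bad]))  # → ['W-002']
```

In a real deployment the scoring model would run on an edge device beside the line, with the threshold tuned against the plant’s tolerance for false positives.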
So, where is the industry heading?
One possibility is calculating a decision from multiple data streams. Digital transformation has been talked about for 10 years. Every manufacturer is either trialing a system for real-time quality or predictive maintenance, or has started a limited deployment of these systems. Broad deployments, such as a manufacturer rolling out a system across every production line in every factory worldwide, are not yet a reality. Today’s systems typically make decisions based on a single input, but there is strong market interest in collecting information and deriving insights from a broader set of sensors (vibration, functional test, visual, pressure).
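Fusing those sensor streams into a single pass/fail decision could look like the following sketch. The sensor weights and threshold here are illustrative assumptions; a real system would learn them from historical quality data.

```python
# Anomaly weight per sensor stream (assumed values, not a standard).
WEIGHTS = {"vibration": 0.3, "functional_test": 0.3,
           "visual": 0.25, "pressure": 0.15}

def quality_decision(scores: dict, threshold: float = 0.5) -> bool:
    """Combine per-sensor anomaly scores (each in [0, 1]) into one
    pass/fail decision. Returns True (pass) when the weighted sum of
    anomalies stays under the threshold; missing sensors count as 0."""
    combined = sum(w * scores.get(name, 0.0) for name, w in WEIGHTS.items())
    return combined < threshold

# One widget's readings: visual is suspicious, but the other streams
# are clean, so the fused decision still passes.
readings = {"vibration": 0.2, "functional_test": 0.1,
            "visual": 0.9, "pressure": 0.3}
print(quality_decision(readings))  # → True
```

The point of fusion is visible in the example: a single noisy stream (here, visual) no longer decides the outcome on its own, which is one way to reduce the false positives mentioned earlier.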
This is where edge computing comes to the forefront: a factory can generate so much data that sending all of it to the cloud is simply too expensive and slow. Customers appear to be following one of two paths for their initial explorations:
- Demonstrate value with a quick proof of concept (POC) using resources internal to the factory. Once value has been demonstrated, use cloud resources to scale the capability.
- Harness current IT processes, isolate the OT network using hypervisor technology, and start immediately with the cloud to support the POC.
Most have taken the first approach: the scope of the test is narrower and the learning ramp shorter, so projects driven by the operators tend to start here. Where POC decisions are made top-down, the mandate is more often that cloud connectivity must be part of the POC work. Nearly all recognize the need to harness the cloud at some point in the future. The interesting element to watch will be which system functionality is processed where. Customers are concerned about security, data privacy, costs and potential lock-in associated with the cloud approach. The reality is that most of these concerns are workable. It is relatively straightforward, for example, to create a multi-cloud strategy for the IT/OT elements of the business.
Lastly, expect consolidation of functions. Today, these real-time quality systems are provided as an add-on. In the future, production line automation systems will include this functionality to support real-time quality and predictive maintenance. With rapid advancements in artificial intelligence (AI) algorithms and a shift toward more flexible manufacturing environments that can be adjusted to build different widgets, the system will need new software every few months. This keeps the hardware viable in the field for a decade.
The architecture this type of system requires is referred to as “mission critical edge”: securely combining the scaling benefits of IT infrastructure with the reliability and deterministic real-time behavior of embedded platforms. The result for the end customer is a platform that is flexible, cost-effective, scalable and secure.
Wherever the industry heads, it will be important for manufacturers to understand the options available to uphold widget quality and protect their relationships with customers.