Yield analysis and faulty wafer detection are examples of tasks that would be tedious or impossible to program a computer to do by hand.
Instead, we design computer programs that learn these skills for us from data.
This process is called Machine Learning, and it is changing manufacturing industries all over the world.
Machine Learning is capable of noticing patterns that people miss. Machines can sift through overwhelming amounts of data, and correlate events across time spans and different data sources simply too big for humans to understand.
Machine Learning helps organizations manage complexity, change, and uncertainty through process improvements that deliver tangible benefits for manufacturers.
Manufacturers deploying Machine Learning have used it to increase efficiency in areas ranging from binning and predictive maintenance to optimizing manufacturing costs and improving product quality.
In industry after industry, Machine Learning has been proven to increase production capacity while lowering material consumption rates.
Though Machine Learning promises to provide a seemingly endless list of benefits, getting there from here isn’t easy.
Implementing Machine Learning involves a number of challenges. The core challenge is that in order to make use of Machine Learning, one must know what questions to ask of one’s data. Figuring this out is the job of data scientists.
The Importance of Ease-of-Use in Data Science
Not all organizations have data scientists on staff, and building a data science team can be a long journey: data scientists are among the most in-demand professionals in the world right now. Such organizations are increasingly able to turn to vendors who provide pre-canned algorithms.
Each new predictive or analytic capability incorporated into the standardized tool chest of commercial, off-the-shelf Machine Learning solutions increases the strategic importance of in-house data science teams: once a capability is available to everyone, competitive advantage shifts to the questions only your own team can ask.
Organizations that have assembled data science teams can discover which questions they should be asking of their data, and can craft increasingly accurate and helpful algorithms to answer them.
Standing up the infrastructure required to run those algorithms on the data, however, can be a showstopper. Data science teams often aren’t infrastructure specialists. Furthermore, they are frequently overwhelmed with requests for new algorithms, and for the most part, they just need the infrastructure problem to go away.
Machine Learning in Practice
Consider for a moment the common manufacturing dilemma of burn-in testing. Burn-in testing is used to detect components which would fail early on in a product’s life cycle.
Traditionally, components likely to fail were detected simply by increasing the duration of burn-in testing. This would catch the bulk of the failures associated with the early peak of the bathtub curve. (The bathtub curve describes the pattern of failure rates common in manufacturing: high at the start of life, low and flat through mid-life, and rising again as components wear out.)
Detection of components that are likely to fail is achieved by testing components at artificially high stress levels, the theory being that components that begin to fail when operated just beyond their design limits would have failed early on when exposed only to designed-for stresses.
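The bathtub curve can be sketched numerically. The parameters below are purely illustrative assumptions, chosen only to reproduce the curve’s characteristic shape: a decaying infant-mortality term, a constant random-failure rate, and a growing wear-out term.

```python
import math

def bathtub_hazard(t, infant=0.05, decay=200.0, random_rate=0.001,
                   wear=1e-12, wear_exp=2):
    """Illustrative hazard (failure) rate at time t (hours):
    infant mortality decays exponentially, random failures are
    constant, and wear-out grows polynomially with time."""
    return infant * math.exp(-t / decay) + random_rate + wear * t ** wear_exp

# The failure rate is high at t=0, dips through mid-life, and rises
# again as wear-out sets in -- the classic bathtub shape.
early = bathtub_hazard(0)       # dominated by infant mortality
mid = bathtub_hazard(2_000)     # mostly the constant random-failure rate
late = bathtub_hazard(50_000)   # wear-out term dominates
```

Burn-in testing, in these terms, is an attempt to push shipped components past the early, steep part of the curve before they reach the customer.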
Companies such as automotive OEMs therefore demand that their suppliers apply burn-in testing to every chip they ship. Chip manufacturers, however, are concerned that extensive burn-in testing actually degrades the reliability of their products, and so they are looking for ways to reduce it.
Machine learning techniques can be brought to bear to identify components that are statistically unlikely to experience the “infant mortality” that burn-in testing seeks to eliminate.
These components can then bypass burn-in testing altogether; since the stress of burn-in itself consumes part of a component’s useful life, skipping it for low-risk components actually extends the life of the product.
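As a hedged illustration of this routing decision (the feature names, weights, and threshold below are hypothetical assumptions, not Optimal Plus’s actual models), each unit’s infant-mortality risk can be scored from its parametric test data, with only high-risk units sent to burn-in:

```python
# Hypothetical sketch: route units to burn-in based on an
# infant-mortality risk score derived from parametric test data.
# Feature names, weights, and the threshold are illustrative.

UNITS = [
    {"id": "U1", "leakage_ua": 0.2, "vth_shift_mv": 3.0},
    {"id": "U2", "leakage_ua": 4.5, "vth_shift_mv": 22.0},
    {"id": "U3", "leakage_ua": 0.4, "vth_shift_mv": 5.0},
]

def risk_score(unit, w_leak=0.15, w_vth=0.03):
    # In this toy model, higher leakage current and a larger
    # threshold-voltage shift both correlate with early-life failure.
    return w_leak * unit["leakage_ua"] + w_vth * unit["vth_shift_mv"]

RISK_THRESHOLD = 0.5  # illustrative cut-off

needs_burn_in = [u["id"] for u in UNITS if risk_score(u) >= RISK_THRESHOLD]
skips_burn_in = [u["id"] for u in UNITS if risk_score(u) < RISK_THRESHOLD]
```

In a real deployment the score would come from a trained model over far more features; the point is that the model’s output drives a per-unit routing decision.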
Additionally, analyzing the data can determine where in the production life cycle components are failing. Sensor data associated with that part of the production cycle can be examined to identify possible environmental or procedural causes.
It is through these sorts of layered insights that Machine Learning can save organizations money. Tangible savings can be had not only by failing marginal components before burn-in testing or by bypassing testing for components that clearly don’t need it.
Savings can also be had by identifying components which would experience enough damage during burn-in testing to bring their operational lifespan below the warranty period, and then bypassing burn-in testing for these components.
Machine Learning can deliver these practical savings with little or no impact on outgoing product quality, which is measured in Defective Parts Per Million (DPPM).
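DPPM itself is a simple ratio, sketched here for concreteness (the counts in the example are made up):

```python
def dppm(defective_units, shipped_units):
    """Defective Parts Per Million: the standard outgoing-quality metric."""
    return defective_units * 1_000_000 / shipped_units

# e.g. 7 field failures out of 2,000,000 shipped parts
rate = dppm(7, 2_000_000)  # 3.5 DPPM
```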
Machine Learning is more than lofty goals and fuzzy promises about analytics, though: it delivers concrete, measurable results in key applications.
Infrastructure Comes in Layers
In order to take advantage of Machine Learning, a number of infrastructure components have to be put in place. Machines, testing facilities, and even business processes must be instrumented, and this data collected. The data collected must be structured and stored; it must then be indexed and made available for querying.
Infrastructure which can harmonize data and execute complex queries must be stood up. This infrastructure must be securely connected to the assembled data.
It must be capable of coping with significant compute demand, as both indexing and algorithm training are extremely taxing.
To benefit from Machine Learning in manufacturing, organizations need to deploy the central and edge systems that execute the algorithms, and tie those systems into manufacturing processes so that the algorithms can truly impact manufacturing.
In the burn-in reduction example above, the system needs to physically separate the devices requiring burn-in from those which can skip burn-in.
This is particularly difficult to do in today’s distributed and outsourced manufacturing environments.
Finally, the infrastructure needs to provide tools and systems to monitor the entire process, refresh stale Machine Learning models, and alert when unexpected results occur.
Ease of Use
Since the infrastructure itself is complex and Data Scientists’ time is valuable, it is important that the tools used to control the infrastructure and to create, train, and test algorithms be simple to use and hide the underlying complexity as much as possible.
Data Scientists don’t want to spend their time fetching and harmonizing data or fiddling with complex deployment scripts. They want to focus on their key competence: creating Machine Learning models.
Optimal Plus provides a suite of Machine Learning infrastructure components that provide end-to-end Machine Learning capabilities throughout the manufacturing process.
This infrastructure is automated so that it scales as needed, is backed by a Service Level Agreement (SLA), and is built to enable data science teams to proceed without having to worry about the details of the underlying infrastructure.
Optimal Plus offers multiple tools to handle data input, rules to govern the deployment of algorithms and models, as well as robust monitoring and alerting.
Reporting tools allow both the examination of real-time analytics and the perusal of historical data, including the results of applying previous algorithms.
Many of the biggest names in the semiconductor industry take advantage of Machine Learning infrastructure from Optimal Plus, enabling valuable connections between organizations throughout the value chain.
Machine learning and big data are both genuinely difficult, in practically every way: the necessary data science skills are rare among industry professionals, the data volumes are immense, and the infrastructure required to work on that data is complex.
Optimal Plus can eliminate the infrastructure challenge, freeing you and your organization to focus on the data science.