Virtually every vehicle manufacturer in the world is either developing, purchasing, or investing in ADAS systems in order to bring autonomous vehicles into the market.
As this demand rises, so does the need for high-quality automotive camera modules.
ADAS systems are built using computer vision technology and act as the “eyes” of autonomous vehicles. Numerous cameras are embedded into the ADAS system, and, like every camera, they require proper lenses.
It’s important that each lens be high performing and reliable in order to guarantee the ADAS-equipped vehicle can seamlessly detect obstacles—such as other vehicles, pedestrians, trees, etc.—on the road.
Designers are constantly trying to create the highest performing ADAS products, but these products can be very difficult or expensive to produce.
Because of this, there needs to be a balance between performance, quality, and cost.
Data from the manufacturing process can be used to shorten both NPI and production time and to reduce cost through tools like big data analytics and machine learning, enabling manufacturers to build challenging-to-create products that perform exceptionally well.
In this article, we’ll discuss how incorporating machine learning can help you collect data from the supply chain, as well as gain insights from the assembly line that assist in producing high quality products manufactured with minimum scrap.
Additionally, we’ll explore what steps your business can take to become more data driven in order to ensure your ADAS systems are of high quality, inexpensive to create, and efficient to produce.
Testing your cameras before they are built
One of the greatest challenges manufacturers face when developing ADAS systems is determining how to properly align and calibrate the camera modules while maintaining the focus and alignment throughout the manufacturing process.
Unfortunately, it’s hard to tell which lenses will provide optimal performance until the lens is connected to the PCBA with the imager.
It’s too costly and time-consuming to test each lens individually, and you can’t assume that every lens you receive will be high performing. Some lenses may have better performance characteristics, such as MTF, while others have optical defects, like astigmatism or chromatic aberration. So, which lens characteristics will ensure high performance each and every time, allowing you to optimize the manufacturing and testing process, skip incremental testing, and guarantee customers the best ADAS cameras possible?
Using a machine learning model based on your manufacturing business’s historical data, you can refine the process of testing your cameras through a multivariate analysis.
In this process, you’ll minimize the failures that occur at the end of the assembly line, because the model, trained on past final-test and field data, predicts which lenses are likely to fail once integrated into the camera module.
Using machine learning to create cost-effective, high-quality automotive camera modules
Let’s go over what you can do to ensure your ADAS systems are high performing, inexpensive, and efficient to produce using machine learning.
Define your goals and inputs
Scrap within the assembly process can be reduced in three main ways:
- screening the lens performance
- improving the CMAT assembly time
- skipping tests based on CMAT and PCT data
Virtual screening can be done by using machine learning to correlate final test (in-house) and field data with the incoming lens inspection data. This saves time and reduces scrap in the assembly process by ensuring that only lenses with a high likelihood of passing the final tests, and of performing well in the field, enter the line.
Machine learning can help here by classifying the lens performance criteria that directly correlate to final test and field performance results.
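As a rough illustration of this classification step, the sketch below derives a screening threshold for a single lens metric from historical final-test outcomes. Center-field MTF is assumed as the metric here, and all field names and values are hypothetical; real screening would be multivariate.

```python
# Hypothetical lens-screening sketch: correlate an incoming-inspection
# metric (center-field MTF, assumed here) with historical final-test
# outcomes and derive a screening threshold. Data is illustrative.

def pass_rate(records, predicate):
    """Fraction of historical lenses matching `predicate` that passed final test."""
    matching = [r for r in records if predicate(r)]
    if not matching:
        return 0.0
    return sum(r["final_test_pass"] for r in matching) / len(matching)

def choose_mtf_threshold(records, candidates, target=0.95):
    """Lowest MTF cut-off whose screened population meets the target pass rate."""
    for t in sorted(candidates):
        if pass_rate(records, lambda r, t=t: r["mtf"] >= t) >= target:
            return t
    return None  # no candidate threshold meets the target

# Invented historical records: incoming MTF vs. final-test outcome.
history = [
    {"mtf": 0.62, "final_test_pass": 1},
    {"mtf": 0.41, "final_test_pass": 0},
    {"mtf": 0.58, "final_test_pass": 1},
    {"mtf": 0.45, "final_test_pass": 0},
    {"mtf": 0.70, "final_test_pass": 1},
]

threshold = choose_mtf_threshold(history, [0.4, 0.5, 0.6])  # → 0.5 for this data
```

Lenses below the derived threshold would be diverted before assembly rather than discovered as scrap at final test.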
CMAT Assembly Time
With machine learning and predictive analytics, you can create a better focus model that gauges which lens will provide the best performance in the camera across the entire FOV, reducing both takt time and scrap.
The next step is to correlate focus and MTF against the lens, as well as against the final test and field data.
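To sketch what such a focus model might look like, the example below fits a parabola to a handful of through-focus MTF samples and predicts the peak focus position, instead of sweeping the full actuator range on every unit. The sample values are illustrative, not real measurement data.

```python
import numpy as np

# Hedged sketch: predict the best focus position for a lens from a few
# through-focus MTF samples by fitting a quadratic, rather than stepping
# the actuator through every position on the line.

def predicted_best_focus(positions, mtf):
    a, b, _c = np.polyfit(positions, mtf, 2)  # mtf ≈ a*z^2 + b*z + c
    return -b / (2 * a)                        # vertex of the fitted parabola

positions = [0.0, 1.0, 2.0, 3.0, 4.0]          # actuator steps (arbitrary units)
mtf       = [0.20, 0.50, 0.60, 0.50, 0.20]     # measured center-field MTF

focus = predicted_best_focus(positions, mtf)   # ≈ 2.0 for this symmetric curve
```

Correlating the predicted focus and resulting MTF against final-test and field data then tells you how much the sampling can be reduced without hurting yield.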
Skip Tests in Assembly Line
Use a quality index—based on machine learning—to establish which parts of the camera module can skip burn-in and PCT. This can reduce costs and improve time management. The quality index provides a score, if you will, on the likelihood of failure of each part of the product—and, therefore, allows you to truly gauge whether a test is required or can simply be skipped.
In other words, products with a high quality index don’t need to go through rigorous testing because you’ll know they will perform well.
Deploy your model
Using machine learning is a powerful tool to optimize assembly time, improve product performance in the field, and reduce scrap.
Let’s discuss this in more detail. Not too long ago, developing a model was a laborious process: collecting and preparing the data, creating and training the model, building the infrastructure, and finally deploying the model into the factory.
While this may not sound like much, the devil is in the details and this effort would be a herculean task. However, today there are many options available to automate and shorten this process.
The biggest advantage is utilizing existing tools and platforms, such as OptimalPlus, AWS, and Google Cloud, to quickly get your models into your factories. In some cases, you may not even need to develop your own model; you can use models that have already been created and are available for training on your data.
That’s the best part about having access to these platforms: the freedom to decide whether you want a fully customized solution or high-performance rapid deployment.
Regardless of the path you choose, the most difficult and time-consuming part of this whole process is collecting, harmonizing, and preparing the data.
To alleviate the tedious parts and make the whole process more efficient, OptimalPlus has developed a solution suite targeted at ADAS cameras, with the goal of reducing scrap while increasing factory efficiency. The suite spans data collection and preparation, ML models, and a rules engine that closes the loop by creating actions based on the ML models’ output.
As an example of scrap reduction on cameras, we created a random forest-based model that was able to connect incoming lens characteristics to PCT and final test performance. The goal was to make sure that only material that was most likely to pass all subsequent tests made it into the line.
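A minimal sketch of that idea, using scikit-learn’s `RandomForestClassifier` on synthetic stand-in data; the feature names and distributions are invented here, since the real model was trained on production lens, PCT, and final-test data:

```python
# Sketch of the random-forest screening idea: learn a mapping from
# incoming-lens characteristics to final-test pass/fail on historical
# data, then admit only lenses predicted likely to pass. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic history: [mtf, astigmatism, chromatic_aberration] per lens.
good = rng.normal([0.65, 0.02, 0.01], 0.02, size=(200, 3))
bad  = rng.normal([0.45, 0.08, 0.05], 0.02, size=(200, 3))
X = np.vstack([good, bad])
y = np.array([1] * 200 + [0] * 200)   # 1 = passed final test

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Screen an incoming lens: admit it only if it is predicted to pass.
incoming = np.array([[0.64, 0.03, 0.01]])
admit = bool(model.predict(incoming)[0])
```

The same trained model can be queried per lens at the incoming-inspection station, so rejects never consume assembly time.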
As with most machine learning models, the power really lies in the model’s ability to learn and improve over time with more data via supervised learning. So even if the model’s performance isn’t perfect at first, don’t worry: it can improve.
One last interesting point: we were able to use the multivariate analysis capabilities of our platform to find root causes, such as epoxy lots, that contributed to a higher-than-desired scrap rate. In any case, the ultimate goal is to save time and money by removing the need for certain tests, improving production efficiency.
OptimalPlus open platform for proper product lifecycle analytics
A lot of data is required to properly shift left and minimize scrap within your production line. The right tools need to be put in place in order to properly collect that data, analyze it, and turn those analytics into actionable business objectives and tasks.
Using the OptimalPlus open platform, ADAS manufacturers can properly analyze data from each step of the supply chain—and utilize that data to conduct predictive analysis on camera lenses, modules, and other parts of the ADAS device.
You need to understand the type of problem you’re trying to solve, create the model, train it for the use case, and evaluate its performance using the defined metrics. Today, there are a wide variety of machine learning models being used, including gradient boosting machine, ordinary least squares, random forest, generalized linear modeling, and convolutional neural networks.
However, in order to get to a completed model, there’s a lot of work that needs to be done, including collecting, harmonizing, and preparing the data.
This is a very time-consuming and costly endeavor, and it needs to be planned ahead of time so that enough data is collected to train the model to a statistically significant level.
Once the data is ready to be used, you can move on to coding the model using tools like Jupyter Notebook and online tools like those available in AWS (SageMaker, Lambda, etc.).
Once you have the model performing to acceptable standards, what do you do with it? It’s a bit like cooking an amazing dinner, but then just leaving the food in the kitchen.
You need to be able to feed the model back into the edge in the factory so that you can create rules and limits that will monitor the health of your product and processes.
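As a hypothetical illustration of such edge rules (this is a sketch of the concept, not the actual OptimalPlus Rules Engine API; the rule names and limits are invented), a model-derived limit can be checked on every unit as it comes off the station:

```python
# Hypothetical edge-rule sketch: apply model-derived limits to each
# unit's measurements in real time, rather than post-hoc. The last rule
# consumes the deployed ML model's predicted failure probability.

RULES = [
    ("mtf_min",       lambda m: m["mtf"] >= 0.50),
    ("focus_window",  lambda m: abs(m["focus_error_um"]) <= 5.0),
    ("fail_prob_max", lambda m: m["predicted_fail_prob"] <= 0.05),
]

def evaluate(measurements):
    """Return the names of violated rules; an empty list releases the unit."""
    return [name for name, ok in RULES if not ok(measurements)]

unit = {"mtf": 0.55, "focus_error_um": 2.1, "predicted_fail_prob": 0.12}
violations = evaluate(unit)   # only the model-based rule trips here
```

Because the check runs as each unit is measured, a violated rule can stop or divert material immediately instead of letting a bad batch run through the line.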
Again, there are many ways to do this, such as using the OptimalPlus Rules Engine. However, you need to have some way of utilizing the rules in real time, rather than in a post-hoc fashion, as finding problems well after the products have gone through will cause a lot of unnecessary scrap and lost time. How does all this connect to an ADAS camera though?