Designed for test floors performing wafer sort or final test operations, this feature-rich IIoT solution can be integrated into any manufacturing or engineering environment where time-sensitive response is critical, including OSATs, IDMs, foundries and fabless companies. It provides real-time views of the testers, automatic detection of equipment- or process-related issues, and a wide selection of operational reports such as equipment utilization.

Improved IIoT Efficiencies

Test Floor Ops takes decision making on the test floor a step beyond, offering production and engineering facilities a complete, self-contained solution for driving in-the-moment IIoT efficiency from every station.

Early Detection Capabilities

Optimal+ Test Floor Ops detects a tester malfunction on a given wafer as early as the fifth or sixth die, ensuring that equipment failures are identified and acted upon before they become costly.

Real-Time Action

Activating automated rules in real time on each specific tester, Test Floor Ops identifies equipment problems and failures as they arise and initiates tester pause/shutdown before they become costly.

Driving Deeper Insights

Test Floor Ops enables semiconductor companies to act on insights derived from data generated by all production floor equipment, maximizing yield and productivity while ensuring greater product quality.

The Optimal+ Semiconductor Operations Platform Workflow

Our solutions are installed in 90% of the foundries and subcons serving the global semiconductor industry, enabling IDM and fabless teams to seamlessly collect, clean and collate their data sets directly from the source of their creation in preparation for extreme analytics and time-sensitive action. The data then goes through a multi-stage process that enables teams to manufacture actionable intelligence that drives every quantifiable performance metric, as described in the diagram below:


Comparing Test Time across a Fleet of Testers

Finding the Issue
A cross-entity rule compares different entities (such as testers, probe cards and load boards) and highlights equipment with significantly poor performance. Here, the rule triggers an alert when significant performance differences are detected between testers.

Performing the Analysis
In this instance, the rule monitors the average good bin test time and flags slower testers that result in low throughput. The specific testers are checked and found to have incorrect settings which negatively impact their performance.

Preventing Future Recurrences
A rule is created within the solution to generate an alert when similar conditions resurface.
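The core logic of such a cross-entity rule can be sketched in a few lines. The snippet below is an illustrative sketch only; the tester IDs, timing data and 20% tolerance are hypothetical assumptions, not part of the Optimal+ product or its APIs:

```python
from statistics import mean, median

def flag_slow_testers(good_bin_times, tolerance=0.20):
    """Flag testers whose average good-bin test time is significantly worse
    than the fleet.

    good_bin_times: dict mapping tester ID -> list of good-bin test times (s).
    A tester is flagged when its mean time exceeds the fleet median of
    per-tester means by more than `tolerance` (a fraction, hypothetical).
    """
    per_tester = {t: mean(times) for t, times in good_bin_times.items()}
    fleet_median = median(per_tester.values())
    return sorted(t for t, m in per_tester.items()
                  if m > fleet_median * (1 + tolerance))

# Hypothetical fleet data: tester "T7" is noticeably slower than its peers,
# which would show up as lower throughput on the floor.
fleet = {
    "T1": [1.00, 1.02, 0.98],
    "T2": [1.01, 0.99, 1.00],
    "T3": [1.03, 1.00, 0.97],
    "T7": [1.60, 1.55, 1.58],
}
print(flag_slow_testers(fleet))  # -> ['T7']
```

Using the median of per-tester means keeps a single misbehaving tester from dragging the fleet baseline toward itself, which a plain fleet mean would do.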


img_2 Comparing Test Time across a Fleet of Testers

Managing Online Retest Settings
Finding the Issue
An online retest dashboard detects that some of the testers are triggering more online retests than others.

Performing the Analysis
Normally, online retest rates are statistically similar across the test fleet. In this case, based on the retrieved data, the engineers realize that certain testers perform retest in a fundamentally different way. They check the online retest policy, which is managed by the prober recipe and triggered by specific bin occurrences, and find discrepancies in the settings between testers. Correcting the settings resolves the issue.
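The dashboard's comparison amounts to computing per-tester retest rates and flagging outliers. The following sketch assumes hypothetical counts and a hypothetical 2-percentage-point margin; it is not the product's actual dashboard logic:

```python
from statistics import median

def retest_rates(counts):
    """counts: dict tester ID -> (online_retests, total_dies_tested)."""
    return {t: retests / total for t, (retests, total) in counts.items()}

def flag_retest_outliers(counts, abs_margin=0.02):
    """Flag testers whose online retest rate differs from the fleet median
    by more than `abs_margin` (absolute fraction, hypothetical threshold)."""
    rates = retest_rates(counts)
    fleet_median = median(rates.values())
    return sorted(t for t, r in rates.items()
                  if abs(r - fleet_median) > abs_margin)

# Hypothetical data: T3 retests 9% of dies vs ~3% on its peers, the kind of
# gap that points at a misconfigured prober-recipe retest policy.
counts = {"T1": (30, 1000), "T2": (28, 1000), "T3": (90, 1000)}
print(flag_retest_outliers(counts))  # -> ['T3']
```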



img_2 Managing Online Retest Settings

Maximizing Yield through Probecard Performance Analysis

Finding the Issue
A high-level product-based report shows that a product is achieving lower than expected yield and higher than expected retest rates.

Performing the Analysis
By drilling down to individual tester performance within the Optimal+ solution, the engineer identifies, within a matter of minutes, the specific probe card causing the issue. The test house is notified and the probe card is removed for inspection.

Preventing Future Recurrences
A rule is created to automatically catch lots exhibiting high site-to-site yield discrepancies. The next time the problem occurs, an email alert is immediately sent to the user so that it can be resolved without delay.
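A site-to-site discrepancy rule of this kind can be sketched as follows. The site numbers, pass counts and 5% spread threshold below are hypothetical illustrations, not values from the Optimal+ solution:

```python
def site_yields(results):
    """results: dict site number -> (pass_count, total_count)."""
    return {site: passed / total for site, (passed, total) in results.items()}

def site_discrepancy_alert(results, max_spread=0.05):
    """Trigger when the gap between the best and worst site yields exceeds
    `max_spread` (hypothetical threshold). Returns (triggered, worst_site),
    so an alert can name the likely bad probe-card site."""
    yields = site_yields(results)
    worst_site = min(yields, key=yields.get)
    spread = max(yields.values()) - yields[worst_site]
    return spread > max_spread, worst_site

# Hypothetical lot: site 3 yields ~82% while its neighbors yield ~98%,
# the pattern the probe-card example above describes.
lot = {1: (980, 1000), 2: (975, 1000), 3: (820, 1000), 4: (978, 1000)}
triggered, worst = site_discrepancy_alert(lot)
print(triggered, worst)  # -> True 3
```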



Step 1 – Analyzing a probecard shows site 3 with consistently low yield



Step 2 – Creating a rule to catch site-to-site deviations for this product



Step 3 – Receiving an alert when the problem next occurs

img_2 Maximizing Yield through Probecard Performance Analysis

Using Rules to Catch ATE Issues

Finding the Issue
A by-8 probe card is failing on all sites in some touchdowns due to a tester issue. The problem is detected automatically by a pre-defined offline touchdown-monitoring rule; standard yield-monitoring mechanisms miss it because overall wafer yield remains above the acceptable threshold.

Performing the Analysis
The user receives an email alert minutes after wafer probing is completed, reviews the wafer-map tool and informs the test house. The wafer is re-probed, significant yield is reclaimed, and the user can view the results of the retest. The tester is then investigated to find the root cause of the problem.
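The key observation in this rule is that an all-sites-fail touchdown on a by-8 probe card points at the tester or interface rather than at eight independently bad dies. A minimal sketch of that check, with hypothetical touchdown data and a hypothetical zero-tolerance policy:

```python
def failing_touchdowns(touchdowns, num_sites=8):
    """touchdowns: list of (touchdown_id, set_of_failing_sites).
    Return IDs of touchdowns where every site failed: on a by-8 probe card,
    all eight dies failing together is a tester/interface signature."""
    return [td_id for td_id, fails in touchdowns if len(fails) == num_sites]

def touchdown_rule(touchdowns, num_sites=8, max_allowed=0):
    """Trigger when more than `max_allowed` all-fail touchdowns occur
    (hypothetical policy). Returns (triggered, offending touchdown IDs)."""
    bad = failing_touchdowns(touchdowns, num_sites)
    return len(bad) > max_allowed, bad

# Hypothetical wafer: touchdowns 3 and 5 fail on all 8 sites, while overall
# wafer yield stays high enough that a plain yield threshold never fires.
wafer = [
    (1, {3}),
    (2, set()),
    (3, set(range(1, 9))),
    (4, {2, 5}),
    (5, set(range(1, 9))),
]
triggered, bad = touchdown_rule(wafer)
print(triggered, bad)  # -> True [3, 5]
```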



img_2 Using Rules to Catch ATE Issues

Using Targets for Planning & Capacity
The Challenge
To accurately forecast the number of testers required over the next few months, planners need a mechanism to calculate the throughput of testers on their products (measured in Units per Hour – UPH). Planners want to make sure that the actual throughput of the testers matches the engineers’ expectations.

The Solution
A table of monthly targets is defined for the UPH measure in the Optimal+ solution, with a target UPH specified for each product being tested. A report is created to display the actual UPH (based on real test data) and compare it to the pre-specified target. The solution-generated view immediately highlights underperforming testers, which are then investigated, and plans are adjusted to allow for the shortfall.
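The target-versus-actual comparison reduces to computing each tester's UPH from unit counts and test hours and flagging shortfalls. The tester IDs, targets and 10% tolerance in this sketch are hypothetical, not values from the solution:

```python
def uph_report(actuals, targets, tolerance=0.10):
    """Compare actual throughput to planning targets.

    actuals: dict tester ID -> (units_tested, test_hours).
    targets: dict tester ID -> target UPH (units per hour).
    Returns a dict of testers whose actual UPH falls more than `tolerance`
    (hypothetical) below target, mapped to their actual UPH.
    """
    flagged = {}
    for tester, (units, hours) in actuals.items():
        uph = units / hours
        if uph < targets[tester] * (1 - tolerance):
            flagged[tester] = round(uph, 1)
    return flagged

# Hypothetical month: both testers target 500 UPH; T2 falls well short,
# so the capacity plan must absorb the shortfall.
targets = {"T1": 500, "T2": 500}
actuals = {"T1": (4900, 10), "T2": (3600, 10)}
print(uph_report(actuals, targets))  # -> {'T2': 360.0}
```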



img_2 Using Targets for Planning & Capacity