Machine Learning Might Save Time on Chip Testing



Finished chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are particularly extensive and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?

Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out the subset of tests that are really needed and those that could safely be skipped. The NXP engineers described the process at the IEEE International Test Conference in San Diego last week.

NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.
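
To make the continue-on-fail idea concrete, here is a minimal sketch in Python of how such a flow might log results. The function, test names, and data layout are hypothetical illustrations, not NXP's actual test program; the point is simply that every chip runs the complete battery, so the full failure pattern is recorded even after early failures.

```python
# Illustrative sketch of continue-on-fail logging (hypothetical names and
# data layout; not NXP's actual test flow). Every chip runs every test,
# and all failures are recorded rather than stopping at the first one.

def continue_on_fail(chips, tests):
    """chips: list of dicts, each with a 'part_id' key.
    tests: dict mapping test name -> function(chip) -> bool (True = pass).
    Returns {part_id: set of failing test names}."""
    fail_log = {}
    for chip in chips:
        failures = set()
        for name, run_test in tests.items():
            if not run_test(chip):   # record the failure and keep going
                failures.add(name)
        fail_log[chip["part_id"]] = failures
    return fail_log
```

The resulting per-part failure sets are exactly the kind of "receipts" that a recommender-style analysis, described below, can mine for patterns.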

β€œWe have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual production and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. β€œWhat we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”

A Test Recommender System

Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. β€œWe took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. β€œInstead of a transaction receipt, we have a unique part identifier and instead of the items that a consumer would purchase, we have a list of failing tests.”

The NXP algorithm then discovered which tests fail together. Of course, the stakes in predicting whether a buyer of bread will also want butter are quite different from those in deciding whether a test of an automotive part at a particular temperature makes other tests unnecessary. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
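
As a rough illustration of that market-basket idea, one could estimate from historical fail logs how often a failure of one test is accompanied by a failure of another. This is only a sketch of the concept, not NXP's algorithm, whose statistical criteria are far stricter.

```python
from collections import Counter
from itertools import combinations

def cofailure_confidence(fail_log):
    """Given {part_id: set of failing test names}, estimate
    P(test B fails | test A fails) for every ordered pair of tests,
    in the style of market-basket association analysis."""
    single = Counter()   # how many parts failed each test
    pair = Counter()     # how many parts failed each pair of tests
    for failures in fail_log.values():
        for test in failures:
            single[test] += 1
        for a, b in combinations(sorted(failures), 2):
            pair[(a, b)] += 1
            pair[(b, a)] += 1
    return {(a, b): pair[(a, b)] / single[a] for (a, b) in pair}
```

A test whose failures are essentially always implied by failures of another, cheaper test would then be flagged as a removal candidate, subject to the kind of engineering review Shroff describes below.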

As rigorous as the results are, Shroff says that they shouldn’t be relied upon on their own. You have to “make sure it makes sense from an engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”

Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.

The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.

β€œAny novel solution that helps in test-time savings without any quality hit is valuable,” says Sriharsha Vinjamury, a principal engineer at Arm. β€œReducing test time is essential, as it reduces costs.” He suggests that the NXP algorithm could be integrated with a system that adjusts the order of tests, so that failures could be spotted earlier.

This post was updated on 13 November 2024.
