Design IPs have become increasingly complex, and verifying them involves considerable challenges. Constrained random tests generate random stimuli, and the DUT behaviour is checked against a reference model. This abstract outlines typical problems faced in metrics reporting during the different phases of IP verification and the evolution of a process to address them.

Throughout the verification process, teams need to demonstrate consistent progress in verifying the design. Tests are ranked by how much they contribute to coverage. Ranked seeds from the previous regression are collated into a golden test list, which is run in every regression cycle in addition to the new random tests. Tests that contribute new coverage are added to the golden test list, which grows over time.
Ranking on the full random regression introduces both passing and failing tests into the golden test list, which in turn reduces the pass percentage. In this early phase, ranking on functional coverage metrics is beneficial, since functional coverage can be reported on passing tests only.
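As an illustration of that ranking step, the Python sketch below greedily ranks passing tests by how many new functional coverage bins each adds, then grows the golden test list from the result. The TestRun record, its fields, and the bin-set coverage model are simplifying assumptions for illustration, not a specific vendor's ranking engine.

```python
from dataclasses import dataclass, field

@dataclass
class TestRun:
    name: str
    seed: int
    passed: bool
    hit_bins: set = field(default_factory=set)  # functional coverage bins hit

def rank_by_contribution(runs):
    """Greedily pick the passing test that adds the most not-yet-covered
    bins; stop once no candidate contributes new coverage."""
    candidates = [r for r in runs if r.passed]  # early phase: passing tests only
    covered, ranked = set(), []
    while candidates:
        best = max(candidates, key=lambda r: len(r.hit_bins - covered))
        gain = len(best.hit_bins - covered)
        if gain == 0:
            break  # remaining tests add no new coverage
        ranked.append((best, gain))
        covered |= best.hit_bins
        candidates.remove(best)
    return ranked

def grow_golden_list(golden, ranked):
    """Append newly contributing (test, seed) pairs; the golden list only grows."""
    for run, _gain in ranked:
        if (run.name, run.seed) not in golden:
            golden.append((run.name, run.seed))
    return golden
```

The greedy pick is the classic set-cover heuristic: each iteration keeps the test with the largest marginal coverage gain, so the head of the ranked list carries most of the regression's coverage.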
As the testbench matures, the need to rank passing tests on all metrics (code coverage as well as functional coverage) becomes more important. A unified regression reporting flow was developed to perform ranking on passing tests. Identifying all failing tests and maintaining their failing seeds helps classify the kinds of failures seen in a regression. Verification plans can be loaded on top of the coverage databases to report the mapping of requirements to functional coverage. All the metrics generated so far may then be fed into in-house tools such as tStatus, which produces graphs of weekly progress, and TriCE, which helps classify failure categories.
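The failure-classification half of such a flow might look like the hedged sketch below, which buckets failing runs by log signature and emits a summary that downstream reporting tools could consume. The signature patterns, category names, and the (passed, log_text) result shape are placeholders; an in-house tool such as TriCE would own the real failure taxonomy, and tStatus the progress graphs.

```python
import re
from collections import Counter

# Placeholder signatures; the real taxonomy lives in the in-house tooling.
FAILURE_PATTERNS = [
    ("uvm_fatal",  re.compile(r"UVM_FATAL")),
    ("scoreboard", re.compile(r"(?i)scoreboard.*mismatch")),
    ("assertion",  re.compile(r"(?i)assertion.*(failed|error)")),
    ("timeout",    re.compile(r"(?i)watchdog|timeout")),
]

def classify_failure(log_text):
    """Return the first failure category whose signature matches the log."""
    for category, pattern in FAILURE_PATTERNS:
        if pattern.search(log_text):
            return category
    return "unclassified"

def regression_summary(results):
    """results: list of (passed, log_text) tuples for one regression.
    Produces the pass percentage plus a per-category failure breakdown
    that weekly-progress graphs could be drawn from."""
    fails = [log for passed, log in results if not passed]
    buckets = Counter(classify_failure(log) for log in fails)
    return {
        "total": len(results),
        "pass_pct": 100.0 * (len(results) - len(fails)) / max(len(results), 1),
        "failure_buckets": dict(buckets),
    }
```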
Key points covered include:
- Tests are ranked by how much they contribute to coverage.
- A unified regression reporting flow was developed to perform ranking on passing tests.
- The fastest-running ranked tests are added to a ‘mini regression’ (sketched below).
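One possible shape for that mini-regression selection is sketched here: walk the coverage-ranked golden list in order of runtime and keep the fastest tests that fit a wall-clock budget. The budget default, the (test, seed) pairing, the test names, and the runtime table are illustrative assumptions, not part of the flow described above.

```python
def build_mini_regression(golden_list, runtime_s, budget_s=3600.0):
    """Keep the fastest-running ranked tests that fit a wall-clock budget.

    golden_list: coverage-ranked (test_name, seed) pairs
    runtime_s:   dict mapping (test_name, seed) -> runtime in seconds
    """
    by_speed = sorted(golden_list, key=lambda t: runtime_s[t])
    mini, spent = [], 0.0
    for test in by_speed:
        if spent + runtime_s[test] > budget_s:
            break  # sorted ascending, so nothing later fits either
        mini.append(test)
        spent += runtime_s[test]
    return mini

# Example: a quick smoke list drawn from three hypothetical ranked seeds.
golden = [("axi_rand_test", 7), ("axi_burst_test", 42), ("axi_err_test", 3)]
times  = {("axi_rand_test", 7): 120.0,
          ("axi_burst_test", 42): 950.0,
          ("axi_err_test", 3): 4000.0}
print(build_mini_regression(golden, times, budget_s=1200.0))
# -> [('axi_rand_test', 7), ('axi_burst_test', 42)]
```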