Automatic, Validated Debug of Regression Failures

The number of test failures to debug has grown due to constrained random testing, larger ASIC projects and automated testing. In order to verify these larger ASICs, we run more tests more frequently than ever before. As a result, debugging is now by far the largest of all the verification tasks in an ASIC project.

Today it is possible to automate debugging, but for any automation to be useful in a live project the results have to be robust. This can be achieved by validating the debug analysis: if the analysis is correct, it should be possible to make the failing test pass by automatically modifying the code. Only when the test failure has been made to pass has the debug analysis been validated.
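The validation idea can be sketched as a simple check: a diagnosis that blames a specific code change is trusted only if applying the suggested fix (for example, reverting the blamed commit) turns the failing test into a passing one. This is a minimal illustrative sketch, not Verifyter's actual implementation; all function names are assumptions.

```python
# Sketch of validated debug: trust a diagnosis only if the suggested
# fix makes the failing test pass. Names are illustrative assumptions.

def validate_diagnosis(run_test, apply_fix, revert_fix):
    """Return True only if the suggested fix turns the failure into a pass."""
    if run_test():
        return False   # test already passes; there is nothing to validate
    apply_fix()        # e.g. revert the commit the debug analysis blamed
    try:
        return run_test()  # validated only if the test now passes
    finally:
        revert_fix()   # leave the code base as we found it

# Toy demonstration: a "bug" flag stands in for a bad commit.
state = {"bug_present": True}
validated = validate_diagnosis(
    run_test=lambda: not state["bug_present"],
    apply_fix=lambda: state.update(bug_present=False),
    revert_fix=lambda: state.update(bug_present=True),
)
print(validated)  # → True: reverting the blamed change made the test pass
```

Applying a fix that does not address the real cause leaves the test failing, so `validate_diagnosis` returns False and the incorrect diagnosis is rejected rather than reported.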

Daniel Hansson of Verifyter presented how to achieve automatic, validated debug of regression failures at the DVClub Europe conference “Debug” on 24 May 2016.

You can view the slides and recordings here.