How do you know that the verification you have performed is sufficient? We have long used coverage (functional and code) to give feedback on stimulus quality, but this has a number of issues:
- There are only subjective ways to measure the quality of the functional coverage model
- It is VERY easy to reach 100% code coverage and still have numerous bugs in the design (e.g. missing functionality)
When it comes to checkers, there is no real way to measure quality or effectiveness unless you turn to “mutation testing”. Under this technique, bugs are deliberately inserted into the design to see if the verification environment can find them all. The technique has been automated through tool support, which generates metrics relating to the quality of both stimulus and checkers (for both static and dynamic verification approaches).
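The core idea can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (not any real mutation tool's API): a deliberately corrupted copy of a toy "design" is run against the same checks, and the test bench is judged by whether it kills the mutant. It also shows how a weak stimulus set lets a mutant survive, exposing a verification gap.

```python
def adder(a, b):
    """The 'design' under verification (a toy stand-in for RTL)."""
    return a + b

def adder_mutant(a, b):
    """A deliberately injected bug: '+' mutated to '-'."""
    return a - b

def checks_pass(dut, stimulus):
    """Run the checkers: True if every check passes (bug NOT detected)."""
    return all(dut(a, b) == a + b for a, b in stimulus)

good_stimulus = [(1, 2), (3, 4), (7, 0)]
weak_stimulus = [(0, 0)]  # a - b == a + b here, so this cannot expose the bug

assert checks_pass(adder, good_stimulus)        # original design is clean
print("killed" if not checks_pass(adder_mutant, good_stimulus) else "survived")
print("killed" if not checks_pass(adder_mutant, weak_stimulus) else "survived")
```

A surviving mutant points at a hole in either the stimulus or the checkers; a mutation tool automates this over many injected faults and reports the kill ratio as a quality metric.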
In the next DVClub, which will be held on Monday, 28th April 2014 in Bristol, Cambridge, Eindhoven, Grenoble, Sophia and by Remote Access, we will look at both the technology and the tools available for verifying your verification. The majority of the time will be spent looking at real user experiences of applying the technology. There will also be a presentation from a user who has developed his own tool for automating this approach. Register your place today!