Table of Possibilities with Code Coverage and Functional Coverage

Managing code coverage and functional coverage is a complex task, especially for large SoC designs, and it becomes particularly daunting when you have to manage this type of data for multiple SoC designs in progress at the same time. The problem is compounded by the fact that such SoCs often leverage highly configurable silicon IP, which can take on multiple forms from one SoC project to the next.

One problem that I’ve had to deal with when managing functional coverage and code coverage results is how to respond to situations where you may, for example, have met your target number for code coverage yet have not achieved adequate functional coverage, or vice versa.

Before we get into suggestions on how to interpret and handle these situations, let’s list out the possibilities.

Here is a table you can use to judge the state of verification of your design based on functional coverage and code coverage:

                                 Functional Coverage
                            Not Satisfactory    Satisfactory
  Code       Not Satisfactory   Situation I       Situation II
  Coverage   Satisfactory       Situation III     Situation IV

Situation I: Code Coverage is Not Satisfactory and Functional Coverage is Not Satisfactory

This situation is akin to starting at 0. If you had high coverage for both your code and your functional scenarios and then suddenly your coverage drops, then likely you are in one or more of the following situations:

  • Your infrastructure has faltered. Check to make sure your simulations are successfully compiling, launching, running, and completing.
  • Your design or your testbench has undergone some significant change. Trace through the changes in your revision control system. If it turns out that some errant commit destroyed your functional coverage, you may have to require achieving some basic level of code coverage and functional coverage before you allow commits to your revision control repository. You may also want to institute a code review process and perform lint checks to make sure the code will continue to work as expected over time.
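A minimal Python sketch of such a pre-commit coverage gate; the threshold values, and the idea of expressing both metrics as percentages, are assumptions for illustration:

```python
# Hypothetical pre-commit coverage gate: block commits that would drop
# coverage below agreed floors. The threshold values are assumptions.

CODE_COV_FLOOR = 70.0   # minimum acceptable code coverage, in percent
FUNC_COV_FLOOR = 60.0   # minimum acceptable functional coverage, in percent

def coverage_gate(code_cov: float, func_cov: float) -> tuple[bool, list[str]]:
    """Return (ok, reasons): ok is False if either metric is below its floor."""
    reasons = []
    if code_cov < CODE_COV_FLOOR:
        reasons.append(f"code coverage {code_cov:.1f}% is below {CODE_COV_FLOOR:.1f}%")
    if func_cov < FUNC_COV_FLOOR:
        reasons.append(f"functional coverage {func_cov:.1f}% is below {FUNC_COV_FLOOR:.1f}%")
    return (not reasons, reasons)

ok, reasons = coverage_gate(82.5, 41.0)
# here ok is False: functional coverage is below its floor, so the
# hook would reject the commit and print the reasons
```

In practice such a check would be wired into a server-side hook or CI job, with the two numbers parsed from your coverage tool’s report.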

If this is not some sudden drop in an already healthy coverage trend, but rather a premature plateauing of functional and/or code coverage progress, then it’s time to check your assumptions about the stimulus, about the definition of the functional coverage, and how these relate to the specification:

  • Check that the stimulus is not over-constraining the randomizations used to exercise the design.
  • Check that your functional coverage does not encode scenarios that are impossible or that are deemed unimportant to cover. Update your coverage to exclude the impossible scenarios. For scenarios deemed unimportant, place them in a separate covergroup, document that they are corner cases, and make sure the team (cross-functionally) understands that these are not “need-to-hit” coverage items. You can update your test plan to make this explicit.
  • Have a conversation with the design team to clarify any aspects of the specification associated with the stimulus and functional coverage. The goal should be to clarify the specification further, eliminating nebulous wording and getting to the heart of what the design really is and what the intent of a particular implementation is. You should extract enough detail to prioritize which features need to be hit and which features can either be deprioritized or in fact don’t exist at all (once the specification is made clearer).
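As a sketch of the kind of audit involved, the following Python snippet scans a simplified coverage report (the report structure and the covergroup and bin names are invented) and flags bins that were never hit; each flagged bin is a candidate for either a stimulus fix or an exclusion:

```python
# Sketch: audit a simplified, hypothetical coverage report for bins that
# were never hit. A zero-hit bin is either a stimulus gap, a symptom of
# over-constraint, or an impossible/unimportant bin that should be excluded
# or moved to a documented corner-case covergroup.

def find_zero_hit_bins(report: dict[str, dict[str, int]]) -> dict[str, list[str]]:
    """report maps covergroup name -> {bin name: hit count}.
    Returns covergroup -> list of bins with zero hits."""
    holes = {}
    for covergroup, bins in report.items():
        missing = [name for name, hits in bins.items() if hits == 0]
        if missing:
            holes[covergroup] = missing
    return holes

report = {
    "cg_burst_len": {"len1": 812, "len4": 97, "len16": 0},
    "cg_arbitration": {"grant_a": 55, "grant_b": 48},
}
# flags cg_burst_len/len16 for review
```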

Situation II: Code Coverage is Not Satisfactory and Functional Coverage is Satisfactory

Your test plan is inadequate, your functional coverage model does not include all of the relevant scenarios embedded in your HDL model, or there is extraneous code in your HDL model that needs to be removed or pragma’ed off.

For highly configurable silicon IP, you may be simulating a hardware configuration for which your test environment has not yet been configured in kind, which would enable testing the distinct features of that configuration.

Essentially, “there is more ‘there’ there.”[¹] There is code that is being exercised, but the abstract scenarios supported by this code have not been identified and written into a functional coverage model. Or this code is extraneous: maybe it’s instrumentation logic used to aid software simulations, emulation, or some other synthetic purpose. (I’m not referring to DFT features, which are real features that need to be verified right along with the features of the chip that will operate in the wild.)

The goal is to find out what this code is for and where it is defined in the specification; if it is not in the specification, it should be pragma’ed or marked in some way to indicate that no functional coverage is required. This will also help the team quickly determine how to deal with any future drops in code coverage associated with this type of code, and prevent unnecessary debug attention from being directed at such code.

Situation III: Code Coverage is Satisfactory and Functional Coverage is Not Satisfactory

There are abstract scenarios that have been specified in the functional coverage model that have not been hit. Review to make sure they are still required. If not required, remove them or update them to focus on what’s really needed. If they continue to be required, then you need to update your stimulus to hit these scenarios. For example, maybe you forgot to cross running certain workloads in rarely achieved modes of operation.
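The workloads-crossed-with-modes example can be sketched in Python; the workload names, mode names, and the set of observed pairs are made up for illustration:

```python
from itertools import product

# Sketch of the workloads-x-modes cross: given the (workload, mode) pairs
# actually observed in simulation, list the cross bins still missing.
# Workload and mode names are invented for illustration.

workloads = ["idle", "streaming", "bursty"]
modes = ["normal", "low_power", "retention"]

observed = {("idle", "normal"), ("streaming", "normal"),
            ("bursty", "normal"), ("idle", "low_power")}

missing = [pair for pair in product(workloads, modes) if pair not in observed]
# every "retention" pair remains unhit, pointing at a stimulus gap in a
# rarely achieved mode of operation
```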

Situation IV: Code Coverage is Satisfactory and Functional Coverage is Satisfactory

Review your waivers for both your code and functional coverage to make sure they are valid. Hold a review to make sure all parts of your team are willing to sign off on your results. If new gaps are identified, update your waivers, your functional coverage model, and your test plan. Once you’ve done this, you have achieved functional coverage closure.
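The four situations can be summarized as a small decision table; here is one way to encode it in Python, with one-line paraphrases of the guidance above:

```python
# The four situations, keyed on whether each metric has met its
# (project-defined) target. The paraphrases are condensed from the
# discussion above, not a substitute for it.

SITUATIONS = {
    (False, False): "I: check infrastructure, stimulus, and spec assumptions",
    (False, True):  "II: coverage model or test plan misses code, or code is extraneous",
    (True,  False): "III: required scenarios unhit; fix stimulus or prune the model",
    (True,  True):  "IV: review waivers and sign off; coverage closure",
}

def classify(code_cov_ok: bool, func_cov_ok: bool) -> str:
    """Map (code coverage satisfactory?, functional coverage satisfactory?)
    to the corresponding situation."""
    return SITUATIONS[(code_cov_ok, func_cov_ok)]
```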

Closing Advice

This write-up may make achieving functional coverage closure seem simple. Don’t be deceived. Teams spend around 40% of all project time developing testbenches, writing tests, running simulations, generating reports, analyzing reports, and then taking action to achieve functional coverage closure.[²]

Especially in modern, highly configurable, IP-based chip designs, the following characteristics must be accounted for in the functional coverage model (in my opinion):

  • Your functional coverage model should encode the scenarios that you need to hit to adequately verify the IP, subsystem, or SoC
  • Your functional coverage model should be configurable, and must be automatically configured in tandem with the IP, subsystem, or SoC
  • Your coverage model should be precise: it should require satisfying only the scenarios deemed important based on a (cross-functional) review of product requirements and project goals, and it should not include other scenarios that are possible but not critical to project success
  • Your coverage model must be readable by a human, so that coverage holes can be quickly identified from a coverage report by any engineer, not just you, the original author of the model
  • Your coverage model should not be a resource hog: develop your coverage models so that (1) when coverage does not need to be collected, the model does not tax compute resources, and (2) the model can be enabled or disabled in the shortest time possible (no need to recompile to switch it on or off)
  • Your coverage model should be reusable across projects, so that an IP or subsystem that is reused in several different SoC projects can be quickly and precisely analyzed for any functional coverage closure issues
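As a sketch of the “automatically configured in tandem with the IP” idea, the following Python snippet derives the coverage model’s bins from the same parameters that configure the IP; the parameter and bin names are hypothetical:

```python
from dataclasses import dataclass

# Sketch: derive coverage bins from the same parameters that configure the
# IP, so the coverage model tracks each SoC's configuration automatically.
# The IpConfig fields and bin names are invented for illustration.

@dataclass
class IpConfig:
    num_channels: int
    fifo_depth: int
    has_ecc: bool

def coverage_bins(cfg: IpConfig) -> dict[str, list]:
    bins = {
        "channel": list(range(cfg.num_channels)),
        "fifo_level": ["empty", "mid", "full"],
    }
    if cfg.has_ecc:  # only require ECC scenarios when ECC is present
        bins["ecc_event"] = ["none", "corrected", "uncorrected"]
    return bins

small = coverage_bins(IpConfig(num_channels=2, fifo_depth=16, has_ecc=False))
# "ecc_event" is absent, so a configuration without ECC never reports a
# spurious ECC coverage hole
```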

That’s it! I hope that you found this post helpful. Please feel free to leave feedback.

[¹]: Gertrude Stein, but in reverse.

[²]: Wilson Research Group and Mentor Graphics, 2014–2018 Functional Verification Surveys

Data Scientist and Computer Architect. My views are my own.