The Challenge

Suppose that you are a non-profit based in New York City, like The New York Women’s Foundation or The Salvation Army, and you would like to get the word out about an event you have planned. But, like many non-profits, you have limited resources, so you would like to know where in the city you should place people whose mission is outreach: handing out fliers and brochures, giving pop-up presentations, and so on, to market this future event.

You have the presentations and other marketing materials ready. But a few questions remain:


Managing code coverage and functional coverage is a complex task, especially for large SoC designs, and it becomes particularly daunting when you have to manage this kind of data for multiple SoC designs that are in progress at the same time. This is complicated further by the fact that such SoCs often leverage highly configurable silicon IP, which can take on a different form from one SoC project to the next.
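For readers who haven’t worked with functional coverage, the sketch below gives a feel for the kind of data being managed. It is a minimal Python toy, not the commercial tooling this post has in mind; the covergroup name and bins are invented for illustration.

```python
class Covergroup:
    """Toy model of a functional covergroup: named bins with hit counts."""

    def __init__(self, name, bins):
        self.name = name
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        """Record an observed scenario against its bin, if one exists."""
        if value in self.hits:
            self.hits[value] += 1

    def report(self):
        covered = sum(1 for n in self.hits.values() if n > 0)
        print(f"{self.name}: {100.0 * covered / len(self.hits):.0f}% of bins hit")
        for b, n in self.hits.items():
            print(f"  {b}: {n} hit(s)")

# Illustrative bins for a configurable bus IP's burst lengths.
cg = Covergroup("burst_length", ["single", "burst4", "burst8", "burst16"])
for observed in ["single", "burst4", "single", "burst8"]:
    cg.sample(observed)
cg.report()  # burst16 never exercised -> 75% functional coverage
```

Multiply that by dozens of covergroups per IP, several configurations per IP, and several SoCs in flight, and the management problem becomes clear.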

One problem that I’ve had to deal with when managing functional coverage and code coverage results is how to respond to situations where you may (for example) have satisfied your target number for code coverage but yet…


In short: Because the tests are generated randomly.

When you have a test that can generate any 1 of 1000 different types of stimulus at any given moment, it is not safe to assume that permutation #379, which you consider necessary to confirm some vital functionality of the chip design, was in fact generated. Assuming all 1000 permutations of stimulus are equally likely, permutation #379 has only a 0.1% chance of being generated on any single draw, and even after 1000 random draws there is still roughly a 37% chance (0.999^1000 ≈ 0.37) that it never appears.
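A quick Monte Carlo sanity check confirms that arithmetic. This is a minimal Python sketch; the number of draws per test run is an assumption made for illustration.

```python
import random

N_PERMUTATIONS = 1000    # assumed size of the stimulus space
TARGET = 379             # the permutation we consider vital
DRAWS_PER_TEST = 1000    # assumed stimulus items generated per test run
N_TRIALS = 10_000        # Monte Carlo repetitions

misses = 0
for _ in range(N_TRIALS):
    # One test run: DRAWS_PER_TEST uniform random draws from the space.
    hit = any(random.randrange(N_PERMUTATIONS) == TARGET
              for _ in range(DRAWS_PER_TEST))
    if not hit:
        misses += 1

print(f"Observed miss rate: {misses / N_TRIALS:.3f}")
print(f"Analytic miss rate: {(1 - 1 / N_PERMUTATIONS) ** DRAWS_PER_TEST:.3f}")
```

Both numbers land near 0.37: even a long random run can silently skip the one scenario you care about.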

The diagram “Random Testing Suffers From Lack of Observability” depicts this circumstance: the state space of a design (the big circle), random…


TL;DR:

Over time chips became too complex to profitably design and manufacture.[¹]

By getting computers to do some “guided guessing” (i.e., constrained random generation of tests), chip design engineers were able to be luckier than before at finding critical bugs before a chip went into high-volume manufacturing, thus saving most chip design projects from near-certain failure.
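To make the “guided guessing” idea concrete, here is a minimal sketch in Python. Real flows typically express this in SystemVerilog constrained random; the packet fields and constraints below are invented for illustration. The point is that the generator draws randomly, but only over legal values, and weights the interesting corners of the space.

```python
import random

def constrained_random_packet():
    """One 'guided guess': a random stimulus item drawn under constraints.

    Illustrative constraints:
      - short packets are weighted heavily (corner cases live there)
      - addresses must be 4-byte aligned
      - the error-injection path fires ~5% of the time
    """
    if random.random() < 0.8:
        length = random.randint(1, 16)        # favored short-packet range
    else:
        length = random.randint(17, 256)      # occasional long packet
    addr = random.randrange(0, 2**16, 4)      # constraint: word-aligned
    inject_error = random.random() < 0.05     # constraint: rare error path
    return {"length": length, "addr": addr, "error": inject_error}

# Each call produces a random but legal, deliberately "interesting" stimulus.
for _ in range(3):
    print(constrained_random_packet())
```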

Longer Form

Over time chips became too complex to profitably design and manufacture.[¹]

The diagram below illustrates this phenomenon:


Verification is the process of taking an implementation of a chip at some level of abstraction and confirming that the implementation meets some specification or reference design.[¹]

The purpose of verification is to identify and correct design defects in the chip before it goes into manufacturing. There’s a verification step for each step in the chip design process, as shown in the diagram below.
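As a toy illustration of that definition, the Python sketch below compares a stand-in implementation against a reference model. Both functions are invented for illustration; in practice the implementation would be simulated RTL rather than a Python function.

```python
import random

def reference_adder(a, b):
    """Golden model: the behavior the specification calls for."""
    return (a + b) & 0xFF   # 8-bit adder; result wraps modulo 256

def dut_adder(a, b):
    """Stand-in for the implementation under test (e.g., simulated RTL)."""
    return (a + b) % 256

# Drive identical stimulus into both models and compare the outputs;
# any mismatch is a design defect caught before manufacturing.
random.seed(0)
for _ in range(1000):
    a, b = random.randrange(256), random.randrange(256)
    assert dut_adder(a, b) == reference_adder(a, b), f"mismatch at a={a}, b={b}"
print("Implementation matches the reference model on all sampled inputs.")
```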


I’m going to write a few blog posts about my chip design verification experience. I hope you will find them interesting.

If you have any feedback about what I write, feel free to comment.

This is what I’m going to write about in the coming posts:

Michael Green

Data Scientist and Computer Architect. My views are my own.
