Guide to writing a verification testplan

The most important factor in writing a good testplan is… writing it!

Write a test plan!

Even if your manager doesn’t want you to spend the time, at least spend a few hours developing bullet points that cover all the features of the device, all the interfaces (and their features), and all the use modes. Some argue that you can only learn a design, system, specification, etc. by doing. The answer to that is that writing a testplan is doing! Having a better idea of the whole problem will help you avoid bad decisions at the start of the project.

The randomization trap

Don’t fall into the trap of “letting the randomization catch it”. It’s a common problem these days, because test plans in the past often just defined each directed test and what it would do (an approach that doesn’t translate well to a fully random testbench). A well-written plan will outline the situations, sequences, and corner cases that need attention, not just the knobs to be randomized.
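
For instance, a plan entry that lists only knobs is weaker than one that also names the scenarios you actually care about. A minimal sketch of the difference, written as plain Python data with hypothetical feature and field names:

```python
# Knobs alone: hopes the corner case falls out of randomization.
weak_plan_item = {
    "feature": "ingress FIFO",
    "randomize": ["packet_length", "inter_packet_gap"],
}

# Better: the scenarios are named explicitly, so they can be targeted,
# covered, and checked off even if pure randomization never hits them.
strong_plan_item = {
    "feature": "ingress FIFO",
    "randomize": ["packet_length", "inter_packet_gap"],
    "directed_scenarios": [
        "max-length packet arrives while FIFO is one entry from full",
        "back-to-back minimum-gap packets across a clock-domain crossing",
    ],
    "coverage": "cover FIFO full/empty transitions under each scenario",
}
```

The named scenarios can then be hit with directed stimulus or constraints, and confirmed with functional coverage rather than hope.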

Things to watch for

Pay particular attention to interconnects, be they physical interfaces or more functional “interfaces” (i.e. protocols that are implemented across multiple blocks). When one part of a feature is implemented in one place, by one person, and another part of the feature is implemented in another place by another person (or even another team), the chances of mistakes are very high! Document those high-risk areas.

Don’t forget to test error cases. We want to make sure that the design does the right thing, but we also want to make sure that it can handle the situation (or be easily recovered) when something else does the wrong thing.

Identify when error testing is crucial. Error testing is sometimes left until the end (“in case we don’t get time”). But there are times when you will identify that errors will happen in the normal use of the design; those cases are crucial (and often difficult) to get right.
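
As an illustration, here is a minimal error-injection test sketched with cocotb (a Python testbench framework, assuming a cocotb 1.x environment). The DUT signals (clk, pkt_valid, crc_err, resp_ok) and the send_packet helper are hypothetical stand-ins for whatever your design and environment provide:

```python
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge


async def send_packet(dut, beats):
    """Minimal driver stub: hold a (hypothetical) valid strobe for `beats` cycles."""
    dut.pkt_valid.value = 1
    for _ in range(beats):
        await RisingEdge(dut.clk)
    dut.pkt_valid.value = 0
    await RisingEdge(dut.clk)


@cocotb.test()
async def corrupt_then_recover(dut):
    """Inject a CRC error mid-stream, then check the design still accepts traffic."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

    await send_packet(dut, beats=4)   # normal traffic first

    dut.crc_err.value = 1             # inject the error case
    await RisingEdge(dut.clk)
    dut.crc_err.value = 0

    await send_packet(dut, beats=4)   # the crucial check: recovery
    await RisingEdge(dut.clk)
    assert dut.resp_ok.value == 1, "design did not recover after CRC error"
```

Whatever framework you use, the point is that the error is scheduled deliberately, in the middle of normal traffic, rather than hoping randomization stumbles onto it.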

Keep it up to date

A testplan is a living document: it helps you see what needs to be done and how to do it, but it will always need updating as new things are learned, the design changes, or the schedule changes. Keep it as up to date as you can.

How to write it

Every company, manager, and engineer has their own requirements for what should go in a testplan. Don’t get bogged down in what is “right” and what is “best”. The right and best thing to do is to write a testplan, so you are already most of the way there.

A full product testplan and schedule (for pre-silicon verification) should cover who, what, where, when & how.

  • who: which engineer(s) will do the work
  • what: the definition of what needs to be tested
  • where: the environment in which they will do it: block level, sub-system level, SoC level, or emulation
  • when: the schedule
  • how: a description of how they plan to test it

In most cases you will be writing a sub-part of this entire plan, and we are not going to cover the schedule, so that leaves your testplan to include the what, where, and how.

Where is the testbench description.  Usually your testplan will only cover one testbench, so a good starting point is a brief description of that testbench. It (usually) doesn’t need to be complete documentation of the entire testbench down to every function call. Just provide enough for someone new to get oriented so they can then find things on their own: a diagram, a list of files (or their location), and a description of what each part of the testbench does.

What are the features of the device.  I like to break the device down into categories. In some cases these things will overlap; that’s fine, you can refer to another section for the how. It’s always good to list out everything in each category to help make sure you have covered everything. And I can’t stress enough how important it is to identify and concentrate on interfaces or functional boundaries that are shared with other modules: when two people/devices/modules/concepts need to be in sync, that is the most likely place for bugs. (One way to capture this breakdown is sketched after the list below.)

  • Large Features / Product Requirements
    • sub-features
      • sub-sub features etc
  • Interfaces to other modules
    • including shared protocols (i.e. a functional interface, not just a physical interface)
    • sub-features within the interfaces
  • Use Cases
    • Programmable Registers (CSRs)
    • Programmable Descriptors (memory based descriptors)
    • How the end product will use the system
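
One lightweight way to capture such a breakdown is as structured data, so it can later feed coverage or tracking scripts. A minimal sketch in Python, with entirely hypothetical feature names (a spreadsheet or YAML file works just as well):

```python
testplan = {
    "features": {
        # Large features, then sub-features, then sub-sub-features.
        "dma_engine": {
            "scatter_gather": ["descriptor chaining", "zero-length descriptor"],
            "arbitration": ["priority ordering", "starvation under load"],
        },
    },
    "interfaces": {
        "axi_slave": ["back-to-back writes", "outstanding reads", "error responses"],
        # A functional interface shared with another block/team: flag it high risk.
        "flow_control_protocol": {"shared_with": "mac_block", "risk": "high"},
    },
    "use_cases": {
        "csr_programming": ["reset defaults", "illegal field combinations"],
        "descriptors": ["ring wrap-around", "reprogram mid-transfer"],
        "end_product": ["boot-time configuration", "low-power entry/exit"],
    },
}
```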

How will you test that feature?  This section will depend a lot on the device. Some important things to remember:

  • Write the scope of what needs to be tested in this feature
  • Write the assumptions
  • Write the pass/fail criteria
  • Note the high-risk areas
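
Pulling those four points together, one possible shape for a per-feature entry (hypothetical content, continuing the Python sketch above):

```python
plan_entry = {
    "feature": "descriptor ring wrap-around",
    "scope": "wrap with 1, 2, and N outstanding descriptors; ECC covered elsewhere",
    "assumptions": "ring base/size CSRs are programmed legally (checked separately)",
    "pass_fail": "scoreboard sees every payload exactly once, in order; "
                 "no checker errors and no watchdog timeout at end of test",
    "high_risk": "wrap coinciding with back-pressure from the bus interface",
}
```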

Author: Nigel C

I am a co-founder of Lateral Sands, based in Silicon Valley, where I manage our employees and customers. In my spare time I also work as a verification consultant for Lateral Sands. If you are looking for a contractor position at a Fortune 500 semiconductor company, contact me!
