Guide to writing a verification testplan

The most important factor in writing a good testplan is… writing it!

Write a test plan!

Even if your manager doesn’t want you to spend the time, at least spend a few hours developing bullet points that cover all the features of the device, all the interfaces (and their features), and all the use modes. Some argue that you can only learn a design, system, specification, etc. by doing. The answer to that is: writing a test plan is doing! Having a better idea of the whole problem will help you avoid bad decisions at the start of the project.

The randomization trap

Don’t fall into the trap of “letting the randomization catch it”. It’s a common problem these days: test plans in the past often just defined each directed test and what it would do, and that doesn’t translate well to a fully random testbench. A well-written plan will outline the situations, sequences, and corner cases that need attention, not just the knobs to be randomized.
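For example, here is a minimal sketch of the difference between a knob that is merely randomized and a constraint that targets the corner cases a plan calls out (the class and field names here are hypothetical, not from any particular testbench):

    // Plain randomization: with len picked uniformly from 1..1500,
    // the boundary values 1 and 1500 are each hit ~0.07% of the time.
    class pkt_item;
      rand int unsigned len;
      constraint c_len { len inside {[1:1500]}; }
    endclass

    // Testplan-driven refinement: explicitly weight the corner cases
    // the plan calls out so they occur in a meaningful share of runs.
    class pkt_item_corners extends pkt_item;
      constraint c_corners {
        len dist { 1 := 10, [2:1499] :/ 30, 1500 := 10 };
      }
    endclass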

Things to watch for

Pay particular attention to interconnects, be they physical interfaces or more functional “interfaces” (i.e. protocols that are implemented across multiple blocks). When one part of a feature is implemented in one place, by one person, and another part is implemented in another place by another person (or even another team), the chances of mistakes are very high! Document those high-risk areas.
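One way to document and police such a boundary is to encode the shared contract as an assertion on the interface. Here is a hedged sketch, assuming a hypothetical valid/ready handshake (signals clk, rst_n, valid, ready); the real contract will depend on your protocol:

    // Contract: once valid is asserted it must hold until ready
    // accepts the transfer. When the two sides of a handshake are
    // owned by different people, this is exactly the kind of rule
    // each side quietly assumes the other one is following.
    property p_valid_held_until_ready;
      @(posedge clk) disable iff (!rst_n)
        (valid && !ready) |=> valid;
    endproperty
    a_valid_held: assert property (p_valid_held_until_ready)
      else $error("valid dropped before ready accepted the transfer");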

Don’t forget to test error cases. We want to make sure that the design does the right thing, but we also want to make sure that it can handle the situation, or be easily recovered, when something else does the wrong thing.

Identify when error testing is crucial. Error testing is sometimes left to the end, to be dropped “in case we don’t get time”. But there are times when you will identify errors that will happen in the normal use of the design; those cases are crucial (and often difficult) to get right.
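As a sketch of what “handles it or recovers” can look like as an automated check (the signal names err_inject and ready here are hypothetical):

    // After an injected error the DUT must return to a usable state
    // within a bounded number of cycles; hanging forever is a bug
    // even when no data is corrupted.
    property p_recovers_after_error;
      @(posedge clk) disable iff (!rst_n)
        err_inject |-> ##[1:100] ready;
    endproperty
    a_recovers: assert property (p_recovers_after_error)
      else $error("DUT did not recover within 100 cycles of an injected error");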

Keep it up to date

A test plan is a dynamic document: it helps you see what needs to be done and how to do it, but it will always need updating as new things are learned, the design changes, or the schedule changes. Keep it up to date as best you can.

How to write it

Every company, manager, and engineer has their own requirements for what should be in a testplan. Don’t get bogged down in what is right and what is best; the right and best thing to do is to write a testplan, so by writing one you are already most of the way there.

A full product testplan and schedule (for pre-silicon verification) should cover who, what, where, when & how.

  • who: the engineer(s) doing the work
  • what: the definition of what needs to be tested
  • where: the environment in which they will be doing it: block level, sub-system level, SoC level, emulation
  • when: the schedule
  • how: a description of how they plan on testing it

In most cases you will be writing a sub-part of this entire plan, and we are not going to cover the schedule, so that leaves your testplan to cover the what, the where, and the how.

Where is the testbench description. Usually your testplan will only cover one testbench, so a good starting point is a brief description of that testbench. It (usually) doesn’t need to be complete documentation of the entire testbench down to each function call; just provide enough for someone new to get an introduction from which they can find things on their own: a diagram, a list of files (or the location of the files), and a description of what each part of the testbench does.
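For example, the description can be as simple as a commented file listing (this layout is purely illustrative):

    tb/
      top_tb.sv    testbench top: clocks, resets, DUT instantiation
      env/         drivers, monitors, and the scoreboard
      seqs/        stimulus sequences, including error injection
      tests/       individual tests built on top of env/
      docs/tb.png  block diagram of the above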

What are the features of the device.  I like to break the device down. In some cases these categories will overlap; that’s fine, you can refer to another section for the how. It’s always good to list out everything in each category to help make sure you have covered everything. And I can’t stress enough how important it is to identify and concentrate on interfaces or functional boundaries that are shared with other modules: when two people/devices/modules/concepts need to be in sync, that is the most likely place for bugs.

  • Large Features / Product Requirements
    • sub-features
      • sub-sub-features, etc.
  • Interfaces to other modules
    • including shared protocols (i.e. a functional interface, not just a physical interface)
    • sub-features within the interfaces
  • Use Cases
    • Programmable Registers (CSRs)
    • Programmable Descriptors (memory based descriptors)
    • How the end product will use the system
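As a (purely hypothetical) example, a simple DMA controller might break down as:

  • Large features: scatter-gather transfers, channel arbitration, byte alignment
  • Interfaces: AXI master for data, APB slave for the CSRs, interrupt lines
  • Use cases: driver bring-up sequence, steady-state descriptor ring reuse, abort/teardown mid-transfer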

How will you test that feature?  This section will depend a lot on the device. Some important things to remember:

  • Write the scope of what needs to be tested in this feature
  • Write the assumptions
  • Write the pass/fail criteria
  • Note the high-risk areas
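Put together, a single feature entry can be as short as this (all details hypothetical):

    Feature:     channel arbitration
    Scope:       round-robin across 4 channels, plus the priority override CSR
    Assumptions: single clock domain; CSR access itself is verified elsewhere
    Pass/fail:   scoreboard order matches a reference arbitration model; no channel starves
    High risk:   priority override asserted while a transfer is in flight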

The goal of a Verification Engineer

Often the verification engineer has the impression that the job is to “find bugs”. While that’s true, it’s only part of the story. I believe the correct description of a verification engineer’s job is

designing a system that helps the RTL designer successfully develop a device to meet a specification.

There are important things to note from that description that will help you be successful:

  • Designing a System: Verification engineering IS a design job. You are designing a complex test environment. Often engineers feel it’s less glamorous than an RTL design job, but it’s usually more important and more difficult to do well.
  • Help the RTL Designer: Your immediate customer is the RTL designer. You need to help him or her create a good design.
    • Don’t forget the RTL designer’s needs. In a contract position it’s often the RTL designer(s) who will decide your long-term position. Design and verification form a team, and you will only succeed if the team succeeds.
    • The designer needs a tool that can produce specific situations as new features are added. While it may be easier to just randomize everything, it’s better to know what the designer needs to control and when they need it. You need the design schedule to create a verification schedule!
    • First, assume that any bug you find is in the testbench. Once you have put some effort into ruling that out, then let the designer know about it. It’s just good etiquette.
    • ALWAYS automate checks; don’t ever rely on visual checking except for a very short-term need. Having said that… it’s also good to visually check (the first time) to confirm that your automated checks are working. (A minimal scoreboard sketch appears at the end of this section.)
    • Don’t be afraid of making suggestions once you feel you have a reasonable understanding of what the device does and how it does it.
    • If a particular feature (design or verification) continues to have problem after problem, it’s worth suggesting that it be re-designed or re-evaluated.
    • Don’t be afraid to suggest that changes be made with verification in mind. A good design is a design that can easily be verified.
  • Successfully develop a device: Success is difficult to define, but at its very core it means developing a device that works, within a set schedule.
    • Schedule is almost always the most defining factor. All of the other factors end up being a trade-off with schedule.
    • Depending on the life-cycle of the product and/or the company, factors like future re-use, future enhancements, scalability, variations in end-products, and even sales will have some effect on what success means for your testbench design.
  • Meeting the specification: Don’t forget to read the specifications that the designer is trying to meet.
    • Don’t EVER trust that the designer(s) know what they are doing and/or have documented the device properly, even if you have the utmost respect for that person/team and their capabilities. Read the specifications yourself to make sure that the end product will work as expected by customers of the physical device.
    • Reading the specification will often also help you understand why things were done a certain way, as the designer’s documentation is rarely as thorough as a standard specification.
    • Don’t forget to ask for product specifications. Make sure the design meets management’s expectations.
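Finally, here is the kind of automated check referred to above: a minimal self-checking scoreboard sketch (the names scoreboard, predict, and check_item are illustrative, not from any particular methodology):

    // A reference model calls predict() for each value it expects;
    // the output monitor calls check_item() for every value the DUT
    // actually produces. Order and value are both checked.
    class scoreboard;
      int unsigned exp_q[$];  // expected values, in order

      function void predict(int unsigned value);
        exp_q.push_back(value);
      endfunction

      function void check_item(int unsigned actual);
        int unsigned expected;
        if (exp_q.size() == 0) begin
          $error("scoreboard: got %0d but nothing was predicted", actual);
          return;
        end
        expected = exp_q.pop_front();
        if (expected != actual)
          $error("scoreboard: expected %0d, got %0d", expected, actual);
      endfunction
    endclass

A check like this runs on every item of every test, which is exactly what visual inspection can never do.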