Monday, September 26, 2005

Developing a Test Specification


Definitions

I’ve seen the terms “Test Plan” and “Test Specification” mean slightly different things over the years. In a formal sense (at least as I use them today), we can define the terms as follows:


  1. Test Specification – a detailed summary of what scenarios will be tested, how they will be tested, how often they will be tested, and so forth, for a given feature. Examples of a given feature include Intellisense, Code Snippets, Tool Window Docking, and the IDE Navigator. Trying to include all Editor features or all Window Management features in one Test Specification would make it too large to read effectively.
  2. Test Plan – a collection of all test specifications for a given area. The Test Plan contains a high-level overview of what is tested (and what is tested by others) for the given feature area. For example, I might want to see how Tool Window Docking is being tested. I can glance at the Window Management Test Plan for an overview of how Tool Window Docking is tested, and if I want more info, I can view that particular test specification.
If you ask testers on other teams what the difference is between the two, you might receive different answers. In addition, I use the terms interchangeably all the time at work, so if you see me using the term “Test Plan”, think “Test Specification.”


Parts of a Test Specification

A Test Specification should consist of the following parts:


  • History / Revision – Who created the test spec? Who were the developers and Program Managers (Usability Engineers, Documentation Writers, etc.) at the time the test spec was created? When was it created? When was it last updated? What were the major changes in the last update?
  • Feature Description – a brief description of what area is being tested.
  • What is tested? – a quick overview of what scenarios are tested, so people looking through this specification know that they are in the right place.
  • What is not tested? – Are there any areas covered by other people or other test specs? If so, include a pointer to those test specs.
  • Nightly Test Cases – a list of the test cases and a high-level description of what is tested each night (or whenever a new build becomes available). This bullet merits its own blog entry. I’ll link to it here once it is written.
  • Breakout of Major Test Areas – This section is the most interesting part of the test spec: it is where testers arrange test cases according to what they are testing. Note: in no way do I claim this to be a complete list of all possible Major Test Areas. These areas are examples to get you going.
  • Specific Functionality Tests – Tests to verify the feature works according to the design specification. This area also includes verifying error conditions. (A minimal sketch of such tests follows this list.)
  • Security Tests – any tests related to security. An excellent source for populating this area is the book Writing Secure Code.
  • Accessibility Tests – This section shouldn’t be a surprise to any of my blog readers. <grins> See The Fundamentals of Accessibility for more info.
  • Stress Tests – This section talks about what tests you would apply to stress the feature.
  • Performance Tests – This section includes verifying any performance requirements for your feature.
  • Edge Cases – This is something I do specifically for my feature areas. I like walking through books like How to Break Software, looking for ideas to better test my features, and I jot those ideas down in this section.
  • Localization / Globalization – tests to ensure you’re meeting your product’s international requirements.
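
To make the Specific Functionality Tests bucket concrete, here is a minimal sketch of one functionality test alongside its error-condition counterparts. The snippet-expansion feature and the expand_snippet function are invented for illustration (they are not a real Visual Studio API), and I’m using Python with pytest only as a stand-in for whatever test framework your team uses.

```python
import pytest

# Hypothetical feature under test: a snippet expander that replaces a
# shortcut ("for") with its code template. All names here are invented
# for illustration.
SNIPPETS = {"for": "for (int i = 0; i < length; i++)\n{\n}"}


def expand_snippet(shortcut):
    if not shortcut:
        raise ValueError("shortcut must be a non-empty string")
    if shortcut not in SNIPPETS:
        raise KeyError(f"no snippet registered for {shortcut!r}")
    return SNIPPETS[shortcut]


def test_known_shortcut_expands_to_template():
    # Functionality test: the behavior matches the design specification.
    assert expand_snippet("for").startswith("for (int i = 0;")


def test_unknown_shortcut_reports_an_error():
    # Error-condition test: bad input is rejected, not silently ignored.
    with pytest.raises(KeyError):
        expand_snippet("nosuchsnippet")


def test_empty_shortcut_is_rejected():
    # Error-condition test: the empty string is a distinct failure mode.
    with pytest.raises(ValueError):
        expand_snippet("")
```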

Setting Test Case Priority

A Test Specification may have a couple of hundred test cases, depending on how the test cases were defined, how large the feature area is, and so forth. It is important to be able to query for the most important test cases (nightly), the next most important (weekly), the next after that (full test pass), and so forth. A sample prioritization for test cases might look like this:


  1. Highest priority (Nightly) – Must run whenever a new build is available

  2. Second highest priority (Weekly) – Other major functionality tests run once every three or four builds

  3. Lower priority – Run once every major coding milestone
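
One way to make those priorities queryable is to tag each test case and filter on the tag at run time. Here is a minimal sketch using pytest markers; the marker names (nightly, weekly, milestone) and the test cases themselves are my own invention, and any test-case management system with queryable tags would serve equally well.

```python
import pytest

# Register these markers (e.g., in pytest.ini) so pytest does not warn
# about unknown marks:
#
#   [pytest]
#   markers =
#       nightly: must run whenever a new build is available
#       weekly: major functionality, run every three or four builds
#       milestone: run once every major coding milestone


@pytest.mark.nightly
def test_tool_window_docks_to_left_edge():
    # Highest priority: core behavior, verified on every new build.
    ...


@pytest.mark.weekly
def test_tool_window_remembers_last_dock_position():
    # Second tier: major functionality, but not build-blocking.
    ...


@pytest.mark.milestone
def test_docking_layout_survives_settings_import():
    # Lower priority: exercised during a full test pass.
    ...
```

With the tags in place, the nightly run is just pytest -m nightly, the weekly run is pytest -m "nightly or weekly", and a full test pass drops the filter entirely.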

Author - saraford

http://blogs.msdn.com/saraford/archive/2004/05/05/249135.aspx
