Thursday, September 20, 2007

Software Testing Best Practice Award


An email I received from IIST

===================================================

The International Institute for Software Testing is
giving away FIFTY days of free training.

Up to TEN companies may win. Each company will get five
days of free training.

Apply for the IIST's Software Testing Best Practice Award
at
http://www.iist.org/bestpractice and receive the following benefits:

**** Get your company featured as an Award Winning Company
In all IIST publicity channels and press releases
**** Tell everyone how great your test process is
**** Get 5 days of free training at the International
Testing Certification Super Week to be held in
Las Vegas, NV, November 26-30, 2007
Chicago, IL, March 24-28, 2008
See details at
http://iist.org/superweek
**** Get your company and your test team recognized and identified
as an Award Winning Team during these remarkable events


The International Testing Certification Super Week will be
held in the following cities:

Las Vegas, NV, November 26-30, 2007
Chicago, IL, March 24-28, 2008

During these events, IIST will offer 25 full-day
in-depth courses taught by leading industry experts
in software testing and quality.

**** Register by September 30th, and save 20% (Las Vegas)
See details at:
http://www.iist.org/stpw/lasvegas/index.php

**** Register by December 31st, 2007 and save 30% (Chicago)
See details at:
http://www.iist.org/stpw/chicago08/index.php

===================================================

Tuesday, September 11, 2007

Requirements Testing

Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well-written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.

Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.

The Quality Gateway

As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.

To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.

Make The Requirement Measurable

In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.

"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."

In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.

The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.

Quantifiable Requirements

Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.

Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
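As a sketch (in Python, with hypothetical names not taken from the article), a quantifiable quality measure of this kind can be written directly as an executable check against candidate solutions:

```python
# Sketch: a quantifiable quality measure as an executable check.
# The requirement: a customer enquiry must be answered within
# three minutes. The function name and usage are illustrative.

QUALITY_MEASURE_MINUTES = 3  # the limit agreed by the stakeholders

def fits_requirement(response_minutes: float) -> bool:
    """A solution fits the requirement only if the customer's
    enquiry is answered within the agreed quality measure."""
    return response_minutes <= QUALITY_MEASURE_MINUTES

# Test two candidate solutions against the measure:
print(fits_requirement(2.5))  # within three minutes -> fits
print(fits_requirement(4.0))  # longer than three minutes -> does not fit
```

The point of the sketch is that the measure divides all solutions into exactly two classes, with no room for argument about borderline cases.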

Non-quantifiable Requirements

Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.

Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.

An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.

Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.

Requirements Test 1

Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?

By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.

Requirements Test 2

Does the specification contain a definition of the meaning of every essential subject matter term within the specification?

When the allowable values for each term's attributes are defined, they provide data that can be used to test the implementation.
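For example (a minimal Python sketch; the term "order_status" and its values are invented for illustration), defined terms and their allowable values translate directly into test data:

```python
# Sketch: definitions of essential subject matter terms, with their
# allowable attribute values, become data for testing the implementation.
# The term and its values here are hypothetical.

DEFINED_TERMS = {
    "order_status": {"new", "confirmed", "shipped", "cancelled"},
}

def is_valid(term: str, value: str) -> bool:
    """Check a value against the definition of a term."""
    return value in DEFINED_TERMS.get(term, set())

print(is_valid("order_status", "shipped"))    # a defined value
print(is_valid("order_status", "misplaced"))  # not in the definition
```

Any implementation that accepts a value outside the defined set has, by this test, diverged from the specification.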

Requirements Test 3

Is every reference to a defined term consistent with its definition?

Requirements Test 4

Is the context of the requirements wide enough to cover everything we need to understand?

Requirements Test 5

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?

Requirements Test 5 (enlarged)

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?

Requirements Test 6

Is every requirement in the specification relevant to this system?

Requirements Test 7

Does the specification contain solutions posturing as requirements?

Requirements Test 8

Is the stakeholder value defined for each requirement?

Requirements Test 9

Is each requirement uniquely identifiable?

Requirements Test 10

Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
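One simple way to keep that tagging testable (a Python sketch; the requirement IDs and module names are hypothetical) is a traceability map that can be queried in both directions for impact analysis:

```python
# Sketch: tag each uniquely identified requirement to the parts of
# the system where it is used, then query the map when a change
# is proposed. IDs and module names are invented for illustration.

TRACEABILITY = {
    "REQ-001": ["order_entry", "billing"],
    "REQ-002": ["billing", "reporting"],
}

def impact_of_change(requirement_id: str) -> list:
    """Forward query: every part of the system a change affects."""
    return TRACEABILITY.get(requirement_id, [])

def requirements_touching(module: str) -> list:
    """Reverse query: which requirements a module implements."""
    return [rid for rid, mods in TRACEABILITY.items() if module in mods]

print(impact_of_change("REQ-002"))       # ['billing', 'reporting']
print(requirements_touching("billing"))  # ['REQ-001', 'REQ-002']
```

Whether the map lives in a requirements tool or a plain file matters less than that it exists and is kept current, so Test 10 can actually be answered.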

Conclusions

The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.

The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.

Testing starts at the beginning of the project, not at the end of coding. We apply tests to assure the quality of the requirements; then the later stages of the project can concentrate on testing for good design and good code. The advantage of this approach is that we minimise expensive rework by catching, or preventing, requirements-related defects early in the project's life.

References:

An Early Start to Testing: How to Test Requirements
Suzanne Robertson

Wednesday, September 05, 2007

Testing Webinar

I came to know about the EuroSTAR Webinar which is scheduled for Wednesday, 5th September! The topic is:
An Introduction to Testing on Agile Teams – The Practices & Beyond: presented by Antony Marcano, testingReflections.com, UK.

More Details follow:
======================
Date: Wednesday, 5th September, 2007
Time: 10:00 am London-Dublin / 11:00am CET
Duration: 30 minutes

Abstract: An increasing number of organisations are considering, or are in the process of, adopting Agile software development practices. Often, how testers integrate into this process is an afterthought. Worse still, organisations assume that it changes nothing about how testers function and operate. This couldn’t be further from the truth. In fact, a capable Agile team can change the very raison d’etre of a tester in all the ways that testers have often hoped for. No longer does the tester *need* to be the gatekeeper of quality; the whole development team cares about quality like never before. No longer are testers at the end of the process; testers are involved from the outset of the project!

During this webinar, Antony discusses:

Key Points
• What is it that makes a team ‘Agile’? – Practices such as Test Driven Development are a reflection of underlying values and goals. It’s the adoption of these values and goals that allows a team to gain the greatest benefit from adopting an Agile approach to software development.

• What are the common ‘Gotchas’ for testers on Agile teams? – For example, extraordinarily short iterations producing software with end-to-end features can catch out many testing teams. This is especially true if the test team is used to being segregated from the developers as a separate team and/or relies on large amounts of manually executed scripted tests.

• What role do testers play and how can you deliver the most value? – Your primary role is no longer just to inform the project of how the software doesn’t work, but to be a welcomed guide who helps, before the first line of code is written, to make sure that the software does work.

Register Here - http://qualtechconferences.arobis.com/content.asp?id=246

Monday, September 03, 2007

Best Practices... 2

... Continued from previous post.
 
 
  1. Plan real-life, resource-intensive tasks in advance and procure resources accordingly

Some applications may need to process large amounts of data to produce their final output. During functional testing of such applications, a tester often works with a small data set for quick results, which takes very little time. But users of the application will work with real data, which may take the application a long time to process. A QE should also run the application against realistic data volumes. Leaving such cases to the end is not a good idea: a failure in a long-duration run is difficult to isolate, and isolating such bugs is a time-consuming activity. At the end of the cycle such bugs create panic and may delay major milestones. Plan properly to ensure smooth execution of long-duration tests.

 

  2. Test smartly

As the project/product becomes more and more complex, testing becomes a bigger challenge. It is practically impossible to test each and every scenario, so there is a need to test smartly. Some of the smart testing approaches one can adopt are:

a) Catching them early

One important thing to remember is to catch bugs as early as possible in the life cycle. The cost of fixing a bug found early in the cycle is far lower than that of a bug logged late in the cycle.

b) 80-20 rule

This rule suggests that eighty percent of the functionality can be covered by twenty percent of the test cases. It is a very handy way of saving time and helps to cover the most important scenarios as fast as possible.

c) Automation

Automation helps to cover regression of existing features, reducing the overhead on the test team of re-testing old features. Automation saves the tester time and can perform repeated tasks over and over again.

 

  3. Sanity testing on every build

Sanity testing is very important, and after every check-in the sanity suite needs to be run. A change in one module can affect other modules, or even the entire app/product; a sanity run on every check-in/change ensures that the basic things are still working. The open question with sanity testing is how many tests, and which tests, to include in the sanity pack. The sanity pack should contain at least one test for every feature.
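A minimal sanity pack might look like the following Python sketch; the `login` and `search` functions are hypothetical stand-ins for two features of the real application, with one quick test per feature run on every build:

```python
# Hypothetical stand-ins for two features of the application under test.
def login(user: str, password: str) -> bool:
    return user == "admin" and password == "secret"

def search(query: str) -> list:
    return [query] if query else []

# The sanity pack: at least one basic test per feature.
def sanity_login() -> bool:
    return login("admin", "secret") is True

def sanity_search() -> bool:
    return search("printer") == ["printer"]

SANITY_PACK = [sanity_login, sanity_search]

def run_sanity_pack() -> bool:
    """The build is sane only if every feature's basic test passes."""
    return all(test() for test in SANITY_PACK)

print("sanity:", "PASS" if run_sanity_pack() else "FAIL")
```

Keeping the pack small and fast is the point: it must be cheap enough to run on every single check-in.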

  4. Feature sweep, compatibility sweep etc. for every feature

a) Feature sweep

A feature sweep covers the basic test cases of a feature's functionality. For every major milestone, these test cases should be executed to ensure that functionality is not broken.

b) Compatibility sweep

Many features require test cases across multiple hardware platforms, operating systems and so on. For an application that is supported on multiple operating systems (e.g. Windows, Mac OS), different hardware configurations (AMD, Intel, dual processor, hyper-threaded machines, different amounts of RAM, etc.) and peripherals (e.g. different printers for a printing application), such sweeps are very important.
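A compatibility sweep can be sketched as running the same basic checks over every supported combination of configurations. In this Python sketch the configuration matrix and the `basic_feature_check` function are hypothetical placeholders for the real sweep:

```python
# Sketch: sweep the same basic test over every supported combination.
# The matrix and the check are hypothetical stand-ins.

SUPPORTED_OS = ["Windows", "Mac OS"]
SUPPORTED_CPUS = ["AMD", "Intel"]

def basic_feature_check(os_name: str, cpu: str) -> bool:
    """Stand-in for a real basic functionality test run on
    one (operating system, CPU) configuration."""
    return os_name in SUPPORTED_OS and cpu in SUPPORTED_CPUS

# One result per configuration in the matrix.
results = {
    (os_name, cpu): basic_feature_check(os_name, cpu)
    for os_name in SUPPORTED_OS
    for cpu in SUPPORTED_CPUS
}

for config, passed in sorted(results.items()):
    print(config, "PASS" if passed else "FAIL")
```

In practice each configuration would be a separate machine or virtual machine; the value of the sweep is that the matrix is explicit, so no supported combination is silently skipped.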

 

 

  5. Proper documentation

Documentation is a very wide term and includes any kind of written communication that helps the project sail through. Documentation can be classified into four broad categories:

1) From the developer's point of view

• Feature spec

• Design document

• Code comments

2) From the testing point of view

• Test plan

• Test script

• Execution matrix

• Resource planning document

• QA plan, among others

3) Documentation meant for external users

• User guide

• Help files

• Readme documents

4) Project management and product management

• Requirement specs

• Project schedule

A good quality engineer should aim for the right amount of documentation: enough that it helps the project run smoothly and helps to catch bugs early in the cycle. The QE should also make sure that the developers have done their documentation properly and, if not, follow up with them to get it done.