Wednesday, August 31, 2005

Usability Testing

What is Usability Testing?

Usability testing is the process of working with end-users, directly and indirectly, to assess how users perceive a software package and how they interact with it. This process uncovers areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long users take to complete it, and counters to determine how often certain conditions occur (e.g. error messages, help messages). Often this involves only trivial modifications to existing software, but it can result in a tremendous return on investment.
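As a hedged sketch of how simple such instrumentation can be (the dialog and event names here are hypothetical, not taken from any particular product), a timer plus a few counters might look like this:

```python
import time
from collections import Counter

class DialogMetrics:
    """Minimal usage instrumentation: one timer per dialog visit,
    plus counters for events such as error or help messages."""

    def __init__(self, dialog_name):
        self.dialog_name = dialog_name
        self.events = Counter()
        self.durations = []          # seconds spent per visit
        self._opened_at = None

    def on_open(self):
        # Called when the dialog is shown.
        self._opened_at = time.monotonic()

    def on_close(self):
        # Called when the dialog is dismissed; records elapsed time.
        if self._opened_at is not None:
            self.durations.append(time.monotonic() - self._opened_at)
            self._opened_at = None

    def record(self, event):
        # e.g. record("error_message") or record("help_opened")
        self.events[event] += 1

    def summary(self):
        visits = len(self.durations)
        avg = sum(self.durations) / visits if visits else 0.0
        return {"dialog": self.dialog_name, "visits": visits,
                "avg_seconds": avg, "events": dict(self.events)}
```

Hooking `on_open`, `on_close`, and `record` into an existing dialog is usually a few lines each, which is the kind of trivial modification described above.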

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting changes so that in the future, similar situations can be handled with ease.

When to Begin

This process can and should begin as early in the development process as possible. It need not involve a large number of users; in fact, 3-10 is probably ideal, depending on the scale of the product. These users should ideally be intended users of the product (e.g. alpha/beta testers) and should represent a decent cross-section of the users targeted by the application.

The real question should be "when to end". I believe that this is an incremental process with many beginnings and endings. Ultimately, the job is not done until the product has reached the end of its lifecycle (not just its development cycle). Each incremental step should be relatively short, with changes and supporting documentation made often (up until initial delivery of the product). Once a product is initially delivered, it can be difficult to make these kinds of changes without affecting active users. Once the product is shipped, changes should be considered more carefully, with special concern for how active users will be affected and how future users will benefit.

However, it is never too late to start. Even if you are nearing the end of the development cycle, usability testing can still yield enormous results. Sometimes even minor changes to the UI, help system, reports, etc can make the product more appealing to users.

How to Begin

Usability testing can be quite simple. There are 4 basic ingredients essential to success. These are:

  1. The usability testing person/team needs to include a software developer who is open-minded about changes and not offended by criticism. The goal of usability testing is not to criticize, but to improve and learn. If any member of the team is not ready to receive criticism with an open mind, the testing will almost certainly fail. This person needs to have a good working knowledge of the workflow the application is designed to facilitate and needs to have good communication skills. Good note-taking skills are also essential.

  2. The users selected as test subjects need to be average users (not all power users, and not all entry-level users.) They should understand what the application is designed to do and should be able to communicate their needs reasonably well. Often, these users will not be able to communicate their needs in technical terms, but will be able to identify problems they are having.

  3. The development person/team needs to be prepared to make changes on an incremental basis in a relatively fast-paced environment. The best situation is to make needed changes quickly so that those changes can be incorporated into the continuing testing process.

  4. Patience. The usability testing and refinement process can take some time and will sometimes go in the wrong direction. By keeping changes small and incremental, it is usually easy to back-track and rework the problem without significant setbacks.

There are 3 methods of feedback that need to be incorporated into our testing. These are:

  1. Direct user feedback. This type of feedback usually occurs by having the test users use the software alone and report back to the usability team/person. Ideally, reporting back should occur on a regular basis (daily or weekly at most).

  2. Observed behavior. This type of feedback can occur in conjunction with direct user feedback. This occurs when the testing team/person observes how users use the software.

  3. Computer supported feedback. This type of feedback can occur on an ongoing basis throughout the testing process. As mentioned this is usually quite simple involving timers and hit counters.

Each of these feedback methods should be used to achieve the ultimate goal.

How to best Leverage Feedback Methods

Direct Feedback

  1. Provide users with notebooks, pre-printed forms, a simple program, web page, etc to record their problems as they are having them. They tend to forget the details if this is not done.

  2. Regularly review the issues users report.

  3. Meet with users on a (somewhat) scheduled basis to discuss their issues, and make sure that you fully understand them before proceeding. Be prepared for this meeting by reviewing their issues beforehand.

  4. Keep a good communication dialog open with all users.

  5. Prioritize direct-feedback issues highly. Users need to see results relatively quickly or they get discouraged with the process.

Observed Behavior

  1. Use multiple opportunities to observe behavior. Whether you are training users, discussing problems found earlier, or just walking past their desks, these are opportunities to observe their behavior. (I doubt many of us have one-way glass to watch users, so I won't go into that here.)

  2. Take notes (use the same forms, software, web pages, etc. as the users, if possible). This will ensure you do not forget what you observed.

  3. Compare the notes you are taking against the notes users are taking. If users are not reporting nearly as many problems as you are finding, it is possible that they are not comfortable with the process yet.

  4. Keep a good communication dialog open with all users.

  5. Don't interfere with their normal process. The goal is not to train them on the "right way", but rather to have the software work "their way".

  6. Be prepared to misinterpret user behavior. Sometimes you might observe a user having an apparent problem when, in fact, you are misunderstanding what they are doing.

  7. Prioritize observed behavior after direct-feedback. Users are not expecting these changes and they can cause confusion. Carefully review your observations and discuss them with your team and the users.

Computer Supported Feedback

  1. It is hard to know where to use computer-supported feedback early in the development and testing cycles. Therefore, start simple and grow from there.

  2. Be careful that the supporting code does not interfere with the users' workflow and, above all, does not crash the software. (Been there, done that.)

  3. If possible, log all computer-feedback issues into a simple database (Access worked well for me.)

  4. When reviewing the log, be very careful to not overlook issues and not misinterpret the data. (Your method need not be statistically valid, just reasonable.)

  5. When you see an issue in the data, try and support it through direct feedback and observed behavior methods. This data can be very helpful in knowing what to look for when working with users.

  6. Consider carefully whether to leave this capability in the final product. Only do so if the logging can be disabled safely and has been thoroughly tested against all issues (especially runaway growth in log size).
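To make point 3 concrete, here is a minimal sketch using SQLite as a stand-in for the Access database mentioned above (the table and column names are my own invention, not from the original setup):

```python
import sqlite3
from datetime import datetime, timezone

def open_feedback_log(path=":memory:"):
    """Create (or open) a one-table log for computer-supported feedback."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS feedback (
                      logged_at TEXT,
                      dialog    TEXT,
                      event     TEXT,
                      detail    TEXT)""")
    return db

def log_event(db, dialog, event, detail=""):
    # One row per observed condition (error message, help request, etc.)
    db.execute("INSERT INTO feedback VALUES (?, ?, ?, ?)",
               (datetime.now(timezone.utc).isoformat(), dialog, event, detail))
    db.commit()

def event_counts(db):
    """Review helper: how often did each (dialog, event) pair occur?"""
    return list(db.execute("""SELECT dialog, event, COUNT(*)
                              FROM feedback
                              GROUP BY dialog, event
                              ORDER BY COUNT(*) DESC"""))
```

A review session then reduces to scanning `event_counts()` for dialogs with unusually high error or help counts, and following up via direct feedback and observation as described in point 5.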

Monday, August 29, 2005

Black Box Testing

Black box testing or functional testing is used to check that the outputs of a program, given certain inputs, conform to the functional specification of the program.

The term black box indicates that the internal implementation of the program being executed is not examined by the tester. For this reason black box testing is not normally carried out by the programmer.

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary.

For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward their own work. For this testing, test groups are often used: "Test groups are sometimes called professional idiots...people who are good at designing incorrect data."

The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product. Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written.
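As an illustration of testing purely against a specification, consider a hypothetical spec invented for this example: shipping is free for orders of $50.00 or more, otherwise a flat $5.00 fee applies. The tests know only the spec's inputs and expected outputs, never the internals (a trivial stand-in implementation is included only so the example runs):

```python
import unittest

# Hypothetical stand-in for the system under test; in real black box
# testing this would be the delivered program, not code the tester reads.
def shipping_cost(order_total):
    return 0.0 if order_total >= 50.0 else 5.0

class ShippingSpecTest(unittest.TestCase):
    """Every expectation below comes from the spec, not the code."""

    def test_below_threshold_pays_flat_fee(self):
        self.assertEqual(shipping_cost(10.0), 5.0)

    def test_at_threshold_is_free(self):
        # Boundary value taken straight from the specification.
        self.assertEqual(shipping_cost(50.0), 0.0)

    def test_above_threshold_is_free(self):
        self.assertEqual(shipping_cost(120.0), 0.0)
```

Because nothing here depends on the implementation, these cases could have been designed the day the spec was written, which is exactly the planning advantage noted above.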

Advantages of Black Box Testing
There are many advantages of Black Box Testing...

  1. More effective on larger units of code than glass box testing

  2. The tester needs no knowledge of implementation, including specific programming languages

  3. The tester and programmer are independent of each other

  4. Tests are done from a user's point of view

  5. Will help to expose any ambiguities or inconsistencies in the specifications

  6. Test cases can be designed as soon as the specifications are complete

  7. The test is unbiased because the designer and the tester are independent of each other.

  8. The tester does not need knowledge of any specific programming languages.

  9. The test is done from the point of view of the user, not the designer.

  10. Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing

  1. Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever

  2. Without clear and concise specifications, test cases are hard to design

  3. There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried

  4. May leave many program paths untested

  5. Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)

  6. Most testing related research has been directed toward glass box testing

  7. The test can be redundant if the software designer has already run a test case.

  8. The test cases are difficult to design.

  9. Testing every possible input stream is unrealistic because it would take a inordinate amount of time; therefore, many program paths will go untested.

Testing Strategies/Techniques

  1. Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester) to eliminate any guesswork by the tester as to the methods of the function

  2. Data outside of the specified input range should be tested to check the robustness of the program

  3. Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output

  4. The number zero should be tested when numerical data is to be input

  5. Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems

  6. Crash testing should be performed to see what it takes to bring the system down

  7. Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests to avoid repetition and to aid in the software maintenance

  8. Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.

  9. Finite state machine models can be used as a guide to design functional tests

  10. According to Beizer, the following is a general order by which tests should be designed:

     a. Clean tests against requirements.

     b. Additional structural tests for branch coverage, as needed.

     c. Additional tests for data-flow coverage as needed.

     d. Domain tests not covered by the above.

     e. Special techniques as appropriate--syntax, loop, state, etc.

     f. Any dirty tests not covered by the above.

Glass box testing requires intimate knowledge of program internals, while black box testing is based solely on knowledge of the system requirements. Since SE literature is primarily concerned with program internals, the bulk of the effort to develop a testing methodology has been devoted to glass box tests. However, as the importance of black box testing has gained general acknowledgement, a number of useful black box testing techniques have also been developed.

It is important to understand that these methods are used during the test design phase; their influence is hard to see in the tests once they're implemented.
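Several of the strategies above (random inputs over a tester-specified range, out-of-range data, boundary cases, and zero) can be sketched against a hypothetical function invented for illustration, one that clamps a percentage to the range [0, 100]:

```python
import random

# Hypothetical function under test: clamp a percentage to [0, 100].
def clamp_percent(value):
    return max(0, min(100, value))

def test_boundaries_and_zero():
    # Strategies 3 and 4: boundary cases, and zero as a numeric input.
    assert clamp_percent(0) == 0
    assert clamp_percent(100) == 100

def test_out_of_range_inputs():
    # Strategy 2: data outside the specified input range,
    # to check the robustness of the program.
    assert clamp_percent(-1) == 0
    assert clamp_percent(101) == 100

def test_random_inputs_stay_in_range():
    # Strategy 1: randomly generated inputs; the tester specifies
    # only the range, not the individual values.
    for _ in range(1000):
        assert 0 <= clamp_percent(random.randint(-10_000, 10_000)) <= 100
```

Each test maps back to one of the numbered strategies, which makes it easy for a test monitoring tool (strategy 7) to report which strategies have been exercised.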


Advantages of Glass Box Testing

  1. Forces the test developer to reason carefully about the implementation

  2. Approximates the partitioning done by execution equivalence

  3. Reveals errors in "hidden" code, including:

     • Beneficent side-effects

     • Optimizations (e.g. charTable that changes reps when size > 100)


Disadvantages of Glass Box Testing

  1. Expensive

  2. Misses cases omitted from the code

Wednesday, August 24, 2005

Unit, Component and Integration Testing


The following definitions are from a posting by Boris Beizer on
the topic of "integration testing" in the comp.software.testing (c.s.t.) newsgroup.

The definitions of integration tests are after Leung and White.
Note that the definitions of unit, component, integration, and
integration testing are recursive:

Unit - The smallest compilable component. A unit typically is the
work of one programmer (At least in principle). As defined, it does
not include any called sub-components (for procedural languages) or
communicating components in general.

Unit Testing: in unit testing called components (or communicating
components) are replaced with stubs, simulators, or trusted
components. Calling components are replaced with drivers or trusted
super-components. The unit is tested in isolation.
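The definition above can be sketched in code (all names here are illustrative, not from Beizer's post): the unit `report_total` calls a component that would normally talk to a database; a stub replaces the callee and a driver replaces the caller, so the unit runs in isolation.

```python
# The unit under test: it calls a component (fetch_prices) that, in the
# real system, would talk to a database. The callee is passed in so a
# stub can replace it during unit testing.
def report_total(order_id, fetch_prices):
    """Sums the prices returned by its called component."""
    return sum(fetch_prices(order_id))

def stub_fetch_prices(order_id):
    # Stub: stands in for the real called component, returning
    # trusted canned data instead of hitting a database.
    return [1.50, 2.25]

def driver():
    # Driver: stands in for the real calling component and
    # exercises the unit in isolation.
    return report_total(42, stub_fetch_prices)
```

In component testing, per the next definition, `stub_fetch_prices` would be swapped out for the real price-fetching component while the same tests are re-run.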

Component: a unit is a component. The integration of one or more
components is a component.

Note: The reason for "one or more" as contrasted to "Two or
more" is to allow for components that call themselves

Component testing: the same as unit testing except that all stubs
and simulators are replaced with the real thing.

Two components (actually one or more) are said to be integrated when:
a. They have been compiled, linked, and loaded together.
b. They have successfully passed the integration tests at the
interface between them.

Thus, components A and B are integrated to create a new, larger,
component (A,B). Note that this does not conflict with the idea of
incremental integration -- it just means that A is a big component
and B, the component added, is a small one.

Integration testing: carrying out integration tests.

Integration tests (After Leung and White) for procedural languages.
This is easily generalized for OO languages by using the equivalent
constructs for message passing. In the following, the word "call"
is to be understood in the most general sense of a data flow and is
not restricted to just formal subroutine calls and returns -- for
example, passage of data through global data structures and/or the
use of pointers.

Let A and B be two components in which A calls B.
Let Ta be the component-level tests of A.
Let Tb be the component-level tests of B.
Let Tab be the tests in A's suite that cause A to call B.
Let Tbsa be the tests in B's suite for which it is possible to sensitize A
-- the inputs are to A, not B.

Tbsa + Tab == the integration test suite (+ = union).

Note: Sensitize is a technical term. It means inputs that will
cause a routine to go down a specified path. The inputs are to
A. Not every input to A will cause A to traverse a path in
which B is called. Tbsa is the set of tests which do cause A to
follow a path in which B is called. The outcome of the test of
B may or may not be affected.
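A minimal sketch of the Tab idea (component names invented for illustration): A calls B only on some paths, so only some inputs to A sensitize the A-B interface.

```python
# Record of calls across the A-B interface, so a test can tell
# whether a given input to A actually caused A to call B.
calls_to_B = []

def B(x):
    calls_to_B.append(x)
    return x * 2

def A(x):
    # A calls B only on the path where x is positive.
    return B(x) if x > 0 else 0

def sensitizes_call_to_B(test_input):
    """Does this input to A drive A down a path that calls B?
    Inputs that do belong in the integration test suite (Tab)."""
    calls_to_B.clear()
    A(test_input)
    return bool(calls_to_B)
```

Here an input of 5 sensitizes the call to B, while 0 and -3 do not; only the former kind of test exercises the interface between the two components.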

There have been variations on these definitions, but the key point is
that it is pretty darn formal and there's a goodly hunk of testing
theory, especially as concerns integration testing, OO testing, and
regression testing, based on them.

As to the difference between integration testing and system testing:
System testing specifically goes after behaviors and bugs that are
properties of the entire system as distinct from properties
attributable to components (unless, of course, the component in
question is the entire system). Examples of system testing issues:
resource loss bugs, throughput bugs, performance, security, recovery,
transaction synchronization bugs (often misnamed "timing bugs").

Testing Glossary

A simple glossary of Software Testing

Ad Hoc Testing
Goal-oriented passes through the product, sometimes to prove or disprove a notion of how the product will behave.

Alpha Test
The part of the Test Phase of the PLC where code is complete and the product has achieved a degree of stability. The product is fully testable (determined by QA). All functionality has been implemented and QA has finished the implementation of the test plans/cases. Ideally, this is when development feels the product is ready to be shipped.

Automated Testing
Creation of individual tests designed to run without direct tester intervention.

Beta Test
The part of the Test Phase of the PLC where integration testing plans are finished and depth testing coverage goals are met; ideally, QA says the product is ready to ship. The product is stable enough for external testing (determined by QA).

Black Box Test
Tests in which the software under test is treated as a black box. You can't "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Boundary Testing
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Breadth Testing
Matrix tests which generally cover all product components and functions on an individual basis. These are usually the first automated tests available after the functional specifications have been completed and test plans have been drafted.

Breath Testing
Generally a good thing to do after eating garlic and before going out into public. Or you may have to take a breath test if you're DUI.

Bug
A phenomenon with an understanding of why it happened.

Code Complete
Phase of the PLC where functionality is coded in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Freeze
When development has finished all new functional code. This is when development is in a "bug fixing" stage.

Coding Phase
Phase of the PLC where development is coding product to meet Functional/Architectural Specifications. QA develops test tools and test cases during this phase.

Compatibility Test
Tests that check for compatibility of other software or hardware with the software being tested.

Concept Phase
Phase of the PLC where an idea for a new product is developed and a preliminary definition of the product is established. Research plans should be put in place and an initial analysis of the competition should be completed. The main goal of this phase is to determine product viability and obtain funding for further research.

Coverage analysis
Shows which functions (i.e., GUI and C code level) have been touched and which have not.

Data Validation
Verification of data to assure that it is still correct.

Debug
To search for and eliminate malfunctioning elements or errors in the software.

Definition Phase
See Design Phase.

Dependency
This is when a component of a product is dependent on an outside group. The delivery of the product or the reaching of a certain milestone is affected.

Depth Testing
Encompasses integration testing, real-world testing, combinatorial testing, interoperability testing, and compatibility testing.

Design Phase
Phase of the PLC where the functions of the product are written down. Features and requirements are defined in this phase. Each department develops its plan and resource requirements for the product during this phase.

Dot Release
A major update to a product.

Feature
A bug that no one wants to admit to.

Focus
The center of interest or activity. In software, focus refers to the area of the screen where the insertion point is active.

Functional/Architectural Specification Phase
Phase of the PLC defining modules, their implementation requirements and approach, and exposed API. Each function is specified here. This includes the expected results of each function.

GM
See Green Master.

Green Master (GM)
Phase of the PLC where the certification stage begins. All bugs, regressed against the product, must pass. Every build is a release candidate (determined by development).

GUI
Graphical User Interface.

Integration Testing
Depth testing which covers groups of functions at the subsystem level.

Interoperability Test
Tests that verify operability between software and hardware.

Load Test
Load tests study the behavior of the program when it is working at its limits. Types of load tests are Volume tests, Stress tests, and Storage tests.

Localization
This term refers to making software specifically designed for a specific locality.

Maintenance Release
See Inline.

Metric
A standard of measurement. Software metrics are statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

Milestone
Events in the Product Life Cycle which define particular goals.

Performance Test
Test that measures how long it takes to do a function.

Problem
A flaw without an understanding.

PLC
Product Life Cycle - see Software Product Life Cycle.

Pre-Alpha
Pre-build 1; product definition phase. (Functional Specification may still be in the process of being created.)

Product Life Cycle
The stages a product goes through from conception to completion. The phases of product development include: Definition Phase, Functional/Architectural Specification Phase, Coding Phase, Code Complete Phase, Alpha, Beta, Zero Bug Build Phase, Green Master Phase, STM, and Maintenance/Inline Phase.

Proposal Phase
Phase of the PLC where the product must be defined with a prioritized feature list and system and compatibility requirements.

QA Plan
A general test plan given at the macro level which defines the activities of the test team through the stages of the Product Life Cycle.

Real World Testing
Integration testing which attempts to create environments that mirror how the product will be used in the "real world".

Regression Testing
Retesting bugs in the system which had been identified as fixed, usually starting from Alpha on.

Resource
People, software, hardware, tools, etc. that have unique qualities and talents that can be utilized for a purpose.

Risk
Something that could potentially contribute to failing to reach a milestone.

STM
See Ship to Manufacturing.

Storage Tests
Tests how memory and space are used by the program, either in resident memory or on disk.

Stress Test
Tests the program's response to peak activity conditions.

Syncopated Test
A test that works in harmony with other tests. The timing is such that both tests work together, but yet independently.

Test Case
A breakdown of each functional area into an individual test. These can be automated or done manually.

Test Phase
Phase of the PLC where the entire product is tested, both internally and externally. Alpha and Beta Tests occur during this phase.

Test Plan
A specific plan that breaks down testing approaches on a functional-area basis.

Test Suite
A set of test cases.

Usability
The degree to which the intended target users can accomplish their intended goals.

Volume Tests
Test the largest tasks a program can deal with.

White Box Test
Used to test areas that cannot be reached from a black box level. (Sometimes called Glass Box testing.)

Zero Bug Build
Phase of the PLC where the product has stabilized in terms of bugs found and fixed. Development is fixing bugs as fast as they are found, with the net result of zero open bugs on a daily basis. This is usually determined after a few builds have passed. This is the preliminary stage before Green Master.

Thursday, August 18, 2005

Tasks of a Test Engineer

The Test Engineer should participate on teams that design and develop the products. They should develop test protocols and test reports for the verification testing of the product to ensure that it meets product requirements and specifications. Tasks and responsibilities include:

1. Prepare verification and validation test plans, test procedures and test reports.

2. Participate in the development of product requirements and specifications.

3. Perform verification and validation testing of hardware and software.

4. Participate in customer meetings on project status and design reviews.

5. Facilitate risk analysis activities.

6. Contribute to the overall improvement in procedures.

7. Ensure that testing is performed.

8. Ensure that testing is documented.

9. Ensure that testing methodology, techniques, and standards are established and developed.

Testing personnel are encouraged to be creative & innovative, while at the same time are expected to learn and follow our structured processes.

The basic skills for a qualified test engineer:

  • Controlled - Organized individual, systematic planning - Good planning of testing
  • Competent - Technical awareness of testing methods, tools, and criteria
  • Critical - Inner determination to discover problems
  • Comprehensive - Total understanding of the given system and specifications - Pay good attention to the details
  • Considerate - Ability to relate to others and resolve conflicts

The basic tasks of a test engineer include:

  1. Prepare testing plans and organize testing activities
  2. Design test cases and document them using well-defined test methods
  3. Develop test procedures and test data
  4. Write problem reports and test execution records
  5. Use testing tools and aids
  6. Review test designs, results and problems
  7. Test maintenance changes
  8. Oversee acceptance tests.

Friday, August 12, 2005

Introduction to Test Case Writing

Test case writing may refer to the full process of case development, from the decision to use a case through the release of the case for use in class. The entire sequence of steps in the process is set forth in Figure 1. However, the suggested activities for case writing that follow have been established to assist instructors or case writers in organizing and presenting information in the case format. The focus is on the writing process.

The Test Case Writing Process

Step 1: Case Origin

Identify the needs

Step 2: Establishing the needs

The search for specific issue ideas and for individuals or organizations that might supply the case information

Step 3: Initial Contact

The establishment of access to material on the case subject

Step 4: Data Collection

The gathering of the relevant information for the case

Step 5: The Writing Process

The organization and the presentation of the data and information

Step 6: Release

The obtaining of permission from the appropriate individuals to use the case for educational purposes.

Adapted from Leenders & Erskine (1989)

  1. A case should appear authentic and realistic. The case must develop the situation in real life terms. Reality must be brought into the case. Use as much factual information as possible. In the case, quotes, exhibits and pictures can be included to add realism and life to the case. The problem scenario in the case should be relevant to the real world so that students can experience and share the snapshot of reality.

  2. Use an efficient and basic case structure in writing. First, open up the case with the broadest questions, and then face the specific situation. Close with a full development of the specific issues. The presentation of a case should be primarily in a narrative style, which is a story-telling format that gives details about actions and persons involved in a problem situation.

  3. There must be a fit of the case with students’ educational needs, and the needs in practice. The topics and content of the case should be appropriate and important to the particular students in which the case is used. Moreover, case ideas should be relevant to the learning objectives

  4. A case should not propound theories, but rather pose complex, controversial issues. There are no simple or clearly bounded issues. The controversy of a case can entail debate or contest. It creates learning at many levels – not only substantive learning, but also learning with respect to communication and persuading others. The relationship between the issues and the theories should be dealt with through discussion or instruction.

  5. There should be sufficient background information to allow students to tackle the issue(s). Include not only the events that happened, but also how the people involved perceive them. There should be enough description in the prose of the case itself for students to be able to situate the case problem, understand the various issues that bear on the problem, and identify themselves with the decision-maker’s position. Also, good cases need descriptions of the people involved, since understanding an individual’s predisposition, position, and values is an important part of decision making.

  6. Write the case in a well-organized structure and in clear language. A case should be easy to read or access. Make sure that you prepare an outline of the case and use it to organize your materials. Also ensure the clarity and refinement of your presentation of the case.


Use cases are a popular way to express software requirements. They are popular because they are practical. A use case bridges the gap between user needs and system functionality by directly stating the user intention and system response for each step in a particular interaction.

Step One: Identify Classes of Users

The first step in writing use cases is understanding the users, their goals, and their key needs. Not all users are alike. Some users will expect to walk up to the system and accomplish one goal as quickly as possible.

Step Two: Outline the Use Case Suite

The second step in our breadth-first approach to writing use cases is to outline the use case suite. A use case suite is an organized table of contents for your use cases: it simply lists the names of all use cases that you intend to write. The suite can be organized several different ways. For example, you can list all the classes of users, and then list use cases under each.

Step Three: List Use Case Names

If you did step two, this step will be much easier to do well. Having an organized use case suite makes it easier to name use cases because the task is broken down into much smaller subtasks, each of which is more specific and concrete.

Step Four: Write Some Use Case Descriptions

In step three, you may have generated ten to fifty use case names on your first pass. That number will grow as you continue to formalize the software requirements specification. That level of completeness in the specification is very desirable because it gives more guidance in design and implementation planning, it can lead to more realistic schedules and release scoping decisions, and it can reduce requirements changes later.

Step Five: Write Steps for Selected Use Cases

Select the use cases to elaborate first based on whether they:

1) Enable users to achieve the key benefits claimed for your product

2) Determine a user's first impression of the product

3) Challenge the user's knowledge or abilities

4) Affect workflows that involve multiple users

5) Explain the usage of novel or difficult-to-use features

Each use case step has two parts: a user intention and system response:

  1. User Intention

    The user intention is a phrase describing what the user intends to do in that step. Typical steps involve accessing information, providing input, or initiating commands. Usually the user intent clearly implies a UI action. For example, if I intend to save a file, then I could probably press Control-S. However, "press Control-S" is not written in use cases. In general, you should try not to mention specific UI details: they are too low-level and may change later.

  2. System Response

    The system response is a phrase describing the user-visible part of the system's reaction to the user's action. As above, it is best not to mention specific details that may change later. For example, the system's response to the user saving a file might be "Report filename that was saved". The system response should not describe an internal action. For example, it may be true that the system will "Update database record", but unless that is something that the user can immediately see, it is not relevant to the use case.
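Putting the two parts together, the file-saving example above might be written as intention/response pairs like these. The exact wording is invented for illustration:

```python
# Illustrative only: the "save a file" example expressed as
# (user intention, system response) pairs. Note that neither side
# mentions UI specifics such as "press Control-S", and an internal
# action like "Update database record" would be left out entirely.
save_file_steps = [
    ("Initiate saving the file", "Prompt for a filename"),
    ("Provide a filename",       "Report filename that was saved"),
]

for intention, response in save_file_steps:
    print(f"User: {intention} / System: {response}")
```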

Step Six: Evaluate Use Cases

An important goal of any requirements specification is to support validation of the requirements. There are two main ways to evaluate use cases:

  1. Potential customers and users can read the use cases and provide feedback.

  2. Software designers can review the use cases to find potential problems long before the system is implemented.

You can perform a more careful evaluation of your use cases and UI mockups with cognitive walk-throughs. In the cognitive walk-through method, you ask yourself these questions for each step:

  • Will the user realize that he/she should have the intention listed in this step?

  • Will the user notice the relevant UI affordance?

  • Will the user associate the intention with the affordance?

  • Does the system response clearly indicate that progress is being made toward the use case goal?

Use Case Writing Steps
1: Identify Classes of Users

2: Outline the Use Case Suite

3: List Use Case Names

4: Write Some Use Case Descriptions

5: Write Steps for Selected Use Cases

6: Evaluate Use Cases

Feature Spec Writing Steps

1: Identify Functional Areas

2: Outline the Feature Set

3: List Feature Names

4: Write Some Feature Descriptions

5: Write Specs for Selected Features

6: Evaluate Features

Fundamentals of Software Testing - In Brief

Some of the fundamentals of software testing include

- Software Test Requirements

- Software Test Design

- Software Test Planning

Let us deal with each of these one by one.

Software Test Requirements

Identifying and defining software requirements in general is a difficult job. Requirements Management is seen as the key to success in software development.

A formal approach to specifying test requirements goes a long way toward satisfying the two major complaints cited in the survey above. It formalizes the translation of software requirements to test requirements and makes the process repeatable (Capability Maturity Model level 2).

Even with the availability of the ANSI/IEEE standards, Gerrard holds that the majority of requirements documents are "often badly organized, incomplete, inaccurate and inconsistent." He also believes that the majority of documented requirements are "untestable" as written because they are presented in a form that is difficult for "testers" to understand and test against their expectations. It is up to the test engineers themselves to translate those requirements into testable ones.

There are no standards documents to guide the specification of requirements for software testing (and/or to guide the translation of existing software requirements specifications). Refining existing software requirements into software testing requirements is a very difficult task. The information a test engineer must have in order to properly test a software component is highly detailed and very specific. Granted, there are different levels and approaches to testing and test data specification (Black Box, Gray Box, and White Box views of the software) and the nature of the test requirements depends on the point of view of the test engineer. The stance of the test engineer is also dependent on the level of depth and details the software requirements specification contains. Thus, it is possible to specify test requirements from both Black Box and White Box perspectives.

Before starting test design, we must identify our test objectives, focuses, and test items. The major purpose is to help us understand what are the targets of software testing.

This step can be done based on:

  1. Requirements specifications

  2. Inputs from developers

  3. Feedback from customers

The benefits are:

  1. Identify and rank the major focus of software testing

  2. Check the requirements to see if they are correct, complete, and testable

  3. Enhance and update system requirements to make sure they are testable

  4. Support the decision on selecting or defining a test strategy

Software Test Design

Software test design is an important task for software test engineers. A good test engineer always knows how to produce quality test cases and perform effective tests that uncover as many bugs as possible on a very tight schedule.

What do you need to produce an effective test set?

  • Choose a good test model and an effective testing method

  • Apply well-defined test criteria

  • Generate a cost-effective test set based on the selected test criteria

  • Write a good test case specification document

What is a good test case?

It must have a high probability of discovering a software error:

  • It is designed to aim at a specific test requirement

  • It is generated by following an effective test method

  • It must be well documented and easily tracked

  • It is easy to perform, and its expected results are simple to check

  • It avoids redundancy with other test cases

Software Test Planning
This includes the following activities.

  1. Testing activities and schedule

  2. Testing tasks and assignments

  3. Selected test strategy and test models

  4. Test methods and criteria

  5. Required test tools and environment

  6. Problem tracking and reporting

  7. Test cost estimation

    Software Errors

    Definition of Software Error - What is a software error?

    One common definition of a software error is a mismatch between the program and its specification.

    Definition #1:

    “A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.”

    Definition #2:
    “A software error is present when the program does not do what its end user reasonably expects it to do.” (Myers, 1976)

    Categories of Software Errors

    1. User interface errors, such as output errors and incorrect user messages
    2. Function errors
    3. Defective hardware
    4. Incorrect program version
    5. Testing errors
    6. Requirements errors
    7. Design errors
    8. Documentation errors
    9. Architecture errors
    10. Module interface errors
    11. Performance errors
    12. Error-handling errors
    13. Boundary-related errors
    14. Logic errors, such as:
      • Calculation errors
      • State-based behavior errors
    15. Communication errors
    16. Program structure errors, such as control-flow errors

    Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they’re done. While this approach may work all right for small, personal programs, it doesn’t cut the mustard for professional software development. Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce completed software systems that have a high probability of satisfying the customer’s needs.

    There are two ways to deliver software free of errors. The first is to prevent the introduction of errors in the first place. And the second is to identify the bugs lurking in your code, seek them out, and destroy them. Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you’re building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.

    Wednesday, August 10, 2005

    The Basics of Automated Testing - 2

    ... continued from The Basics of Automated Testing - 1

    Automated testing can be broken into two big pieces:
    · Running the automated tests
    · Validating the results.

    Running the Automated Tests

    This concept is pretty basic: if you want to test the submit button on a login page, you can override the system and programmatically move the mouse to a set of screen coordinates, then send a click event. There is another, much trickier way to do this: you can directly call the internal API that the button click event handler calls. Calling into the API is good because it's easy. Calling an API function from your test code is a piece of cake: just add in a function call. But then you aren't actually testing the UI of your application. Sure, you can call the API for functionality testing, then every now and then click the button manually to be sure the right dialog opens.

    Rationally this really should work great, but a lot of testing exists outside the rational space. There might be lots of bugs that happen when the user goes through the button instead of directly calling the API. And here's the critical part – almost all of your users will use your software through the UI, not the API. So those bugs you miss by just going through the API will be high exposure bugs. These won't happen all the time, but they're the kind of things you really don't want to miss, especially if you were counting on your automation to be testing that part of the program.

    If your automation goes through the API, you're getting no test coverage on your UI, and you'll have to test that by hand.

    Simulating the mouse is good because it's working the UI the whole time, but it has its own set of problems. The real issue here is reliability. You have to know the coordinates that you're trying to click beforehand. This is doable, but lots of things can make those coordinates change at runtime. Is the window maximized? What's the screen resolution? Is the start menu on the bottom or the left side of the screen? Did the last guy rearrange the toolbars? And what if the application is used by Arabic-speaking users, where the display runs from right to left? These are all things that will change the absolute location of your UI.

    The good news is there are tricks around a lot of these issues. The first key is to always run at the same screen resolution on all your automated test systems (note: there are bugs we could be missing here, but we won't worry about that now - those are beyond the scope of our automation anyway.) We also like to have our first automated test action be maximizing the program. This takes care of most of the big issues, but small things can still come up.

    The really sophisticated way to handle this is to use relative positioning. If your developers are nice, they can build in some test hooks for you so you can ask the application where it is. This even works for child windows: you can ask a toolbar where it is. If you know that the 'file -> new' button is always at (x, y) inside the main toolbar, it doesn't matter if the application is maximized or if the last user moved all the toolbars around. Just ask the main toolbar where it is, tack on (x, y), and then click there.
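A rough sketch of that relative-positioning trick follows. The `get_toolbar_position` and `send_click` hooks below are stand-ins for whatever test hooks your developers expose; they are not a real library API:

```python
# Sketch only: click a button at a known offset inside a toolbar,
# wherever the toolbar happens to be on screen right now.
def click_at_toolbar_offset(get_toolbar_position, send_click, offset_x, offset_y):
    left, top = get_toolbar_position()            # ask the app where the toolbar is
    send_click(left + offset_x, top + offset_y)   # tack on the button's offset

# Demonstration with fake hooks, just to show the arithmetic:
clicks = []
click_at_toolbar_offset(
    get_toolbar_position=lambda: (100, 40),       # pretend the toolbar sits at (100, 40)
    send_click=lambda x, y: clicks.append((x, y)),
    offset_x=12,                                  # pretend 'file -> new' is at (12, 8)
    offset_y=8,
)
print(clicks)  # the click lands at (112, 48), and follows the toolbar if it moves
```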

    So this has an advantage over just exercising the APIs since you're using the UI too, but it has a disadvantage too – it involves a lot of work.

    Results Verification

    So we have figured out the right way to run the tests, and we have this great test case, but after we have told the program to do stuff, we need a way to know whether it did the right thing. This is the verification step in our automation, and every automated script needs it.

    We have many options
    · Verify the results manually
    · Verify the results programmatically
    · Use some kind of visual comparison tool.

    The first method is to do it ourselves: manually verify the results and see that they meet our expectations.

    The second way is, of course, to verify it programmatically. In this method, we have a predefined set of expected results (a baseline), which can be compared with the obtained results. The output of this comparison is whether a test case passed or failed. There are many ways to achieve this: we can hard-code the expected results in the program/script, or we can store the expected results in a file - a text file, a properties file, or an XML file - and read the expected results from this file to compare with the obtained result.
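As a minimal sketch of the file-based approach, assuming a plain text baseline (real harnesses add diff reporting, tolerances, and structured formats):

```python
# Sketch only: verify an obtained result against a baseline stored in a
# plain text file, reporting pass/fail.
import os
import tempfile

def verify_against_baseline(actual: str, baseline_path: str) -> bool:
    with open(baseline_path) as f:
        expected = f.read()
    return actual.strip() == expected.strip()

# Demonstration with a throwaway baseline file:
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("Login successful\n")               # the expected result, recorded earlier

passed = verify_against_baseline("Login successful", path)
print("PASS" if passed else "FAIL")             # prints PASS
os.remove(path)
```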

    The other way is to just grab a bitmap of the screen and save it off somewhere. Then we can use a visual comparison tool to compare it with the expected bitmap files. Using a visual comparison tool is clearly the best, but also the hardest. Here your automated test gets to a place where it wants to check the state, looks at the screen, and compares that image to a master it has stored away somewhere.
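At its core, a visual comparison is a pixel-by-pixel diff against the stored master. The sketch below shows only that idea, using bare pixel lists; real comparison tools add masking of volatile regions, fuzzy matching, and diff images for human review:

```python
# Sketch only: diff two bitmaps pixel by pixel, with an optional
# tolerance for small differences.
def bitmaps_match(actual, expected, max_diff_pixels=0):
    if len(actual) != len(expected):
        return False                             # different sizes never match
    diffs = sum(1 for a, e in zip(actual, expected) if a != e)
    return diffs <= max_diff_pixels

master = [(255, 255, 255)] * 4                   # stored "known good" screenshot
capture = list(master)
capture[2] = (255, 0, 0)                         # one pixel changed at runtime

print(bitmaps_match(capture, master))                      # False: exact match required
print(bitmaps_match(capture, master, max_diff_pixels=1))   # True: within tolerance
```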

    Saturday, August 06, 2005

    The Basics of Automated Testing - 1

    Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity.

    This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to do this, as manual testing is time consuming.

    As new versions of a piece of software are released, the new features would otherwise have to be tested manually time and again. But there are now tools that help testers automate GUI testing, which reduces test time as well as cost; other test automation tools support the execution of performance tests.

    Many test automation tools provide record-and-playback features that allow testers to interactively record user actions and replay them any number of times, comparing actual results to those expected.
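The heart of record-and-playback can be sketched in a few lines. While recording, each user action and its visible result are appended to a script; playback repeats the actions and compares what happens against the recording. `TinyApp` below is an invented stand-in for a real application under test, not any particular tool's API:

```python
# Sketch only: record user actions and their visible results, then
# replay them and flag any mismatch as a test failure.
class TinyApp:
    def type_text(self, field, text):
        return f"{field} now shows '{text}'"      # the user-visible response
    def click(self, button):
        return f"clicked '{button}'"

def record(app, actions):
    script = []
    for method, args in actions:
        result = getattr(app, method)(*args)
        script.append((method, args, result))     # remember the expected result
    return script

def playback(app, script):
    for method, args, expected in script:
        if getattr(app, method)(*args) != expected:
            return False                          # mismatch: report a failure
    return True

script = record(TinyApp(), [("type_text", ("user", "alice")), ("click", ("submit",))])
print(playback(TinyApp(), script))                # True: replay matches the recording
```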

    Test automation is an important subject since it's a big part of any tester’s job. It's clearly not the best use of time to click on the same button looking for the same dialog box (the expected result) every single day. Part of smart testing is delegating those kinds of tasks away so we can spend time on harder problems. And computers are a great place to delegate repetitive work.

    That's really what automated testing is about: we try to get computers to do our job for us. One way a tester describes his goal is to put himself out of a job - meaning automating the entire job. This is, of course, unachievable, so we don't worry about losing our jobs. But it's a good vision!

    Our short term goal should always be to automate the parts of our job we find most annoying with the selfish idea of not having to do annoying stuff any more!!!

    With people new to automated testing, that's always how we frame it. Start small: pick an easy task that you have to do all the time, then figure out how to have a computer do it for you. This has a great effect on your work, since automating that first task frees up more of your time to automate more annoying, repetitive tasks. With all this time, you can go and focus on testing more interesting parts of your software.

    That last paragraph makes it sound like writing automated tests is easy, when in fact it's typically quite hard!!!

    There are some fundamentally hard problems in this space, and a lot of test tools try to help with them in different ways. Hopefully this discussion will be valuable as a way to better understand automated testing and as a way to help you choose your test tools. As a side note, implementing automated tests for a text-based or API-based system is really pretty easy, so let us focus on a full UI application - which is where the interesting issues are.

    Automated testing can be broken into two big pieces:
    · Running the automated tests.
    · Validating the results.

    ... to be continued