Wednesday, September 07, 2005

Software Test Automation and the Product Life Cycle


The Product Life Cycle (PLC) and Automated Test

A product's stages of development are referred to as the product life cycle (PLC). There is considerable work involved in getting a product through its PLC. Software testing at many companies has matured as lessons have been learned about the most effective test methodologies. Still, there is a great difference of opinion about the implementation and effectiveness of automated software testing and how it relates to the PLC.

Computers have taken over many functions in our society that were once "manual" operations. Factories use computers to control manufacturing equipment and have cut costs enormously. Electronics manufacturers use computers to test everything from microelectronics to circuit card assemblies. Since automation has been so successful in so many areas, does it make sense that a software program should be used to test another software program? This is referred to as "automated software testing" for the remainder of this article.

Software testing using an automatic test program will generally avoid the errors that humans make when they get tired after multiple repetitions. The test program won't skip any tests by mistake. The test program can also record the results of the test accurately. The results can be automatically fed into a database that may provide useful statistics on how well the software development process is going. On the other hand, software that is tested manually will be tested with a randomness that helps find bugs in more varied situations. Since a software program usually won't vary each time it is run, it may not find some bugs that manual testing will. Automated software testing is never a complete substitute for manual testing.

There has been plenty of debate about the usefulness of automated software testing. At one extreme are companies quite satisfied with developers testing their own work: as soon as the developer says it's done, they ship it. Testing your own work is generally thought of as risky, since you'll be likely to overlook bugs that someone not so close to the code (and not so emotionally attached to it) will see easily. At the other extreme is the company that has its own automatic software test group as well as a group that tests the software manually. Just because we have computers, does that mean it is cost effective to write tests to test software and then spend time and resources to maintain them? The answer is both yes and no. When properly implemented, automated software test can save a lot of time, time that will be needed as the software approaches shipping.

This is where the PLC comes in. How effectively you make use of the PLC will often depend on your programming resources and the length of the PLC. Companies large and small struggle with software testing and the PLC. This discussion of the PLC should help you determine when to use automation and when manual testing is preferred, and should help you answer the questions: "Why should I automate my software testing?" "How can I tell if automation is right for my product?" "When is the best time to develop my test software?"


The Product Life Cycle

As we discuss the use of automated and manual testing, we need to understand what happens in each phase of the product life cycle. The PLC is made up of six major stages: the Design Phase, the Code Complete Phase, the Alpha Phase, the Beta Phase, the Zero Defect Build Phase, and the Green Master Phase. You can think of the PLC as a timeline showing the development of a software product, with these stages as its major milestones. Products that follow these guidelines for implementation of the PLC will have a much better chance of making it to market on time.

The implementation of the PLC varies widely from company to company. You can use this as a guide for future reference to assist you in your automation efforts. Your implementation will vary from the ideal PLC that is discussed here, but your software's success may depend on how well you've implemented its PLC. If your PLC is to include automated testing you should pay attention to which automated tasks are performed during each phase.

For each phase we'll describe it, define its special importance and discuss how to incorporate software automation into your project. Most other discussions of the PLC don't include the lessons learned about test automation. This should be your "one-stop" guide to help you know how and when automation fits into the PLC.


Design Phase

What is the Design Phase? The design phase begins with an idea. Product managers, QA, and Development get together at this point to determine what will be included in the product. Planning is the essence of the design phase. Begin with the end in mind and with a functional specification. Write down all of your plans. What will your product do? What customer problems does it solve?

Some companies, incorrectly, don't include Quality Assurance (QA) in the design phase. It is very important that QA be involved as early as possible. While developers are writing code, QA will be writing tests. Even though QA won't yet have the total picture of the product, they will want to get as much of a jump on things as possible. Remember that the primary purpose of QA is to report status. It is important to understand the product's status even early in the Design Phase.

Why is the Design Phase important? If you think you're too short on time to write up a functional description of your product, then consider the extra time involved in adding new features later on. Adding features later (especially once the Code Complete Phase has been reached) is known as "feature creep". Feature creep is a costly, haphazard way to develop your product, and may materially interfere with delivery of the software.

Automation activity during the Design Phase. As soon as the functional specification is written, create all test cases so that they can be run manually. Yes, that's right, manually! These manual tests are step-by-step "pseudo" code that would allow anyone to run the test. The benefits of this approach are:

  1. Your test cases can be created BEFORE ever seeing the software's user interface (UI). It is too soon to automate tests at this point in time, but you can create manual tests with only a small risk that later changes will invalidate them. This is a point of great frustration for those who have tried to implement automated test scripts too early: just as soon as the test script is written, changes in the UI are introduced and all the work on the script turns out to have been for nothing.
  2. When (not if) the code is modified, you will always have manual procedures that can be adapted to the change more quickly than an automated test script. This is a great way to guarantee that you will at least have tests you can perform even if automation turns out to not be feasible. (Note: one of the objections to software test automation is that the tests must be continually updated to reflect changes in the software. These justifiable objections usually stem from the fact that automation was started too early.)
  3. Test methods can be thought out much more completely because you don't have to be concerned with the programming language of the automation tool. The learning curve of most automation tools may get in the way of writing meaningful tests.
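To make this concrete, a manual test case of this kind can be kept as structured "pseudo" steps that anyone can follow. Here is a minimal sketch; the test ID, feature names, and expected results are invented for illustration, not taken from any real product:

```python
# A manual test case kept as structured pseudo-steps. Everything here
# (IDs, menu names, expected results) is hypothetical.

manual_test = {
    "id": "TC-017",
    "title": "Save a new document",
    "steps": [
        ("Launch the application",             "Untitled document window opens"),
        ("Type 'hello' into the document",     "Text appears in the window"),
        ("Choose File > Save As...",           "Save dialog appears"),
        ("Enter the name 'test1' and confirm", "Window title changes to 'test1'"),
    ],
}

def render_checklist(case):
    """Format a test case so anyone can step through it by hand."""
    lines = [f"{case['id']}: {case['title']}"]
    for n, (action, expected) in enumerate(case["steps"], 1):
        lines.append(f"  {n}. {action} -> expect: {expected}")
    return "\n".join(lines)

print(render_checklist(manual_test))
```

Because the steps are data rather than tool-specific script code, the same case can later be handed to an automation team largely unchanged.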

If you have the resources available, have them begin training on the test tools that will be used. Some members of the team should start writing library routines that can be used by all the test engineers when they start their test coding. Some of these routines will consist of data collection/result reporting tools and other common functions.
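As a sketch of what one such shared routine might look like, here is a hypothetical result-reporting function; the field names and log shape are assumptions, not any published standard:

```python
# Sketch of a common result-reporting routine every test script can call.
# The record fields ("test", "passed", "details", "when") are made up.
import datetime

results = []

def report(test_name, passed, details=""):
    """Record one test outcome; records could later feed a database
    that provides statistics on how development is going."""
    results.append({
        "test": test_name,
        "passed": passed,
        "details": details,
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
    })

def summary():
    """Return (passed, failed) counts for a quick status report."""
    passed = sum(1 for r in results if r["passed"])
    return passed, len(results) - passed

report("launch", True)
report("save_as", False, "dialog did not appear")
print(summary())  # -> (1, 1)
```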

After the manual test cases have been created, decide with your manager which test cases should be automated. Use the Automation Checklist found later in this article to assist you in deciding what tests to automate. If you have enough manpower you may want to have a test plan team and an automation team. The test plan team would develop tests manually, and the automation team would decide which of the manual tests should be run automatically, following the guidelines of the Automation Checklist. The automation team would be responsible for assuring that the tests can be successfully and cost-effectively automated.

Sometime during the design phase, as soon as the design is firm enough, you'll select the automation tools that you will need. You don't have to decide exactly which tests need to be automated yet, but you should have an idea of the kinds of tests that will be performed and the necessary capabilities of the tools. That determination is easier as the software gets closer to the code complete phase. Your budget and available resources will begin to come into play here.

For just a moment, let's discuss some of the considerations you should use in selecting the test tools you need. You'll also want to keep in mind the Automation checklist later in this column. It will help you determine if a test should be automated. There are a few good testing tools including Apple Computer's Virtual User (VU) (See the September, 1996 article "Software Testing With Virtual User", by Jeremy Vineyard) and Segue's QA Partner (Segue is pronounced "Seg-way").

Is there a lot of user interface (UI) to test? Software with a lot of UI is well suited for automated black box testing. However, some important considerations are in order here. You need to get involved with development early to make sure that the UI can be "seen" by the automation tool. For example, I've seen programs in which a Virtual User 'match' task (note: a task is what a command is called in Virtual User) couldn't find the text in a text edit field. This occurred because the programs didn't use standard Macintosh calls, but instead were based on custom libraries that provided UI features their own way.

Will the automated test environment affect the performance or operation of the system being tested? When you're trying to test the latest system software, you don't want the testing system changing the conditions of the test.

Is the speed at which the tests run a consideration? If you're trying to measure the performance of a system, you'll want to make sure that the conditions are as much like the "real world" as possible. You should consider the amount of network traffic present while you're running your tests. Also, the speed of your host processor can affect the time it takes your tests to run. You should schedule your tests so that you minimize the possibility of interfering with someone else on your network. Either isolate your network from others or warn them that you will be testing and that their network activity may slow down.

What kinds of tests will be performed? The lower the level of the testing, the more likely it is that white box testing should be used. A good example would be a function that does a calculation based on specific inputs. A quick C program that calls the function directly would be much faster and could be written to check all the possible limits of the function. A tool like VU would only be able to access the function through the UI and would not be able to approach the amount of coverage that a C program could achieve in this situation.
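The idea behind that "quick program that calls the function" can be sketched as follows (shown in Python rather than C purely for brevity; clamp() is a made-up stand-in for whatever calculation function is under test):

```python
# White-box boundary test in the spirit of the "quick C program" described
# above. clamp() is a hypothetical function under test, driven directly,
# below the UI.

def clamp(value, low, high):
    """The function under test: restrict value to the range [low, high]."""
    return max(low, min(high, value))

# Hit the limits directly -- something a UI-level tool could only reach
# indirectly, one dialog at a time.
boundary_cases = [
    (-1, 0, 10, 0),   # just below the lower limit
    (0,  0, 10, 0),   # exactly at the lower limit
    (10, 0, 10, 10),  # exactly at the upper limit
    (11, 0, 10, 10),  # just above the upper limit
]
for value, low, high, expected in boundary_cases:
    assert clamp(value, low, high) == expected
print("all boundary cases passed")
```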

Is there a library of common functions available or will you have to write them yourself? It will save a lot of time if you don't have to develop libraries yourself. No matter how extensive the command set, efficient use of library functions will be essential. Libraries that others have written may be useful; you can modify them to meet your own needs.

What will be the learning curve for a script programmer? The time it takes will depend greatly on the kind of testing you have to do and the experience of the programmer. If you've done your homework on the available test tools, you should know what to expect. Some companies even offer training courses (for a price) in their test software.

Can your automation tool automatically record actions for you? Some tools do this, but don't rely on it too heavily. The tools I've seen that do this end up creating code full of hard-coded strings, organized sequentially rather than as calls to procedures. These recorded scripts are harder to maintain and reuse later. If you plan to use the same script for international testing, modifying it will mean much more work. If you want to record actions, I recommend doing so only to create short functions, and you should edit the script after recording to remove the unwanted hard-coded strings, etc.
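The difference between raw recorded output and an edited, reusable procedure can be sketched like this; the click() and type_text() driver functions are hypothetical stand-ins for whatever commands your automation tool actually provides:

```python
# Contrast between recorder output and a hand-edited procedure.
# click()/type_text() are made-up stand-ins for a tool's commands.

actions = []  # stand-in for the tool's command channel

def click(target):
    actions.append(("click", target))

def type_text(text):
    actions.append(("type", text))

# A recorder tends to emit a flat sequence of hard-coded strings:
#   click("File"); click("Open..."); type_text("report.doc"); click("Open")

# After editing, the same steps become a reusable, localizable procedure:
def open_document(filename, menu="File", item="Open..."):
    """Open a named document. Menu labels are parameters, so the same
    script can be reused for international builds."""
    click(menu)
    click(item)
    type_text(filename)
    click("Open")

open_document("report.doc")
print(actions)
```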

Can you run AppleScript scripts from the tool's script language? This is a very useful feature since AppleScript scripts are so easy to write and can add additional functionality to your test tool.

In preparing this article, I encountered several "pearls" worth relating:

"Success in test automation requires careful planning and design work, and it's not a universal solution. ... automated testing should not be considered a replacement for hand testing, but rather as an enhancement." (Software Testing with Visual Test 4.0, forward by James Bach, pg. vii)

"The quality assurance engineers then come on the scene... and begin designing their overall test plan for the features of the product...."

"The goal is to have the test plan and checklists laid out and ready to be manually stepped through by the test engineers when each feature is completed by the programmers. Each item on a checklist is considered a scenario and related scenarios are grouped into test cases." (Software Testing with Visual Test 4.0, pg. 5-6)

Code Complete Phase

What is the Code Complete Phase? At this major milestone the code has been completed. The code has been written, but not necessarily debugged. (Development may try to claim they are at code complete even though they still have major coding left to do. Go ahead and let them declare code complete, but don't let them get to Alpha until the code really is completely written.)

Why is the Code Complete Phase important? Sooner or later you'll have to get to a point where new code is no longer being written, and the major effort is in fixing bugs. Development will be relieved to get to this point as now they don't have to be as concerned with the initial coding and can concentrate on refining the existing product. (This is why they will try to claim they are at code complete even when they are not).

Automation activity during the Code Complete Phase

Although the UI may still change, QA can begin writing automated test cases. The tests that should be written at this point are breadth tests that tell the status of the overall software product. Don't write tests that stress the product until you get close to Alpha; the product will probably break very easily. Some acceptance (or "smoke") tests should also be created to give a quick evaluation of the status of a particular build. Before reaching the Alpha phase there should also be tests written for the Installer, boundaries (or stress) tests, compatibility (hardware and OS), performance, and interoperability.
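An acceptance ("smoke") test of this kind can be as simple as a short script that runs a handful of quick checks and returns an accept/reject verdict for the build. This is a minimal sketch with made-up check names standing in for real checks:

```python
# Minimal smoke-test sketch: a few fast checks, one verdict per build.
# The check functions are placeholders for real "does it launch, can it
# save" style probes.

def check_launch():   return True  # e.g. app starts without crashing
def check_new_doc():  return True  # e.g. a new document can be created
def check_save():     return True  # e.g. the document can be saved

SMOKE_TESTS = [check_launch, check_new_doc, check_save]

def smoke(build_id):
    """Run every smoke test; any failure rejects the build immediately."""
    failures = [t.__name__ for t in SMOKE_TESTS if not t()]
    verdict = "ACCEPT" if not failures else "REJECT"
    return f"build {build_id}: {verdict}"

print(smoke("1997-10-02a"))  # -> build 1997-10-02a: ACCEPT
```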

Somewhere just before code complete, you will need to decide which tests should be made into automatic tests and what test tools to use. Use the following checklist to help you determine which tests should be automated:

Automation Checklist

If you answer yes to any of these questions, then your test should be seriously considered for automation.


Can the test sequence of actions be defined?

Is it useful to repeat the sequence of actions many times? Examples of this would be Acceptance tests, Compatibility tests, Performance tests, and regression tests.

Is it necessary to repeat the sequence of actions many times?

Is it possible to automate the sequence of actions? If it is not, then automation is not suitable for this sequence of actions.

Is it possible to "semi-automate" a test? Automating portions of a test can speed up test execution time.

Is the behavior of the software under test the same with automation as without? This is an important concern for performance testing.

Are you testing non-UI aspects of the program? Almost all non-UI functions can and should be automated tests.

Do you need to run the same tests on multiple hardware configurations?

Finally, run ad hoc tests. An ad hoc test is a test performed manually, in which the tester attempts to simulate real world use of the software product; you should try to imagine yourself in real world situations and use your software as your customer would. It is during ad hoc testing that the most bugs will be found. Ideally, every bug should have an associated test case: as bugs are found during ad hoc testing, new test cases should be created so that the bugs can be reproduced easily and so that regression tests can be performed when you get to the Zero Defect Build phase. It should be stressed that automation can never be a substitute for manual testing.
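The bookkeeping of "one regression test case per bug" can be sketched as a simple registry; the bug number and the save_allowed() function here are invented for illustration:

```python
# Sketch of keeping one regression test per bug found in ad hoc testing,
# so the Zero Defect Build phase can replay every fix. "BUG-1042" and
# save_allowed() are hypothetical.

regression_suite = {}  # bug id -> callable test reproducing it

def regression_test(bug_id):
    """Decorator: register a test case under the bug it reproduces."""
    def register(test_fn):
        regression_suite[bug_id] = test_fn
        return test_fn
    return register

@regression_test("BUG-1042")
def test_empty_filename_rejected():
    # Reproduces an ad hoc finding: saving with an empty name misbehaved.
    return save_allowed("") is False

def save_allowed(name):
    """Stand-in for the fixed behavior under test."""
    return bool(name.strip())

failed = [bug for bug, t in regression_suite.items() if not t()]
print(f"{len(regression_suite)} regression test(s), {len(failed)} failure(s)")
```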

Alpha Phase

What is the Alpha Phase? Alpha marks the point in time when Development and QA consider the product stable and completed. The Alpha Phase is your last chance to find and fix any remaining problems in the software. The software will go from basically functional to a finely tuned product during this phase.

Why is the Alpha Phase important? Alpha marks a great accomplishment in the development cycle. The code is stable and most of the major bugs have been found and fixed.

Automation Activity During The Alpha Phase

At this point you have done the tasks needed to reach Alpha. That is, you have all your compatibility, interoperability, and performance tests completed and automated as far as possible. During Alpha you'll run breadth tests on every build. You'll also run the compatibility, interoperability, and performance tests at least once before reaching the next milestone (Beta). After the breadth tests are run on each build, you'll want to do as much ad hoc testing as possible. As above, every bug should be associated with a test case that reproduces the problem.

Beta Phase

What is the Beta Phase? The product is considered "mostly" bug free at this point, meaning that all major bugs have been found. There should only be a few nonessential bugs left to fix. These should be bugs that the user will find annoying or bugs that pose relatively little risk to fix. If any major bugs are found at this point, there will almost certainly be a slip in the shipping schedule.

Automation activity during the Beta Phase

There's no more time left to develop new tests. You'll run all of your acceptance tests as quickly as possible and spend the remaining time on ad hoc testing. You'll also run compatibility, performance, interoperability and installer tests once during the beta phase.

Remember that as you do ad hoc testing, every bug should have an associated test case. As bugs are found during ad hoc testing, new test cases should be created so that the bugs can be reproduced easily and so that regression tests can be performed when you get to the Zero Defect Build phase.

Zero Defect Build Phase

What is the Zero Defect Build Phase? This is a period of stability where no new serious defects are discovered. The product is very stable now and nearly ready to ship.

Automation Activity During The Zero Defect Build Phase

Run regression tests. Regression testing means running through your fixed defects again and verifying that they are still fixed. Planning for regression testing early will save a lot of time during this phase and the Green Master phase.

Green Master

What is the Green Master Phase? Green Master is sometimes referred to as the Golden Master or the final candidate. The product goes through a final checkout before it is shipped (sent to manufacturing).

Automation activity during the Green Master Phase

After running general acceptance tests, run regression tests. You should run through your fixed defects once again to verify that they are still fixed. Planning for regression testing early will save a lot of time during this phase.

===================================================

by Dave Kelly, Symantec Corporation

http://www.mactech.com/articles/mactech/Vol.13/13.10/SoftwareTestAutomation/
