Wednesday, August 10, 2005

The Basics of Automated Testing - 2


... continued from The Basics of Automated Testing - 1


Automated testing can be broken into two big pieces:
· Running the automated tests
· Validating the results


Running the Automated Tests

This concept is pretty basic: if you want to test the submit button on a login page, you can override the system and programmatically move the mouse to a set of screen coordinates, then send a click event. There is another, much simpler way to do this: directly call the internal API that the button's click event handler calls. Calling into the API is appealing because it's easy. Calling an API function from your test code is a piece of cake, just add a function call. But then you aren't actually testing the UI of your application. Sure, you can call the API for functionality testing, then every now and then click the button manually to be sure the right dialog opens.
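
To make the contrast concrete, here's a minimal Python sketch of the two approaches. It assumes a hypothetical internal module called login_api and uses the pyautogui package for the mouse simulation; the coordinates and names are just for illustration.

import pyautogui     # simulates mouse and keyboard input
import login_api     # hypothetical internal API that the click handler calls

def test_login_via_api():
    # The easy way: call the same function the submit button's handler calls.
    result = login_api.submit_login("testuser", "s3cret")
    assert result.success

def test_login_via_ui():
    # The UI way: click the submit button the way a user would.
    pyautogui.click(x=640, y=412)   # absolute screen coordinates of the button
    # ...then verify that the right dialog opened (see results verification below)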


Rationally this really should work great, but a lot of testing exists outside the rational space. There might be lots of bugs that only happen when the user goes through the button instead of calling the API directly. And here's the critical part: almost all of your users will use your software through the UI, not the API. So the bugs you miss by only going through the API will be high-exposure bugs. They won't happen all the time, but they're the kind of thing you really don't want to miss, especially if you were counting on your automation to cover that part of the program.


If your automation only goes through the API, you're getting no test coverage on your UI, and you'll have to cover that by hand.


Simulating the mouse is good because it exercises the UI the whole time, but it has its own set of problems. The real issue here is reliability. You have to know the coordinates you're trying to click beforehand. This is doable, but lots of things can make those coordinates change at runtime. Is the window maximized? What's the screen resolution? Is the start menu on the bottom or the left side of the screen? Did the last guy rearrange the toolbars? And what if the application is being used in Arabic, where the display runs right to left? These are all things that will change the absolute location of your UI.


The good news is there are tricks around a lot of these issues. The first key is to always run at the same screen resolution on all your automated test systems (note: there are bugs we could be missing here, but we won't worry about that now - those are beyond the scope of our automation anyway). We also like to make our first automated test action maximizing the program. This takes care of most of the big issues, but small things can still come up.
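
Here's a rough sketch of that first action, assuming the pygetwindow package and an application window titled "MyApp" (both are assumptions; any way of finding and maximizing the top-level window will do).

import pygetwindow as gw    # finds and manipulates top-level windows

def prepare_app_window():
    win = gw.getWindowsWithTitle("MyApp")[0]   # assumed window title
    win.maximize()                             # first automated action: maximize
    win.activate()                             # make sure it has focus before clicking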


The really sophisticated way to handle this is to use relative positioning. If your developers are nice, they can build in some test hooks for you so you can ask the application where it is. This even works for child windows: you can ask a toolbar where it is. If you know that the 'File -> New' button is always at (x, y) inside the main toolbar, it doesn't matter whether the application is maximized or the last user moved all the toolbars around. Just ask the main toolbar where it is, tack on (x, y), and click there.
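
A sketch of what that looks like, assuming the developers exposed a hypothetical test hook called get_toolbar_rect that reports a toolbar's current screen rectangle:

import pyautogui

FILE_NEW_OFFSET = (24, 18)   # (x, y) of 'File -> New' inside the main toolbar

def click_file_new():
    # Ask the application where the toolbar is right now (hypothetical hook),
    # then click at a fixed offset inside it.
    left, top, width, height = get_toolbar_rect("MainToolbar")
    pyautogui.click(left + FILE_NEW_OFFSET[0], top + FILE_NEW_OFFSET[1])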

So this has an advantage over just exercising the APIs, since you're using the UI too, but it has a disadvantage as well: it involves a lot of work.


Results Verification

So we have figured out the right way to run the tests, and we have this great test case, but after we have told the program to do stuff, we need a way to know if it did the right thing. This is the verification step in our automation, and every automated script needs it.

We have several options:
· Verify the results manually
· Verify the results programmatically
· Use some kind of visual comparison tool


The first method is to do it ourselves: manually verify the results and check that they meet our expectations.


The second way is, of course, to verify it programmatically. In this method, we have a predefined set of expected results (the baseline) that can be compared with the obtained results. The output is simply whether the test case passed or failed. There are several ways to achieve this: we can hard-code the expected results in the program or script, or we can store them in a separate file (a text file, a properties file, or an XML file), read the expected results from that file, and compare them with the obtained results.
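
For example, here is a small sketch of the file-based flavor, assuming the baseline is a plain text file with one expected value per line (the file name and format are just illustrative).

def verify_against_baseline(actual_lines, baseline_path="login_test.baseline.txt"):
    # Read the expected results (the baseline) and compare line by line.
    with open(baseline_path) as f:
        expected_lines = [line.rstrip("\n") for line in f]
    if actual_lines == expected_lines:
        print("PASS")
        return True
    print("FAIL: output did not match the baseline")
    return False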


The other way is to just grab a bitmap of the screen and save it off somewhere. Then we can use a visual comparison tool to compare it with the expected bitmap files. Using a visual comparison tool is clearly the best, but also the hardest. Here your automated test gets to a place where it wants to check the state, looks at the screen, and compares that image to a master it has stored away somewhere.
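
A minimal sketch of that comparison, using the Pillow imaging library and assuming a stored master bitmap called new_dialog_master.png:

from PIL import ImageGrab, Image, ImageChops

def screen_matches_master(master_path="new_dialog_master.png"):
    actual = ImageGrab.grab()            # capture the current screen
    master = Image.open(master_path)
    if actual.size != master.size:
        return False                     # different sizes can never match
    diff = ImageChops.difference(actual.convert("RGB"), master.convert("RGB"))
    return diff.getbbox() is None        # no differing pixels means a match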
