In this post I would like to present the evolution of test automation so far, and try to sketch its next stage.
First Generation – Control
The first generation of test automation (at least as I remember, I’m not so old after all) focused on GUI applications and tried to solve the control problem: how can we control the application? How should the tool identify UI components?
Add to this the ‘Record and Replay’ approach and we get a total mess. We can create tests very quickly, but maintenance is a nightmare. In most cases we will not even be able to execute them against the next version of our application.
For example, let’s say that we want to test some kind of authentication system with a web management interface. Our test will include commands like the following:
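For illustration, here is roughly what such a recorded script might look like. The selector strings and the `click`/`type` API are invented for the example (stubbed out below so the sketch runs standalone); a real record-and-replay tool would have its own playback engine:

```java
import java.util.ArrayList;
import java.util.List;

public class RecordedLoginScript {
    // Stubs standing in for a record-and-replay tool's playback engine.
    static final List<String> log = new ArrayList<>();
    static void click(String selector)            { log.add("click " + selector); }
    static void type(String selector, String txt) { log.add("type " + selector + "=" + txt); }

    public static void main(String[] args) {
        // The recorded steps: raw UI actions with hard-coded selectors.
        // Any change to the login page breaks every one of them.
        click("input#username");
        type("input#username", "admin");
        click("input#password");
        type("input#password", "secret");
        click("button#login");
    }
}
```

Note that nothing here says "log in"; the intent is buried in low-level UI actions.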
Or, if we want to test a networking device, the same approach would yield the following test:
connect(deviceHost, username, password);
cliCommand("router bgp 200");
cliCommand("neighbor 220.127.116.11 remote-as 300");
As you can see, we end up with extremely long scripts.
It was very hard to understand the test logic from the script itself, which limited the scale of the project, because usually only the people who built the script were able to debug and maintain it.
We can still see tools that have this exact focus.
Second Generation – Business logic
In the second generation, the focus moved from controlling the application to modeling the business logic of the System Under Test (SUT). In this methodology, we start by building a management object with logical operations representing the SUT business logic, and then we use these operations in our tests. It is the management object’s responsibility to control the SUT.
Note that now we can talk about the SUT in general, not just GUI applications: control is no longer at the heart of the methodology, and we can think of many ways to control an application, not just its GUI.
With our networking device, we will have a management object with operations like the following:
And the test will be:
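As an illustrative sketch (the interface, class, and method names are my own, not a specific framework’s API, and the CLI session is stubbed with a command log), the management object and the test might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Management object: logical operations hide the CLI syntax from the tests.
interface RouterManager {
    void connect(String host, String user, String password);
    void configureBgp(int asNumber);
    void addBgpNeighbor(String neighborIp, int remoteAs);
}

// CLI-backed implementation; it alone knows the command syntax.
class CliRouterManager implements RouterManager {
    final List<String> sentCommands = new ArrayList<>(); // stands in for a real CLI session

    public void connect(String host, String user, String password) {
        sentCommands.add("connect " + host);
    }
    public void configureBgp(int asNumber) {
        sentCommands.add("router bgp " + asNumber);
    }
    public void addBgpNeighbor(String neighborIp, int remoteAs) {
        sentCommands.add("neighbor " + neighborIp + " remote-as " + remoteAs);
    }
}

// The test reads as business logic; no CLI strings appear in it.
public class BgpTest {
    public static void main(String[] args) {
        RouterManager router = new CliRouterManager();
        router.connect("router1.lab", "admin", "secret");
        router.configureBgp(200);
        router.addBgpNeighbor("10.0.0.1", 300);
    }
}
```

If the CLI syntax changes, only `CliRouterManager` changes; `BgpTest` and its hundreds of siblings stay untouched.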
Note that the tests are much shorter and much more readable. The most important advantage of this approach is the ability to absorb changes in the application. If the syntax of a command in the SUT changes, you only need to change a single point in the management object, not hundreds of tests.
Another advantage, which I have used many times, comes from object-oriented capabilities. Let’s say you have two types of routers in your setup (or two versions of the same router). You can build two implementations of the same interface, one for each router. The same test will then run against either without any changes. This is highly important in large-scale projects.
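A sketch of this idea: two implementations of one interface, each emitting its own dialect of configuration commands (the command strings below are illustrative, styled after common router CLIs, not exact configuration), driven by the exact same test code:

```java
import java.util.List;

interface BgpConfigurator {
    List<String> bgpCommands(int localAs, String neighborIp, int remoteAs);
}

// First router type: IOS-style syntax.
class RouterTypeA implements BgpConfigurator {
    public List<String> bgpCommands(int localAs, String neighborIp, int remoteAs) {
        return List.of("router bgp " + localAs,
                       "neighbor " + neighborIp + " remote-as " + remoteAs);
    }
}

// Second router type: a different dialect; the test never sees the difference.
class RouterTypeB implements BgpConfigurator {
    public List<String> bgpCommands(int localAs, String neighborIp, int remoteAs) {
        return List.of("set routing-options autonomous-system " + localAs,
                       "set protocols bgp neighbor " + neighborIp + " peer-as " + remoteAs);
    }
}

public class SameTestTwoRouters {
    // The same "test" runs against either implementation unchanged.
    static List<String> runTest(BgpConfigurator router) {
        return router.bgpCommands(200, "10.0.0.1", 300);
    }

    public static void main(String[] args) {
        System.out.println(runTest(new RouterTypeA()));
        System.out.println(runTest(new RouterTypeB()));
    }
}
```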
In many cases, business-logic test automation will do just fine. If we want to improve maintainability even more, the following capabilities can be added:
- SUT parameterization – a configuration file that describes the SUT. It holds setup data like host name, user name, password …
- Test parameterization – support for user input per test and scenario.
- Setup configuration management – manage different setup states (configurations).
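As a sketch of the first item, SUT parameterization could be as simple as a properties file (the keys and values here are invented for illustration; a real framework may use its own format, e.g. XML):

```properties
# Hypothetical SUT description file: setup data kept out of the tests
router1.host=192.168.10.1
router1.user=admin
router1.password=secret
station1.host=192.168.10.50
station1.os=linux
```

Swapping setups then means swapping files, with no change to the tests themselves.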
You can go very far with this kind of methodology.
The main disadvantage of this methodology is that it requires programming skills from everyone involved in the project. If the complexity of the system you are testing is not very high, you can use frameworks that hide the code to some degree (but never completely).
So what if you have a complex system to test but not all the members in the testing team have programming skills?
Third Generation – Model Based Testing
The future belongs to model-based testing. It enables moving the complexity from the tests to the model. The tests can stay very simple and at the same time exercise a very complex environment.
Let’s say I would like to test a routing environment. A test for a multi-router environment can be very complex.
The logical model can be built as follows:
- You have a lab that contains routers.
- Each router contains network interfaces and has properties like name and host. It can have more than one implementation.
- Each network interface contains IP interfaces and stations, and has properties like slot, port and peer.
- A station can have Linux and Windows implementations and has properties like host.
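The list above maps naturally onto an object model. A minimal sketch (all class and field names are illustrative, not JSystem’s actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative object model for the lab description above.
class Station {              // concrete subtypes: LinuxStation, WindowsStation
    String host;
}

class NetworkInterface {
    int slot, port;
    String peer;                                  // the interface it is wired to
    final List<String> ipInterfaces = new ArrayList<>();
    final List<Station> stations = new ArrayList<>();
}

class Router {               // vendor/version-specific behavior goes in subclasses
    String name, host;
    final List<NetworkInterface> interfaces = new ArrayList<>();
}

public class Lab {
    final List<Router> routers = new ArrayList<>();
}
```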
I will demonstrate with JSystem, an automation framework that supports model-based test automation.
First I build a simple tree representation of the model using the “SUT Planner”:
Now I can build tests by using generic business logic statements:
The test builds a loop over two models (STAR and MESH, and maybe more):
- First step will take the model and implement it on the setup
- Second step will test ping connectivity from every station to all the others.
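The two steps above can be sketched as follows. `Topology`, `applyToSetup` and `ping` are invented stand-ins for the framework’s model handling and station operations, stubbed so the sketch runs on its own:

```java
import java.util.ArrayList;
import java.util.List;

public class ModelConnectivityTest {
    // A named topology model over a set of stations.
    static class Topology {
        final String name;
        final List<String> stations;
        Topology(String name, List<String> stations) { this.name = name; this.stations = stations; }
    }

    // Stand-in for applying the model to the physical setup.
    static void applyToSetup(Topology model) { /* configure routers and interfaces here */ }

    // Stand-in for a real ping issued through the station's management object.
    static boolean ping(String from, String to) { return true; }

    // All ordered station pairs: every station pings all the others.
    static List<String[]> allPairs(List<String> stations) {
        List<String[]> pairs = new ArrayList<>();
        for (String from : stations)
            for (String to : stations)
                if (!from.equals(to))
                    pairs.add(new String[] { from, to });
        return pairs;
    }

    public static void main(String[] args) {
        List<Topology> models = List.of(
                new Topology("STAR", List.of("s1", "s2", "s3")),
                new Topology("MESH", List.of("s1", "s2", "s3", "s4")));

        for (Topology model : models) {                        // the same test, every model
            applyToSetup(model);                               // step 1: implement the model
            for (String[] pair : allPairs(model.stations))     // step 2: full connectivity check
                if (!ping(pair[0], pair[1]))
                    throw new AssertionError(pair[0] + " cannot reach " + pair[1] + " in " + model.name);
        }
    }
}
```

Adding a new model to the list is all it takes to cover a new topology.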
The nice thing about it is that I can reuse the same test on new models as I build them.
Model-based testing is the future of test automation. It enables teamwork where the automation experts build the model capabilities and testers use them to build models and flows. This allows testers to focus on designing tests rather than writing software.
The main advantage is the ability to run the same flows on different models.