Let’s imagine the following situation. A day like any other: the Development Team is estimating a task, and all of a sudden the developers say: “It’s a simple task. We will finish it quickly.” The room falls silent for a moment, and the people responsible for testing start to wonder whether anyone has considered the impact of that task on the design of automated tests. Maybe a completely new functionality will need to be covered (e.g. authorization tokens or the login process)? Maybe several adjustments will be necessary? The role of the tester (or rather the developer, because Agile methodologies do not distinguish team members by role names) is to ensure that tests, including automated ones, are taken into account in the estimate. Effective communication is one of the principles of Agile methodologies: “It will take up a little of your time, but testers will spend much more time on it.” Therefore, the testers’ work should also be reflected in the estimate.
Such an approach at the estimation stage will have a positive effect on the work of the entire Development Team, because automated tests are also code. And just like production code, they should undergo the code review process. This may seem a waste of time “because these are just tests”. But poor-quality code is hard to maintain. Not without reason do 63% of experienced developers participating in HackerRank research consider “spaghetti code” one of the most irritating pet peeves in a project. In addition, such test code produces unreliable results and can miss important business cases.
Read more: Test automation common myths
Automated tests, Definition of Ready and Definition of Done
Definition of Ready
Definition of Ready (DoR) describes the state in which the team recognizes a task as understood and ready to be carried out. The DoR should also specify whether the task is testable, and if so, how. This also applies to automated tests. The team should outline and understand all the activities needed to include the given task in the test design. Even at this stage we are able to determine which of the tests will be automated. It is worth bearing in mind that they will certainly affect the velocity of the Development Team. What does this mean? Let me explain. Velocity measures how quickly the Development Team is able to meet the Product Owner’s requirements. It is worth taking this factor into consideration when determining the scope of work in a given Sprint.
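To make the velocity factor concrete, here is a minimal sketch (my own illustration, not part of any Scrum tooling) of computing velocity as the average number of story points completed over the most recent Sprints; the figures are made up:

```python
# Illustrative sketch: velocity as the average number of story points
# completed over the last few Sprints. All numbers below are made up.
def velocity(completed_points, window=3):
    """Average story points delivered in the most recent `window` Sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Points completed in the last four Sprints, oldest first:
print(velocity([21, 18, 24, 20]))
```

Adding test automation work to a task initially lowers the points a team delivers per Sprint, which is exactly why that effort should be visible in the estimate rather than hidden.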
For example, in the case of a task that updates tools used in the system, the automated-test effort will be limited to running the existing suite after the changes, as a regression check confirming that the system still works properly.
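Such a regression run can be as simple as executing the existing suite programmatically. The sketch below uses Python’s standard unittest module; the test case itself is a hypothetical stand-in for real post-upgrade checks against the system:

```python
import unittest

class SmokeAfterUpgrade(unittest.TestCase):
    """Hypothetical post-upgrade smoke check; the assertion below is a
    stand-in for calls into the real system under test."""
    def test_core_behaviour_unchanged(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

def run_regression():
    """Run the suite and report whether the system still behaves as before."""
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeAfterUpgrade)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

print(run_regression())
```

In a real project the suite would be discovered from a tests directory and wired into CI, so the regression check runs automatically after every upgrade.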
The situation is different for a task that implements a new functionality. In this case, the development and code review of the tests themselves must be included in the estimate.
Definition of Done
Definition of Done (DoD) defines the moment when we believe that the task carried out from the Sprint Backlog has been completed. Should it then contain information about automated tests related to a given task? There are different opinions on this subject.
Some people believe that it will slow down development, thereby reducing the team’s velocity. The concern is a potential situation in which the developers finish their tasks and then simply wait for the testers to complete testing, without doing anything active themselves. Others, in turn, claim that it will help to create better-quality software and raise the Development Team’s awareness of testing. Proponents of this approach emphasize that it allows teams to reflect the effort of adapting tests in the complexity of the task, which should be taken into account during the estimate.
In my view, including automated tests in the DoD requires a well-coordinated and committed team with a high level of awareness of how Agile methodologies work. A lack of such awareness should not, however, be an argument against this measure. On the contrary: it will be a challenge, and it will additionally deepen the team’s understanding of the role of automated tests in the software delivery process.
In one of the projects I worked on, the team decided to include automated tests in the DoD because they wanted to confirm that the system worked correctly before presenting it to the client. Next, the developers asked for training so that, in the event of a heavy testing workload or the testers’ absence, the tests would not be neglected. This approach quickly proved to be fruitful. The tests prepared by the developers showed that changes to the code caused the system to malfunction in a completely different place than expected. And all of this before the code review of the solution itself!
Read more: What is Regression Testing
Should tasks related to creating and maintaining automated tests affect the scope of the Sprint? In my opinion, we need to distinguish two areas here. The first is related to the DoD, which I wrote about above. If tests have been included there, the Sprint scope must reflect the team’s velocity, including the work of the automating testers.
The second area, in turn, concerns test-adjacent tasks, i.e. those which are not directly about the tests themselves but have a significant impact on them. It is a good idea to ask a few questions here. Do testers have a separate environment for running their tests? If so, does it have adequate infrastructure? Do testers have adequate knowledge and/or access? Perhaps, while searching for answers to these questions, we will realize that other team members (developers, DevOps) need to be involved, which will affect the scope of the Sprint.
How to present automated tests during the meeting summarizing the work done in a particular Sprint?
- Certainly, what has been done and what area of functionality has been covered by tests should be briefly discussed. The extent to which it will unburden testers during manual regression testing is also important. This is the best way to present the impact of automated testing on collective work to the entire Development Team.
- If you work on a given application with several teams, it is also worth mentioning the problems encountered and ways of solving them. Why? It may turn out that the problem has already been solved by someone else, or – on the other hand – maybe we have solved a problem that someone else is working on.
- If necessary, it may be worth considering adequate reporting in the automated test design, for example by configuring a dedicated tool so that it is possible to present a ready-made report with statistics.
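As a tool-agnostic illustration of such reporting, the sketch below (the outcome labels and report format are my own assumption, not any specific tool’s API) turns raw per-test outcomes into the kind of summary statistics one might show at a Sprint Review:

```python
from collections import Counter

def summarize(outcomes):
    """Aggregate raw per-test outcomes ("pass"/"fail"/"skip") into
    report-ready statistics. Labels are illustrative assumptions."""
    counts = Counter(outcomes)
    total = len(outcomes)
    passed = counts.get("pass", 0)
    return {
        "total": total,
        "passed": passed,
        "failed": counts.get("fail", 0),
        "skipped": counts.get("skip", 0),
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
    }

print(summarize(["pass", "pass", "fail", "skip", "pass"]))
```

In practice, dedicated reporting plugins for the team’s test runner can generate such statistics automatically, so preparing the Sprint Review summary costs almost nothing.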
What to avoid:
- Individual test scenarios should not be presented and discussed. The Sprint Review is not the appropriate time for this; they should be assessed at the planning stage and during the subsequent code review.
- Definitely do not offer the team something that can be called a “test seance.” Do you consider running automated tests and watching “how the machine clicks on the application” together a great idea? You may be right – if you want the Sprint Review testing part to be tedious and treated by others as a necessary evil.
In this article I discussed the presence of automated tests in the Agile software development process and their influence on the work of the Development Team. It is worth remembering that automated tests are only part of the overall testing effort; without an appropriate process and an understanding of how they function, they will not be effective.
In Agile teams, automated testing facilitates the software delivery process for the entire Development Team. Therefore, testers (and developers) should take care of the tests and make them visible to others. The visibility of automated software tests at every stage of planning will certainly give the entire team a better understanding of their potential and enable more effective use of it in the future.