Testing and the Definition of Done
Previously, I have written about my current project, a system used by Stockholm County Council for computing the compensation the county pays to the various health care providers within the county for services provided to the people of Stockholm. I have also mentioned previously that I have taken on the role of de facto Scrum Master. In addition, I have become the de facto test manager in the team. The system does not really have users per se: it takes input in the form of XML files, performs a large number of calculations on this data, and then produces new XML files as output.
The XML input is very complex in that it contains all sorts of parameters, all of which help to define a particular visit to a doctor for a particular patient in Stockholm County. So if you want to test a change in a particular compensation model for the county, you have to first analyze the model, which is stored in the database, to see which caregivers in the county provide services covered by that model. Then you have to determine what sorts of care (diagnoses, conditions, treatments, etc.) will trigger different aspects of the compensation model and add these to your XML input. Finally, once you are sure you have constructed an XML file that the system will process and that will have the desired model applied to it, you can start introducing variations to the XML file (or to various forms of control data in the database) to test your changes.
Once you have a properly constructed test file that exercises your function, you then have to run one batch program that reads the file into the database. Then you run a second batch program that computes compensation for the doctor's visits. You also have to run two additional batch programs to produce all the output from the system. When all of this is done, you have to check the output (often in the form of various rows in database tables) to make sure that it matches the specification.
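One way to think about this chain is as a fixed, ordered plan, which is also what makes it a candidate for automation later on. A minimal sketch in Java of that plan, with entirely hypothetical step names (the actual batch programs are not named here):

```java
import java.util.List;

// Sketch of the batch chain as an ordered plan. The step names below are
// hypothetical placeholders, not the real batch programs.
public class CompensationPipeline {

    // The four steps, in the order they must run:
    // 1. load the XML test file into the database,
    // 2. compute compensation for the visits,
    // 3-4. produce all the output from the system.
    public static List<String> steps(String inputFile) {
        return List.of(
                "load_input " + inputFile,
                "compute_compensation",
                "generate_output_part1",
                "generate_output_part2");
    }

    public static void main(String[] args) {
        // An actual automation tool would launch each step (for example
        // with ProcessBuilder) and check its exit code before continuing;
        // here we simply print the plan.
        for (String step : steps("visits.xml")) {
            System.out.println(step);
        }
    }
}
```

Checking the database rows against the specification after the final step would remain a separate, mostly manual activity until it, too, is automated.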
So, the process of constructing test files and running them through the system is complex (and time-consuming) to say the least. Add to this the fact that we also perform the acceptance testing for system changes (we construct new files based on the product owner's acceptance test specifications instead of our own system test specifications). Generally, I take the tasks in the sprint that have to do with constructing the test files. My thinking has been that this leaves the developers free to work on the code changes. However, they are often so efficient that they need test files long before I have them ready, and even when I do have them ready, the process described above means that the files are prone to contain errors, so the developer often ends up debugging not only the code but also the test files.
Based on the description above, it is probably not surprising that we spend roughly 30-40 percent of our time constructing test files and running tests. As one might imagine, the entire Scrum team agrees that testing is the bottleneck that keeps us from reaching a higher velocity. So the question is how we can get better at testing.
In Scrum and XP from the Trenches, Henrik Kniberg discusses Scrum and testing in some detail. In particular, he points out that we cannot possibly hope to get rid of the acceptance test phase. Instead we should focus on what we can do to increase quality and thus shorten the acceptance testing phase:
- Include testers in the Scrum team. When they aren’t testing or preparing for tests, let them work on other tasks, like pair programming with a more traditional coder.
- The tester is the “signoff guy”. In other words, something is not done until he says so.
- Increase quality by doing less per sprint.
In our project at Kentor we don’t have any test experts per se. As I mentioned previously, I have become the de facto test manager, so I write the test cases and prepare the test files, and technically, I am the “signoff guy”. However, the developers are so good at running the tests that they can immediately see when things are not right. Of course, we do try to make sure that developers do not test their own code.
Kniberg also discusses whether or not to include acceptance testing in the sprint. He says that some of the Scrum teams he coaches do this but for the most part they avoid it for two reasons:
- Sprints are time-boxed. Acceptance testing is extremely difficult to time-box, due to the nature of testing (test, identify bug, debug, test again, etc.). At some point you will have to decide what to do with a bug when the time box runs out. Delay it to the next sprint? Release a buggy system? Neither option is good.
- If you have multiple Scrum teams, even if they each include acceptance testing in their sprints, you will still have to do a common acceptance testing after all the sprints.
In our project at Kentor, we have thus far included acceptance testing in the sprint. As I mentioned previously, we do a lot of the acceptance testing for the product owner. The question is whether or not this is a good idea. Because of the deterministic nature of the system, we cannot call a sprint done until the acceptance testing is completed. However, as already discussed, testing takes up such a large part of the sprint that it is very difficult to commit to many backlog items.
There do not appear to be any easy answers to how best to approach testing in this project. We know we need to reduce the amount of time we spend on testing. Some of the ideas we have so far are:
- Introduce unit testing using JUnit. Such tests could also function as system tests for changes made to specific, contained computational modules in the system.
- Introduce continuous integration into the build environment so that unit tests can be automated.
- Build a tool for generating test files.
- Build a tool for automating the system and acceptance testing phases.
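To make the first idea concrete: the appeal of JUnit-style tests on the contained computational modules is that a rule change could be verified in milliseconds, without constructing an XML file or running the four batch programs. A minimal sketch, with an entirely hypothetical module and compensation rule (the real models are far more complex and live in the database):

```java
// Hypothetical contained computational module with a unit test.
// The class, the fee/cap rule, and all numbers are invented for
// illustration only.
public class VisitCompensation {

    // Hypothetical rule: a base fee per visit plus a fee per treatment,
    // capped at a maximum total compensation.
    public static int compute(int baseFee, int treatmentFee,
                              int treatmentCount, int cap) {
        return Math.min(baseFee + treatmentFee * treatmentCount, cap);
    }

    // With JUnit these checks would live in @Test methods; a plain
    // main() keeps the sketch self-contained.
    public static void main(String[] args) {
        check(compute(300, 50, 2, 1000), 400);    // below the cap
        check(compute(300, 50, 20, 1000), 1000);  // capped
        check(compute(300, 50, 0, 1000), 300);    // no treatments
        System.out.println("all checks passed");
    }

    private static void check(int actual, int expected) {
        if (actual != expected) {
            throw new AssertionError(
                    "expected " + expected + " but got " + actual);
        }
    }
}
```

Once such tests exist, a continuous integration server (the second idea) could run them on every check-in, so a broken compensation rule would be caught long before anyone constructs a test file.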
We still have to work out the details, but if we can successfully incorporate the four points above, I think there is a pretty good chance we can significantly reduce the amount of time we spend on testing.