For years I insisted on automated unit tests, with the naive assumption that if you take care of the pennies, the dollars will take care of themselves. I even watched unit test coverage numbers closely to make sure they were high enough, but still, the perceived quality of the software was not good. It turned out that I was being “penny-wise and pound-foolish.”
So when people started saying that our quality was low, I couldn’t understand what they were talking about!
- Our code coverage was above 85%! A test code audit confirmed that good assertions were actually in place.
- The static analysis tools we were using didn’t flag anything important.
- The number of open bugs in the QA system was below 100, and a number of them were deemed unimportant by the client.
- Iteration demos to the user were successful, with no serious issues detected.
- There were no broken builds in our continuous integration process.
- Quantitative metrics (cyclomatic complexity, Crap4j, etc.) pointed to code that was in very good shape.
In other words, all of the standard industry metrics for quality signaled that the code was good.
As we started to ask more questions about why certain groups thought our quality was low, we discovered a few issues.
Our unit tests ran outside the container, so problems only showed up once the application actually ran inside the container: in some cases it would not start up at all, and when it did start, links were not available, or clicking on them would take you to a page full of display errors. The unit tests were indeed verifying a lot of the functionality, but they were not verifying the UI logic or the configuration settings. In our case, we were stopping at the controller level.
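To make that gap concrete, here is a minimal sketch of the kind of check that never ran as part of our unit test suite: a JUnit test that drives the deployed application through HtmlUnit and follows every link on the home page. The URL, the page structure, and the choice of a recent HtmlUnit are illustrative assumptions, not a description of what we actually had in place.

```java
import static org.junit.Assert.assertEquals;

import java.util.List;

import org.junit.Test;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HomePageLinkCheckTest {

    // Hypothetical URL of the application deployed in the servlet container.
    private static final String HOME_URL = "http://localhost:8080/myapp/";

    @Test
    public void everyLinkOnTheHomePageResolves() throws Exception {
        try (WebClient client = new WebClient()) {
            // Report broken links through assertions instead of exceptions.
            client.getOptions().setThrowExceptionOnFailingStatusCode(false);

            // Load the home page through the real container, not a mocked controller.
            HtmlPage home = client.getPage(HOME_URL);
            assertEquals(200, home.getWebResponse().getStatusCode());

            // Follow every anchor on the page and verify the target renders successfully.
            List<HtmlAnchor> anchors = home.getAnchors();
            for (HtmlAnchor anchor : anchors) {
                HtmlPage target = anchor.click();
                assertEquals("Broken link: " + anchor.getHrefAttribute(),
                        200, target.getWebResponse().getStatusCode());
            }
        }
    }
}
```

A handful of checks like this, running against a real deployment, would have caught the broken links and display errors long before a demo or QA did.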
A second problem that we uncovered was that demos were being done on individual developer machines and not on an “official” server. Teams would spend a significant amount of time preparing for the demo, configuring the application on the individual machine by hand, and sometimes not even using a build out of the continuous integration process.
Another common complaint was that the application would not run the first time it was deployed to the test environment, requiring developers to get involved in troubleshooting and reconfiguring the environment.
The quality issue was not about software defects but about an unpolished product, and the complaints didn’t necessarily come from the end users but from internal stakeholders.
To solve these problems, we changed our definition of done to require in-container manual tests in addition to unit tests. This resolved a lot of the small issues that QA had been finding.
One other important improvement that we are now undertaking is fully automating our deployment. We expect that, just as continuous integration improved the build process, this will improve our deployment scripts and instructions.
With the deployment process automated, the next step is to automate at least a few smoke tests. This should allow us to quickly identify whether the deployment succeeded and what problems we might have.
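As an illustration, a smoke test at this level can be as simple as a JUnit test that hits the freshly deployed application over plain HTTP. The base URL, the /login path, and the page marker below are hypothetical; this is a sketch of the idea, not our actual test.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class DeploymentSmokeTest {

    // Hypothetical URL of the freshly deployed test environment.
    private static final String BASE_URL = "http://test-server:8080/myapp";

    @Test
    public void applicationStartsAndRendersTheLoginPage() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(BASE_URL + "/login").openConnection();
        connection.setConnectTimeout(5000);
        connection.setReadTimeout(5000);

        // Anything other than 200 means the deployment or its configuration failed.
        assertEquals(200, connection.getResponseCode());

        // Read the body and look for a marker that only a correctly
        // configured application would render.
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        assertTrue("Login page did not render as expected",
                body.toString().contains("Login"));
    }
}
```

Run as the last step of the deployment process, a failure here points at the deployment or its configuration rather than at the application logic, which is exactly the distinction we had been missing.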
What did we learn from this?
- Question and listen to all of your customers, including team members (Product Owner, Scrum Master and team) and some non-team members (e.g. Operations). Not all team members are comfortable speaking up during retrospectives.
- Quality is a complete package, not only the absence of bugs. Your deployment process is part of the application as well.
- Automated unit tests are not enough, even when they are not purely unit tests and bleed into the realm of functional and integration tests.
- Question your definition of done. Is it complete enough?
Unit tests are a great tool; however, passing them should not be the full extent of your definition of done. Keep your eyes on acceptance tests, load tests and others; they will help you avoid other possible issues besides bugs. And above all, listen carefully to all the stakeholders in your project; having them in the retrospective doesn’t mean that they are speaking their minds.
1 comment:
It is very easy to be seduced by those green check marks, isn't it?
Manual configuration is always a red flag and should be eliminated wherever possible.
Avoid logic, wherever possible, in the UI. If that's not possible, you may want to look at automating some UI tests - such tests are especially good at finding things like broken links. They, too, can be integrated into your build process.
I may be able to help you get started if you're interested.