The need for clearly defined management practices and automated tools to support them became obvious to me soon after I started a developer testing program at my previous company where we had approximately 200 developers. Thanks to the enthusiasm of a handful of developers and thought leaders, and with the support of upper management, we were able to start the developer testing program with a bang. We began by choosing a Thursday afternoon during which all developers would drop whatever they were doing and would download, learn, and start to use JUnit, PyUnit, or ccUnit to generate tests for their code. Throughout the afternoon, we asked developers who were already practicing developer testing and were familiar with the various xUnit frameworks to roam the halls and help the beginners. In order to sustain the effort and continually reinforce the importance of developer testing, we also leveraged the company's weekly all-hands meetings to present an award to the developer who contributed the greatest number of tests during the previous week.
Based on the growth in the number of unit tests (an initial spike followed by more moderate but consistent increases), the program was a success, but I quickly became aware of a number of issues:
Since then, I have had the opportunity to talk to dozens of engineering managers who have implemented, or tried to implement, developer testing programs. The reports are very consistent: without practices and tools to set objectives, prioritize the effort, and measure the results, it's very difficult to maintain momentum, achieve consistency, and maximize efficiency and effectiveness. It's clear that developer testing, like any other activity that consumes a non-trivial amount of valuable development time and resources, has to be managed.
Software developers can contribute to the testing effort in many ways, so before we talk about managed developer testing I want to give you a precise definition of what I mean when I talk about developer testing in general.
In my organizations I have defined developer testing as a practice involving the following set of core requirements:
The core requirements stated above are an essential starting point for a developer testing effort, but as I mentioned in the introduction, these core practices are not enough to ensure sustained success. Developers and managers need a set of tools to manage targets and metrics to plan and guide their testing efforts.
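Before we move on to the managed part, it may help to make the basic practice concrete. Below is a minimal sketch of a developer test written with PyUnit (Python's unittest module); the Account class and its behavior are invented purely for illustration and are not part of any program described in this note.

    import unittest

    # A made-up class under test, used only to illustrate the practice.
    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.balance += amount

    class AccountTest(unittest.TestCase):
        # Each test creates its own data, exercises the code, and checks the
        # outcome with assertions, so no human has to inspect the results.
        def test_deposit_increases_balance(self):
            account = Account()
            account.deposit(100)
            self.assertEqual(account.balance, 100)

        def test_deposit_rejects_non_positive_amounts(self):
            account = Account()
            self.assertRaises(ValueError, account.deposit, 0)

    if __name__ == "__main__":
        unittest.main()

The specifics don't matter; what matters is the shape of the practice: small, automated, self-checking tests written by the same developers who write the code.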
Managed developer testing is a practice that extends the basic requirements of developer testing as follows:
Since managed developer testing is based on metrics, I will use the next section to discuss a set of metrics that can be easily adopted to get your program off the ground. As you progress, you can augment this basic set with more advanced or more customized metrics.
It's very difficult to talk about software metrics without opening a can of worms. There are no perfect software metrics; any metric you select will have some advantages and some disadvantages, and every metric can be manipulated or misused by developers or by managers. Nevertheless, in my experience the lack of any metrics is far worse than the judicious application of imperfect metrics, and in that spirit I share with you a basic set of developer testing metrics that I have found very useful: the total number of test points, the percentage of classes with at least one test point, the percentage of methods with at least one test point, and statement-level code coverage.
One key property of the metrics I just listed is that they are positive test metrics: the larger the number, the better; they measure degrees of success. By contrast, metrics like bugs found, bugs remaining, etc., are negative metrics: the larger the number, the more trouble you are in; they may be a measure of the thoroughness of the tests, but they are also a measure of the “bugginess” of the code.
Positive test metrics also give you a sense of control over your destiny that you don't get from bug counts. You can set targets and control what percentage of code you will cover with tests, but it's hard to set targets and control how many bugs you will find because that's a function of the code and not just of the tests. My belief, and experience, is that you get much better results by using positive metrics and by focusing on and measuring the growth in test points and the improvements in test coverage.
The key consideration in setting developer testing targets is to start easy. Nothing will kill a developer testing effort more quickly than setting impressive but unattainable objectives. I am a believer in setting BHAGs (Big Hairy Audacious Goals), but I have learned that in the context of developer testing it pays to set a long-term BHAG and plan to get there through consistent and gradual improvements. It's easy to get carried away and aim for, say, 10,000 test points and 90% code coverage in 2 months, but unless you are dealing with a brand-new project, catching up by adding tests to all your pre-existing code will quickly give developers testing indigestion. You should think of your developer testing effort as a long-distance race, not a sprint. As a rule of thumb, you should start by allocating 10 to 20% of developer time to testing and set objectives to match that level of effort.
Below is a sample set of targets and a dashboard for a fledgling developer testing effort.
Developer Testing Dashboard for Project X

Metric                              Target by 6/30/04    Actual as of 5/31/04
Total test points                   5,000                4,414
% of classes with 1+ test points    40%                  31%
% of methods with 1+ test points    40%                  33%
Code coverage (statement)           50%                  44%
Automation and self-sufficiency are essential components of developer testing. If the tests are not fully automated and autonomous, running and analyzing them becomes a chore and, as a result, they don't get run as often as they should.
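As a minimal sketch of what "fully automated and autonomous" can look like in practice, here is a small driver script, again using Python's unittest module, that finds and runs every test with one command and reports the overall outcome through its exit code. The tests/ directory and the test_*.py naming convention are assumptions made for the example, not requirements.

    import sys
    import unittest

    def main():
        # Discover every test module under tests/ that matches test_*.py
        # and run the whole collection as a single suite.
        suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        # Exit code 0 means every test passed, so the run can be scheduled
        # and checked by other scripts without human interpretation.
        sys.exit(0 if result.wasSuccessful() else 1)

    if __name__ == "__main__":
        main()

With a script like this, running the entire suite before every check-in, or several times a day from a scheduler, becomes a one-line habit instead of a chore.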
The same motivation for automation applies to the management metrics: their collection and reporting must be fully automated. The test metrics we have suggested are all easy to automate. Test points, classes with test points, and methods with test points can be calculated by static analysis of the code, and there are plenty of commercial and open source tools that can be used to calculate test coverage.
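To give you a feel for how little is needed to get started, here is a rough sketch of the static analysis in Python. It assumes, purely for illustration, that the tests live under tests/, that each assert call in a test counts as one test point, and that simple pattern matching is good enough; a real implementation would use a proper parser, and you would pair it with one of the coverage tools mentioned above.

    import os
    import re

    # Treat every self.assert...(...) call in a test file as one test point.
    # This is a simplifying assumption for the sketch, not a formal definition.
    ASSERT_CALL = re.compile(r"\bself\.assert\w*\(")

    def count_test_points(test_dir="tests"):
        points = 0
        for root, _dirs, files in os.walk(test_dir):
            for name in files:
                if name.startswith("test_") and name.endswith(".py"):
                    with open(os.path.join(root, name)) as f:
                        points += len(ASSERT_CALL.findall(f.read()))
        return points

    if __name__ == "__main__":
        print("Total test points:", count_test_points())

A similar pass over the production code, combined with a simple mapping from tests to the classes and methods they target, yields the two percentage metrics.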
The last step is to create a script that combines all the data in a single dashboard and publishes it on an internal web page.
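Here is a minimal sketch of that last step: a short Python script that takes the metric values and their targets and writes a single HTML page ready to be copied to an internal web server. The numbers are hard-coded to the sample values from the dashboard above; in practice they would come from the collection scripts.

    # Sample values copied from the dashboard above; in a real setup these
    # would be produced by the metric-collection scripts, not hard-coded.
    metrics = [
        ("Total test points", "5,000", "4,414"),
        ("% of classes with 1+ test points", "40%", "31%"),
        ("% of methods with 1+ test points", "40%", "33%"),
        ("Code coverage (statement)", "50%", "44%"),
    ]

    rows = "\n".join(
        "<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % (name, target, actual)
        for name, target, actual in metrics
    )

    html = ("<html><body>\n"
            "<h2>Developer Testing Dashboard for Project X</h2>\n"
            "<table border=\"1\">\n"
            "<tr><th>Metric</th><th>Target by 6/30/04</th>"
            "<th>Actual as of 5/31/04</th></tr>\n"
            + rows +
            "\n</table>\n</body></html>\n")

    # "Publishing" is simply writing the page where the web server can see it.
    with open("dashboard.html", "w") as f:
        f.write(html)

Run from a nightly job, a script like this keeps the dashboard current without anyone having to remember to update it.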
A well-implemented managed developer testing program starts paying off very quickly and becomes addictive for both developers and their managers.
Having a growing body of fully automated developer tests that you can run daily, or several times a day, gives you an unprecedented level of confidence in the stability and functionality of the code - especially as the breadth and depth of the test suite increases.
Our investment in developer testing pays dividends daily. I can't remember the last day that our tests did not catch at least one bug, and the norm is several bugs per day. In the past many of these bugs would have been found several days later, during integration or system testing, and tracking them and fixing them would have cost us an order of magnitude more (at least) in terms of time and resources.
By using positive test metrics and reviewing them regularly on the dashboard, I can keep developers focused and motivated. The dashboard reminds me, and them, that our job is not just to deliver code but also the accompanying tests that make sure the code works and will continue to work.
In summary, the recent focus on the importance of developer testing is the best thing that has happened to software in a long time. Without some management practices and tools, however, most developer testing efforts either fail to get off the ground or fizzle out after an initial period of excitement and focus. This Developer Testing Note was designed to give you an introduction to the concept of Managed Developer Testing and to get you off the ground as quickly as possible. In future articles, I will share with you additional practices and techniques to take your developer testing program to the next level.
Posted by Alberto Savoia at December 27, 2003 09:17 PM