Adopting TDD is not trivial
Even though a lot of people talk about TDD, few people do it well. It can be a good technique for helping to build high-quality, robust code, but as this article shows, it is also easy to misunderstand. People often think they're doing TDD when in fact they aren't. Why? Two common obstacles in adopting TDD are:
Developers must learn the test frameworks and the various techniques that are unique to developer testing. Concepts such as Mock Objects, which simulate parts of the system so that a unit can be tested in isolation, are powerful but complicated, and this additional skill set takes time to acquire. A similar problem was observed during the emergence of C++, when many people thought they were writing OO code but were really writing procedural programs inside classes!
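To illustrate the Mock Object idea mentioned above (the class and method names here are invented for illustration, not taken from the article): a hand-rolled mock stands in for a slow or unavailable collaborator, recording calls so the unit under test can be exercised and verified in isolation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of a hand-rolled Mock Object; none of these
// names come from the article.
interface MailServer {
    void send(String to, String body);
}

// Production code under test: it notifies users by mail, but a unit
// test should not have to talk to a real SMTP server.
class Notifier {
    private final MailServer server;

    Notifier(MailServer server) {
        this.server = server;
    }

    void notifyUser(String user) {
        server.send(user, "Your order shipped.");
    }
}

// The mock records calls instead of performing real work, so a test
// can verify the interaction afterwards.
class MockMailServer implements MailServer {
    final List<String> recipients = new ArrayList<>();

    public void send(String to, String body) {
        recipients.add(to);
    }
}
```

A test would construct a `Notifier` around a `MockMailServer`, call `notifyUser`, and then assert on the recorded `recipients` list, all without any network access.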
TDD is based on a ‘test first’ approach: unit tests are written before the code they test. This is significantly different from the classic ‘test last’ model, and without a certain discipline about changing the tests first as the code changes, the tests can rapidly become useless. For example, new requirements add new conditions that must be tested, so the tests must be updated to reflect them. If a developer takes shortcuts under time pressure and does not update the test cases, we run the risk of serious bugs or regressions slipping through.
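A minimal sketch of the test-first rhythm described above (the `SimpleStack` names are invented for illustration, not from the article): the test is written before the production class exists, so it fails to even compile at first ("red"), and only then is just enough code written to make it pass ("green").

```java
// Illustrative test-first sketch; SimpleStack and its test are
// invented names, not code from the article.

// Step 2 ("green"): the minimal production code written to satisfy
// the test below.
class SimpleStack {
    private final java.util.ArrayList<Integer> items = new java.util.ArrayList<>();

    void push(int v) {
        items.add(v);
    }

    int pop() {
        return items.remove(items.size() - 1);
    }

    boolean isEmpty() {
        return items.isEmpty();
    }
}

// Step 1 ("red"): this test is written FIRST. Until SimpleStack
// exists it does not even compile, which is TDD's "red" phase.
class SimpleStackTest {
    public static void main(String[] args) {
        SimpleStack s = new SimpleStack();
        s.push(7);
        if (s.pop() != 7) throw new AssertionError("pop should return the last pushed value");
        if (!s.isEmpty()) throw new AssertionError("stack should be empty after pop");
        System.out.println("all tests pass");
    }
}
```

The key point is the order: when a new requirement arrives, the test changes first, and the production code is then changed until the suite passes again.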
Unit tests often miss the “unexpected bugs”
Even if there is a body of unit tests developed through TDD, serious bugs can still slip through. The code in the article contains a serious bug in the factor() method that is not caught by any of the test cases: a loop at the beginning of the method does not terminate for every input.
If you call this method with 0, it goes into an infinite loop!
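The article's snippet is not reproduced in this commentary, but a trial-division loop with exactly this failure mode might look like the following. This is a reconstruction of the kind of bug described, not the article's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Reconstruction of the kind of bug described -- NOT the article's
// actual factor() method.
class Factoring {
    // Returns the prime factors of n, in ascending order.
    static List<Integer> factor(int n) {
        List<Integer> factors = new ArrayList<>();
        // BUG: when n == 0, 0 % 2 == 0 and 0 / 2 == 0, so this loop
        // at the beginning of the method never terminates.
        while (n % 2 == 0) {
            factors.add(2);
            n /= 2;
        }
        // Divide out odd candidate factors.
        for (int i = 3; i <= n; i += 2) {
            while (n % i == 0) {
                factors.add(i);
                n /= i;
            }
        }
        return factors;
    }
}
```

Every test in a suite like the article's happens to use a positive input, so the loop always terminates during testing; `factor(0)` simply hangs, and no test exercises it.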
Why is there no test case for this value? Simple human nature defeats the developer as a test writer: since the developer did not think about 0 as a value when writing the code, he is not going to think of it during testing either. A couple of classic limitations of manually written tests:
Manually written test cases require the developer to think of all of the possible test inputs and code behaviors, so bugs caused by an incomplete understanding of the requirements, or by simple omissions, often go untested. In other words, if you didn't think of it when you wrote the code, you probably won't think of it when you write the tests.
It's natural for people to focus on “normal” cases much more than on error and exception cases. As a result, most manually written tests tend to exercise the expected inputs and outputs, while error conditions are less thoroughly tested. This can lead to catastrophic failures in deployed systems caused by poor error handling. Steve McConnell discusses this in his upcoming book Code Complete 2, in chapter 22 on developer testing, as “clean” vs. “dirty” tests (I'm not sure I like the clean vs. dirty terminology, but the entire chapter is a good read and the point is right on target).
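To make the clean-vs.-dirty distinction concrete (the function and names below are invented for illustration, not from McConnell's book or the article): a “clean” test exercises the expected path, while a “dirty” test deliberately feeds the code bad input and checks that it fails safely.

```java
// Invented example for illustrating "clean" vs. "dirty" tests.
class AgeParser {
    // Parses a user-supplied age string, rejecting implausible values.
    static int parseAge(String s) {
        int age = Integer.parseInt(s.trim()); // throws NumberFormatException on garbage
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("implausible age: " + age);
        }
        return age;
    }
}

// A "clean" test checks the normal case:
//     parseAge("42")  -> 42
// "Dirty" tests check the error paths:
//     parseAge("-5")  -> IllegalArgumentException
//     parseAge("abc") -> NumberFormatException
```

Test suites written by the code's author tend to be heavy on the first kind and light on the second, which is exactly how the error paths end up unexercised.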
Posted by Kent Mitchell at January 16, 2004 02:07 PM