January 21, 2004 - The XPeriment

When talking about developer testing in Java, the conversation often shifts to XP and other Agile methods. Thus it happens that we get asked how Managed Developer Testing works with a team doing XP. We feel there is a good story to tell there, but it isn't one we can speak to from first-hand experience. Combine those queries with an "XP-curious" CTO and a couple of internal XP advocates and it is time for an experiment.


There are several reasons for us to do this new product using XP. For me personally, XP is the best process I know for making software in an environment where the requirements and deadlines are shifting. Alberto, in addition to wanting the product, wants to see an XP team up close and personal. And we as a company want to be able to look customers right in the eye and say "our products work great with XP, and we know it because we've used them that way ourselves." I feel that when making tools for your trade you've got to be able to do more than eat your own dog food, you've got to enjoy it.

Quality for the customer

On a typical XP project the customer is concerned with features, while the developer is concerned with quality. And indeed, day-to-day on our project we'll be doing test-driven development with JUnit, then using Agitator to learn about what we forgot and to provide more strenuous coverage.
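As a sketch of that test-first rhythm: write a failing check, implement just enough to pass, repeat. The class and method names below are invented for illustration (in practice these checks would live in a JUnit TestCase; a plain `main()` keeps the sketch self-contained):

```java
// Hypothetical test-first sketch. The checks were written before the
// production code existed, and IntStack grew only as each check demanded.
public class TestFirstSketch {

    // Minimal "production" code, written in response to the checks below.
    static class IntStack {
        private final int[] items = new int[16];
        private int size = 0;

        void push(int value) { items[size++] = value; }
        int pop()            { return items[--size]; }
        boolean isEmpty()    { return size == 0; }
    }

    public static void main(String[] args) {
        IntStack stack = new IntStack();
        check(stack.isEmpty(), "new stack is empty");
        stack.push(42);
        check(!stack.isEmpty(), "stack with one item is not empty");
        check(stack.pop() == 42, "pop returns what was pushed");
        check(stack.isEmpty(), "stack is empty again after pop");
        System.out.println("all checks passed");
    }

    private static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```

A tool like Agitator would then exercise `IntStack` with inputs the hand-written checks never tried (popping an empty stack, pushing past capacity), which is exactly the "learn about what we forgot" step.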

But Alberto, our customer, wants more. He wants to be able to verify that the code is self-testing, robust, maintainable, and provides a good foundation for future releases. To meet this need we'll provide an on-demand dashboard with our quality metrics. This dashboard will augment the feedback he gets from acceptance tests and paint an ongoing picture of our progress. Having these other elements of our deliverable beyond just code actually increases the value of the code. If we consider our project team to be a virtual consulting company, we should be able to raise our rates!

The Hypothesis

Helping my son with his science fair project reminded me that a good experiment starts with a hypothesis, so here's ours:

  1. we will get good coverage from working test-first, and
  2. because our code will be tested it will be testable, making it easy to agitate, so
  3. Agitator will catch issues we missed in our manual unit testing;
  4. the combination of acceptance tests and the dashboard will allow a transfer of confidence from the developers to the customer; and
  5. by providing assets that meet objective, verifiable metrics we will increase the value of the code.

Posted by Jeffrey Fredrick at January 21, 2004 02:26 PM
