On 6 Aug 2006, at 13:41, Nicholas Clark wrote:
[snip]
> My view is that because the actual output of the code isn't well specified
> (sadly nothing that new there either), if we write functional tests to
> verify that the behaviour we desire is present, then we're actually killing
> two birds with one stone - we have tests for the spec, and the tests are the
> spec. (Which isn't perfect as specs go, but it's a heck of a lot better than
> the current spec).

That's exactly what I'd do, for the reasons you outline.
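To make that concrete, here's the sort of thing I have in mind - a minimal Test::More sketch, with an entirely made-up module and interface, where the assertions only exercise externally visible behaviour and the test file doubles as the spec:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Hypothetical module and interface, purely for illustration -
    # the point is that we only poke at externally visible behaviour.
    use Some::Report;

    my $report = Some::Report->new( source => 't/data/sample.csv' );

    # Each assertion records a piece of the behaviour we want, so
    # reading the test file tells you what the code is supposed to do.
    is( $report->row_count, 42, 'all rows from the sample file are loaded' );
    like( $report->render, qr/^Total:/m, 'rendered output includes a Total line' );

If the internals later get rewritten from scratch, tests like these carry straight over.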

> Also, right now we really don't care about the specific output of the
> individual units that make up the code - all that matters to the client is
> the final behaviour. Hence writing unit tests at fine detail for existing
> code could well be a (relative) waste of effort, in that it's quite possible
> that the units they test are thrown away soon if the implementation is
> changed.

Yup. That's been my experience. In general I find that post-code unit tests end up being too brittle to be of much use unless the code base is nice to begin with.

Writing unit tests for big balls of mud is a waste of time, since as soon as you start gutting the code to turn it into something sane you end up throwing that code - and the unit tests written against it - away. You'll also probably find that edge-cases that look like bugs in the modules are exploited elsewhere - so spotting odd behaviour at this level doesn't help you much.

> Whereas functional requirements are much less likely to change on a
> release-by-release basis, so writing them is less likely to generate code
> that has a short lifetime. And having functional tests is likely to give us
> better coverage up front, so we're more likely to spot a change that
> unintentionally breaks behaviour, even if we can't use them to efficiently
> nail down which change was the culprit.

> Why am I wrong?

You're not :-)

If you've not come across it already I'd heartily recommend "Working Effectively with Legacy Code" by Michael Feathers. Lots of useful advice.
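His "characterisation tests" are particularly relevant here: when you do need a safety net before gutting something, assert whatever the code does today, warts and all, so a refactoring can't change it silently. A rough sketch (module name and expected string invented for illustration):

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Hypothetical legacy routine - name and behaviour are made up.
    use Legacy::Mangler;

    # Characterisation test: pin down whatever the code does *now*,
    # oddities included. The expected string is captured by running
    # the current code once, not derived from any spec.
    is( Legacy::Mangler::munge("foo,bar,,baz"), "FOO|BAR||BAZ",
        'munge() keeps behaving the way it behaves today' );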

Cheers,

Adrian
