(I'm going to s/functional testing/acceptance testing/g here because
to me there's not a whole lot of difference between a unit test and a
functional test).

Both your lines of reasoning are correct.  The question is now one of practicality.

For one, it's never just one or the other.  It's not acceptance testing
OR lower level testing.  You do both as the need arises.

The way I'd look at it is simple.  Which way gives you the most bang
for the buck?  Which is going to catch the most bugs?  Which is going
to speed up your development?  Which is going to get you writing tests
for an untested hairball?

Acceptance tests are really good at telling you something's broken, and
they're resistant to internal changes.  They let you know the program
isn't doing what it's supposed to do, or that you just broke an older
feature.  But often there are so many layers between the test and the
piece which broke that they don't help much in debugging.

Lower level unit tests are really good at telling you what part broke.
They're great aids in debugging.  It's like opening the hood of your
car and seeing the broken part flashing red.  Good unit tests can
speed up your development greatly.  But low level units tend to change
a lot, so you might write a pile of tests only to find that the
behavior changed.
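To make the contrast concrete, here's a minimal sketch (in Python rather
than Perl, purely for illustration -- parse_price and invoice_total are
made-up stand-ins, not anything from a real codebase):

```python
import unittest

# Hypothetical low-level unit buried in the guts of the app.
def parse_price(text):
    # Strip the currency symbol and convert dollars to cents.
    return int(round(float(text.lstrip("$")) * 100))

# The end-to-end path an acceptance-style test would exercise.
def invoice_total(lines):
    return sum(parse_price(line) for line in lines)

class TestParsePrice(unittest.TestCase):
    # Unit test: if this fails, the broken part is flashing red --
    # you know parse_price itself is wrong.
    def test_dollars_to_cents(self):
        self.assertEqual(parse_price("$3.50"), 350)

class TestInvoice(unittest.TestCase):
    # Acceptance-style test: tells you *something* broke, but the
    # failure could be in parsing, summing, or any layer in between.
    def test_total(self):
        self.assertEqual(invoice_total(["$1.00", "$2.50"]), 350)
```

If parse_price's internals or interface change, the unit test has to be
rewritten, but the invoice-level test keeps working -- the trade-off
above in miniature.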

If you're going to throw out the guts, then yeah, work on acceptance
tests.  If you have a well defined unit that you're going to work on,
write tests for that and then work on it.  If a low level unit is
likely to change, maybe you can write a test a few layers up, but
without having to go all the way up to customer acceptance testing.

The closer your test is to the thing which broke, the better aid it is
for development and debugging.

The closer your test is to the user interface, the more accurate it
will be in catching real user level bugs and the more resistant it
will be to internal change.

CONCLUSION:
Do both as needed.  You definitely need acceptance testing, but not to
the exclusion of all other testing.  And you probably don't have time
to cover everything.  In a situation where you have an untested
existing app, target the traditionally buggy areas first.  That'll get
you effective tests most quickly.

As for unit tests, again, follow the bugs.  Test the traditionally
buggy units.  Also test the units which change most often; there are
various tools which you can run over your repository to figure that
out (<insert hand waving here>).  If the guts change a lot, test at a
point of relative stability a few layers up.
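One low-tech version of that hand waving is just to count how often
each file shows up in your version control log.  With git, for example,
`git log --format= --name-only` prints one changed path per line, and
something like this sketch can tally it (the sample log text here is
invented for illustration):

```python
from collections import Counter

# Tally how often each path appears in log output such as
#   git log --format= --name-only
# The most frequently touched files are your churn hot spots --
# good candidates for tests.
def churn(log_text):
    return Counter(line.strip() for line in log_text.splitlines()
                   if line.strip())

# Example with canned log output (blank lines separate commits):
sample = "lib/Parser.pm\n\nlib/Parser.pm\nlib/Util.pm\n"
for path, count in churn(sample).most_common():
    print(count, path)
# -> 2 lib/Parser.pm
#    1 lib/Util.pm
```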


On 8/6/06, Nicholas Clark <[EMAIL PROTECTED]> wrote:
This is sort of off-topic because it's more a general question about testing,
rather than Perl specific, but the code in question happens to be written in
Perl...

There is this big hairball of under-tested code. (Nothing new here)
So the question is, which to tackle first - unit tests, or functional tests.

A colleague's view is that you can't have functional tests until you know
that the individual units work, hence start with unit tests. (I believe that
the assumption is that when they're mostly complete start on functional
tests, but that wasn't stated). This seems the logical approach if you want
to refactor things.

My view is that because the actual output of the code isn't well specified
(sadly nothing that new there either), if we write functional tests to
verify that the behaviour we desire is present, then we're actually killing
two birds with one stone - we have tests for the spec, and the tests are
the spec. (Which isn't perfect as specs go, but it's a heck of a lot better
than the current spec). Also, right now we really don't care about the
specific output of the individual units that make up the code - all
that matters to the client is the final behaviour. Hence writing unit tests
at fine detail for existing code could well be a (relative) waste of effort
in that it's quite possible that the units they test are thrown away soon
if the implementation is changed. Whereas functional requirements are much
less likely to change on a release-by-release basis, so writing them is
less likely to generate code that has a short lifetime. And having functional
tests is likely to give us better coverage up front, so we're more likely to
spot a change that unintentionally breaks behaviour, even if we can't use
them to efficiently nail down which change was the culprit.

Why am I wrong?

Nicholas Clark

