I think everyone understands that breaking
things is bad, and that more testing is good -
particularly if it can be automated.
The problem is figuring out *how*.
I assume this isn't limited strictly to "unit"
tests.
For instance, Alex had a clearly defined bug
("search gets the wrong thing when in an
anchor"). I'm not sure it was easily replicated,
but it was easily described. For the moment,
let's assume it was an intermittent problem,
and that his fix simply simplified the code,
which seems to have helped. (Even if that
isn't true of this particular fix, there are
situations like that.)
Chris just described two more bugs.
I'm not quite certain how to test for any of
the three.
The pedantic unit test method would verify
that the old search did not return a single
endpoint number. (It wasn't supposed to.)
A more useful test would include a document
which displayed badly on the old search --
but success might just mean you got lucky.
Also note that "displays badly" is not easily
checked, or always agreed upon. (In this
particular case, you could specify more
carefully, by saying that the first letter of
the search was dropped from highlighting.)
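To make that concrete: here's a rough sketch of what a regression test for the "first letter dropped from highlighting" symptom could look like. The highlight_spans() function below is a hypothetical stand-in, not Plucker's actual highlighter; the point is only that once you specify the failure as "the highlighted slice must equal the search term", it becomes mechanically checkable.

```python
def highlight_spans(text, term):
    # Hypothetical stand-in for the real highlighter: return a list of
    # (start, end) character spans for each match of term in text.
    spans = []
    i = text.find(term)
    while i != -1:
        spans.append((i, i + len(term)))
        i = text.find(term, i + 1)
    return spans

def test_highlight_includes_first_letter():
    text = "search inside an anchor element"
    for start, end in highlight_spans(text, "anchor"):
        # The bug described above: the first letter of the match was
        # dropped. Assert the highlighted slice is the whole term,
        # which fails if the span starts one character late.
        assert text[start:end] == "anchor"
```

The test is only as good as the sample document, of course; it proves the specific symptom is gone, not that the search is generally correct.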
I think a "Can we follow links embedded in tables"
test would make more sense than anything specific
to Chris' first bug. Did this bug require all sorts of
odd setup to trigger?
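A generic "links inside tables survive parsing" test might be as small as the sketch below. This uses Python's stdlib html.parser as a stand-in for whatever parser Plucker actually uses; the document string and class name are made up for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets, including links nested inside tables."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def test_links_inside_tables_are_found():
    doc = "<table><tr><td><a href='page2.html'>next</a></td></tr></table>"
    parser = LinkCollector()
    parser.feed(doc)
    # If the parser silently drops anchors inside table cells, this fails.
    assert parser.links == ["page2.html"]
```

A handful of documents like this (nested tables, anchors spanning cells) would cover the general capability without being tied to the exact setup of Chris' bug.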
How would you test for the absence of a .bmp?
I suppose we could write a utility that verifies
the existence of all resources (possibly as part
of a resource family), but it wouldn't be specific
to this bug.
-jJ
_______________________________________________
plucker-dev mailing list
[EMAIL PROTECTED]
http://lists.rubberchicken.org/mailman/listinfo/plucker-dev