Martin Alderson wrote:
Hi all,

I'm looking for any opinions or guidelines on how critical it is to create 
tests for each bug fixed.  Personally, I have been trying hard to create a test 
for each bug I fix to protect against regressions, but for some of the mitosis 
bugs that are dependent on timing, it is very hard to create a test that will 
always succeed.
Yes, ideally, each fixed bug would have its associated test. This is not only to avoid regressions, but also because it covers one more dark corner with a test.
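
For instance, such a regression test can be very small. This is just a hypothetical sketch, not real ApacheDS code: the test name and the normalize() method are made up for illustration, and in a real test the fixed code would live in the server, not in the test class.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DnNormalizationRegressionTest
{
    /**
     * Regression test for a hypothetical bug where a DN containing
     * extra spaces was not normalized correctly.
     */
    @Test
    public void testDnWithExtraSpacesIsNormalized() throws Exception
    {
        String normalized = normalize( "cn = user ,  dc=example, dc=com" );
        assertEquals( "cn=user,dc=example,dc=com", normalized );
    }

    // Placeholder for the code under test, only here to keep the
    // example self-contained.
    private String normalize( String dn )
    {
        return dn.replaceAll( "\\s*,\\s*", "," ).replaceAll( "\\s*=\\s*", "=" );
    }
}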

For time-dependent issues, this is really tricky.

I would say that it's a best-effort approach. If you can't write a test due to the sheer complexity, eh, nobody's perfect!
Should I steer well clear of timing dependent tests or should I always try and 
come up with a test, even if it may prove frail?
I would say it again: do your best, but don't waste your time when it can be used more efficiently on other tasks. We have users all around the planet; they will find bugs, and we will fix them. This is not to say we should not write tests, or that we can be lazy, or that users are our testers, but this is the real world. Don't expect to deliver a bug-free product, that is a dream...
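
That said, one thing that can make a timing-dependent test less fragile is to poll for the expected state up to a deadline instead of sleeping for a fixed, hopefully-long-enough duration: the test then fails only when the upper bound is really exceeded. Here is a minimal sketch of the idea; the isEntryReplicated() check is a hypothetical placeholder, not an existing ApacheDS API.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ReplicationTimingTest
{
    @Test
    public void testEntryEventuallyReplicated() throws Exception
    {
        // ... trigger the operation that should replicate an entry ...

        long deadline = System.currentTimeMillis() + 30000L; // upper bound, not a fixed sleep
        boolean replicated = false;

        while ( !replicated && System.currentTimeMillis() < deadline )
        {
            replicated = isEntryReplicated( "cn=test,dc=example,dc=com" );

            if ( !replicated )
            {
                Thread.sleep( 500L ); // poll interval
            }
        }

        assertTrue( "Entry was not replicated within 30 seconds", replicated );
    }

    // Hypothetical check against the consumer side; a real test would
    // query the second server for the replicated entry.
    private boolean isEntryReplicated( String dn )
    {
        return true;
    }
}

It is still not deterministic, of course, but at least the test only breaks when replication is genuinely slower than the bound, not when the machine running it is a bit loaded.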

I would also add that from the developer POV, what we lack is not so much tests, but documentation and specification. I'm pretty sure that almost nobody knows how the current replication works. Even for our users, I don't know if we have any valuable piece of documentation which explains how to set up replication.

A minimum set of dev documentation can be the best way to find potential bugs. When it comes to asynchronous mechanisms, brains and luck are the best tools to find potential bugs and potential regressions, IMHO.

I hope it helps !



--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org

