On Fri, Aug 02, 2002 at 08:16:17AM +0200, Janek Schleicher wrote:
> Ilya Martynov wrote at Fri, 02 Aug 2002 07:42:44 +0200:
> 
> >>>>>> On Wed, 31 Jul 2002 21:52:17 +0200, Janek Schleicher <[EMAIL PROTECTED]> said:
> > 
> > JS> [..snip..]
> > 
> > JS> Thinking in general,
> > JS> there could be also some other features included.
> > JS> Suppose we'd like to test the creation of big pictures,
> > JS> perhaps 5_000 x 5_000.
> > JS> It could take a while to run a test for every pixel,
> > JS> but we would also like to test some of them (randomly chosen),
> > JS> to avoid systematic error.
> > 
> > Test results should be easily reproducible. I don't think having
> > randomly chosen tests is a good idea.

I think having randomly chosen repeatable tests is an excellent idea.
Over the course of many people making many test runs, they explore far
more of the parameter space than any single systematic test permutation
scheme could hope to achieve.

> srand could be our friend.

Which is how I'm doing it at work now.
I call srand with a random number. (I'm getting mine from /dev/urandom,
but I suspect that calling rand() and using that to prime srand would
achieve sufficient randomness for these purposes; i.e. you get to run one
of 65536 sequences, which is better than running 1 of 1 sequence.)
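
Something along these lines does the job (an untested sketch rather than
my actual code; random_seed is just a name picked for illustration):

    use strict;

    # Pull 4 bytes from /dev/urandom to use as the seed; fall back to
    # rand() (one of 65536 possible sequences) if the device is missing.
    sub random_seed {
        if (open my $fh, '<', '/dev/urandom') {
            read $fh, my $bytes, 4;
            close $fh;
            return unpack 'L', $bytes;   # unsigned 32 bit integer
        }
        return int rand 65536;
    }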

My tests at work generate a seed this way before starting. They call srand
with the seed, and store the seed.
If a test fails, it prints out the seed on STDERR (as a hack in an END
block that checks the exit code after Test::Builder has set it in its
own END block).
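
In outline the END block hack looks like this (a sketch, not the real
thing; the trick is that END blocks run last-in first-out, so ours must
be declared before Test::More is loaded if it is to run after
Test::Builder's has set the exit code):

    use strict;

    my $seed;

    # Declared before Test::More is loaded, so it runs *after*
    # Test::Builder's END block has set the exit code in $?.
    END {
        print STDERR "# srand seed was $seed\n" if $? && defined $seed;
    }

    use Test::More tests => 1;

    $seed = int rand 65536;   # stand-in for the urandom code above
    srand $seed;
    ok(rand() < 0.5, 'coin toss - fails half the time, printing the seed');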

If the test is run with no arguments it randomises (as above); otherwise
it treats the argument as a chosen seed, and passes that to srand() to
repeat a previous run.
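
So the top of each test script boils down to something like this (again
a sketch; random_seed is the invented helper from the first snippet):

    # Use the seed given on the command line (to replay a failed run),
    # otherwise pick a fresh random one.
    my $seed = @ARGV ? shift @ARGV : random_seed();
    srand $seed;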

This way I can run make test a few times and it uses different random
parameters on each run. If a test fails, it prints out the seed, and I
can re-run that exact sequence by hand until I work out why there is a
bug, and fix it.

I've found bugs this way that I wouldn't have spotted early by running a
fixed set of parameters in my tests each time. That's mainly because I
run make test many times during the day, once for each incremental
change (so each run has to be fast): any single run immediately picks up
the big bugs, while the cumulative effect of many randomised runs can
find really obscure bugs that only crop up for a small proportion of the
possible input.

> However, it's not important to me that the parameters are really randomly chosen -
> although I would still prefer to have some before I release a module -
> but I'd like to write tests that avoid systematic mistakes,
> when it would take too long to test all scenarios.

This is the problem that I have, and I think I've found a solution that
works for me.

Nicholas Clark
