On Saturday 08 June 2002 17:32, Adrian Howard wrote:

> I found that, once you have a moderately complex system, it's hard to
> determine whether changes you are being warned about are going to be an
> issue (beyond simple things like method renaming). I spent too much time
> looking at the code, and usually ended up writing a functional test to
> make sure that I wasn't missing something.

> I eventually just bit the bullet and started writing more functional
> tests. This (of course) had the usual effect of writing more tests:
> it made development faster.

What would one of these functional tests look like?  I usually end up with a 
few tests per function with names similar to:

 - save() should croak() without an 'id' parameter
 - ... and should return false if serialization fails
 - ... or true if it succeeds

I'll probably also have several other tests that don't exercise save()'s 
effective interface.  They're not so important for dependency tracking, so 
I'll ignore them for now.

My current thinking is that marking the interface tests as special is just 
about the only way to track them reliably:

        # mock the storage object; serialize() will fail on the
        # first call and succeed on the second
        $foo->{_store} = $mock;
        $mock->set_series( 'serialize', 0, 1 );

        # calling save() without an id should throw an exception
        eval { $foo->save() };
        dlike( $@, qr/No id provided!/, 'save() should croak()...' );

        # with an id, the return value tracks serialize()'s success
        my $result = $foo->save( 77 );
        dok( ! $result, '... and should return false...' );
        dok( $foo->save( 88 ), '... or true...' );

... where dlike() and dok() are Test::Depend wrappers around Test::More's 
like() and ok().
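
For illustration, a wrapper along these lines would do the recording.  This 
is a minimal sketch of the idea, not Test::Depend's actual source:

        package Test::Depend;

        use strict;
        use Test::More ();

        my %results;

        # dok() behaves just like ok(), but records the boolean
        # result under the test's name for later comparison
        sub dok
        {
                my ($value, $name) = @_;
                $results{$name}    = $value ? 1 : 0;
                return Test::More::ok( $value, $name );
        }

        # dlike() does the same for like()
        sub dlike
        {
                my ($value, $regex, $name) = @_;
                $results{$name} = ( defined $value && $value =~ $regex ) ? 1 : 0;
                return Test::More::like( $value, $regex, $name );
        }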

Test::Depend will save the names and results away and compare them at the end 
of the test suite's run.
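
Continuing the sketch above, the end-of-run step could be as simple as 
freezing the results hash to disk and diffing it against the previous run. 
Again, this is one way it could work, not the real implementation, and the 
file name is made up:

        use Storable qw( retrieve nstore );

        END
        {
                my $file = '.test-depend-results';
                my $old  = -e $file ? retrieve( $file ) : {};

                # warn about any named test whose result differs
                # from the previous run
                for my $name ( keys %results )
                {
                        next unless exists $old->{$name};
                        warn "Result changed for '$name'\n"
                                if $old->{$name} != $results{$name};
                }

                nstore( \%results, $file );
        }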

There, that's my handwaving magic in a nutshell.  I'm not thrilled with the 
dopey names, but haven't a better idea at the moment.

-- c
