On Mon, Oct 20, 2003 at 07:02:56PM +0000, [EMAIL PROTECTED] wrote:
> Practically, the only way to do this is to save the results of each test
> in a separate cover_db and then selectively merge them to see whether or
> not your coverage changed. Even then, finding the minimal set of tests
> that provides "full" coverage is a trial-and-error, brute-force
> approach. Actually, I think it's equivalent to the knapsack problem.

Yes, but that's not really what I'm looking for.

As I said, I know that this is a rather coarse-grained approach. I'm not
looking for something that can automatically reduce my test suite. I'm
looking for something that can raise a large flag and say: "Oi! This
lot of tests over here seem to be doing almost exactly the same stuff
as that lot over there. Is that deliberate?"
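
To make that concrete: assuming each test file gets run into its own
database (perl -MDevel::Cover=-db,cover_db_foo t/foo.t) and you can dump,
by whatever means, a plain list of the statements each one covers - the
"<test>.lines" files of "file:line" entries below are entirely made up -
then a first cut at that large flag is just pairwise set overlap.
Rough, untested sketch:

    use strict;
    use warnings;

    # One "<test>.lines" file per test, each line a covered "file:line".
    my %covered;    # test name => { "file:line" => 1, ... }
    for my $file (glob '*.lines') {
        open my $fh, '<', $file or die "$file: $!";
        chomp(my @lines = <$fh>);
        $covered{$file} = { map { $_ => 1 } @lines };
    }

    # Compare every pair of tests and shout when they mostly overlap.
    my @tests = sort keys %covered;
    for my $i (0 .. $#tests) {
        for my $j ($i + 1 .. $#tests) {
            my ($one, $two) = @covered{ @tests[ $i, $j ] };
            my $common = grep { $two->{$_} } keys %$one;
            my $union  = keys(%$one) + keys(%$two) - $common;
            my $sim    = $union ? $common / $union : 0;
            printf "Oi! %s and %s cover %.0f%% the same statements\n",
                $tests[$i], $tests[$j], 100 * $sim
                if $sim > 0.9;    # arbitrary threshold
        }
    }

Crude, but it would be enough to point a human at the suspiciously
similar groups and ask whether the duplication is deliberate.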

> Fully automated test suites also tend to make it not worth the
> effort. (Who cares if the test suite takes 10 minutes or 10 hours if
> you let it run overnight?)

The two sentences here don't necessarily go together! :)

We have a fully automated test suite. It takes too long to run, and we
want to run it at each integration - not just overnight.

> If the code has been properly designed (i.e. factored into separate 
> pieces without lots of ugly interdependencies) it should be possible to 
> run the module-level[1] test for that piece when it changes, not the full-
> out test suite for the entire product. Again, that could be saved for 
> the release process.

Surely this is only true if no other module in your system interfaces to
this in any way.

No matter how well factored the code is, something is going to depend on
it functioning. And so any changes will need to ensure that the code
depending on it still works.

You can layer the code such that changes to code at the "top level"
don't require re-running the tests for the "lower" code, but changes to
that "lower" code will always need to be checked by re-testing the
"higher" level code.
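
For example (untested, and the module names and the dependency map are
made up), given even a rough "who sits on top of whom" map you can work
out which tests a change drags in - and a change at the bottom always
drags in the lot:

    use strict;
    use warnings;

    # module => the modules it sits on top of (made-up example)
    my %uses = (
        'App::Top'    => ['App::Middle'],
        'App::Middle' => ['App::Low'],
        'App::Low'    => [],
    );

    # Invert it: module => the modules that directly depend on it.
    my %used_by;
    while (my ($mod, $deps) = each %uses) {
        push @{ $used_by{$_} }, $mod for @$deps;
    }

    # Everything that transitively depends on the changed module
    # (including the module itself) needs its tests re-run.
    sub needs_retesting {
        my ($changed) = @_;
        my %seen;
        my @queue = ($changed);
        while (my $mod = shift @queue) {
            next if $seen{$mod}++;
            push @queue, @{ $used_by{$mod} || [] };
        }
        return sort keys %seen;
    }

    print join(', ', needs_retesting('App::Low')), "\n"; # App::Low, App::Middle, App::Top
    print join(', ', needs_retesting('App::Top')), "\n"; # App::Top

So the further down the stack a change lands, the more of the suite you
end up re-running anyway.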

Tony
