On Mon, Oct 20, 2003 at 08:41:12PM +0100, Tony Bowden wrote:
> On Mon, Oct 20, 2003 at 07:02:56PM +0000, [EMAIL PROTECTED] wrote:
> > Practically, the only way to do this is to save the results of each
> > test in a separate cover_db and then selectively merge them to see
> > whether or not your coverage changed. Even then, finding the minimal
> > set of tests that provides "full" coverage is a trial and error,
> > brute force approach. Actually, I think it's equivalent to the
> > knapsack problem.

But you can get an awful long way with heuristics.  First pick the test
that provides the greatest coverage.  Then pick the test which, when
merged with the previously selected tests, provides the greatest
additional coverage.  Continue until you reach some limit (possibly no
more tests).  Skew this somewhat with the time taken to run each test,
if desired.  In practice this can provide some pretty good results.  The
coverage can be represented as bitmaps which are or'd together, so the
whole thing can be fairly efficient - certainly efficient enough to make
it worthwhile if your tests are taking an hour to run.
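The greedy selection above can be sketched quickly.  This is just an
illustration, not Devel::Cover code - the test names and bitmasks are
made up, and coverage is assumed to already be available as one bitmap
per test (bit N set means coverable point N was hit):

```python
def pick_tests(coverage, limit=None):
    """Greedily pick tests by most new coverage added.

    coverage: dict mapping test name -> int bitmask of covered points.
    Returns (selected test names, combined coverage bitmask).
    """
    selected = []
    covered = 0
    remaining = dict(coverage)
    while remaining and (limit is None or len(selected) < limit):
        # The test whose bitmap sets the most bits not yet covered.
        name, mask = max(remaining.items(),
                         key=lambda kv: bin(kv[1] & ~covered).count("1"))
        if mask & ~covered == 0:
            break  # no remaining test adds any new coverage
        selected.append(name)
        covered |= mask       # merge coverage by or'ing bitmaps
        del remaining[name]
    return selected, covered


# Hypothetical example: three tests over four coverable points.
tests = {"a.t": 0b1110, "b.t": 0b0111, "c.t": 0b1000}
picked, total = pick_tests(tests)
```

With these made-up bitmaps, a.t and b.t together cover everything, so
c.t is never selected.  Weighting by run time would just mean dividing
the new-bits count by each test's duration in the `key` function.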

This has always been on the TODO list.  Or rather, having checked, this
has never been on the TODO list, but I've always planned on doing it.

> Yes, but that's not really what I'm looking for.
> 
> As I said, I know that this is a rather coarse grained approach. I'm
> not looking for something that can automatically reduce my test suite.
> I'm looking for something that can raise a large flag and say - "Oi!
> This lot of tests over here seem to be doing almost exactly the same
> stuff as this stuff over here. Is that deliberate?"

Or maybe, in the heuristics above, substitute a set of tests for a
single test.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net
