On Mon, Aug 04, 2008 at 11:38:40AM +0100, Mel Gorman wrote:
> On (04/08/08 13:51), David Gibson didst pronounce:
> > On Fri, Aug 01, 2008 at 09:49:28AM +0100, Mel Gorman wrote:
> > > On (01/08/08 13:18), David Gibson didst pronounce:
> > > > On Thu, Jul 31, 2008 at 11:05:42PM +0100, Mel Gorman wrote:
[snip]
> > > > Hrm. I certainly agree that we need better handling of the various
> > > > expected failures. But I'm finding this implementation kinda
> > > > confusingly complicated. So here's a counter-proposal for the design.
> > >
> > > I'm surprised you find it complicated. Tests consist of a program
> > > and an optional verification script. I don't see why embedding the
> > > verification code inside a C program is somehow less complicated.
> >
> > Your approach is conceptually simple enough, yes. But from reading
> > the patch, the actual mechanics of hooking up each testcase with its
> > result checking script is kind of complicated.
>
> I'm failing to see how. Create a shell script called
> verifyresults-TESTNAME.sh and it gets called with the exit code.
I think you over-estimate the depth and subtlety of my complaint.
Basically I'm just looking at the changes to run_tests.sh and going
slightly cross-eyed.
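To make Mel's scheme concrete, here is a minimal sketch of a verification script, assuming run_tests.sh passes the testcase's exit code as `$1`. The script name, the exit-code values, and the result strings are hypothetical illustrations, not the actual libhugetlbfs interface:

```shell
#!/bin/sh
# verifyresults-TESTNAME.sh (hypothetical, per Mel's scheme):
# run_tests.sh invokes this with the testcase's exit code.
verify_result() {
    case "$1" in
        0)
            echo "PASS" ;;
        139)
            # Shell convention: 128 + signal number, so 139 == SIGSEGV.
            # A SEGV known on unpatched kernels becomes an expected failure.
            echo "EXPECTED_FAIL" ;;
        *)
            echo "FAIL" ;;
    esac
}

# Invoked by the harness with the exit code as the first argument.
if [ $# -gt 0 ]; then
    verify_result "$1"
fi
```

The point of the scheme is that the mapping from exit status to result, and any commentary on known kernel bugs, lives in one small script per test.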
> > Doing the result checking from within the testcases has several advantages:
> > - We only need the second result checking pass - and the
> > mechanism for getting there - on those testcases that need it
> > - We can more easily fold CONFIG() into the same mechanism
> > - All the information about a particular testcase - what it
> > does, and what each possible result means - is kept together.
> >
> > > One nice side-effect of the verification script is that the history of
> > > our regressions is contained in one place. Reading through the comments
> > > in one file should give an idea of what bugs we've handled or kernel
> > > behaviours we have fixed.
> >
> > I don't see how that's not so if the verification is within the
> > testcase.
>
> To me, it confuses which is the testcase and which is the verification
> if they are kept in one file.
The distinction between the two seems somewhat artificial to me in the
first place.
[snip]
> > But it's still possible to split a test into C and shell portions
> > where that's the simplest way to handle things. For the case in point
> > of the old-style link tests, in fact we already need some sort of
> > wrapper around the main C program. One of the expected failure modes
> > here is a SEGV on startup; ugly. If we have a shell wrapper around
> > this we can both capture this failure mode, turning it into a neat
> > FAIL(), and post-process the result producing an EXPECTED_FAIL()
> > result when appropriate. dtc/libfdt, whose testsuite is based on the
> > same framework, already has a number of shell and shell/C hybrid
> > testcases.
>
> Ok, I think what you are suggesting is that, instead of a verification
> script, tests can optionally be run via a shell script rather than calling
> the C program directly. I can roll something like that together and see
> what it looks like.
Sort of. I guess I'm saying I think you could implement this two-part
testcase concept on a case-by-case basis, so that the
testcase/verify-script pair looks like just a single testcase to
run_tests.sh.
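For illustration, a hedged sketch of the wrapper idea David describes: a shell testcase that runs the C binary, catches a startup SEGV, and reports the result itself, so the C/shell pair looks like a single testcase to run_tests.sh. The function name, binary name, and result strings are assumptions:

```shell
#!/bin/sh
# Hypothetical two-part testcase: the shell side wraps the C program
# and owns the result reporting.
run_wrapped() {
    "$@"
    rc=$?
    if [ "$rc" -eq 139 ]; then
        # 128 + SIGSEGV(11): the ugly startup-SEGV failure mode,
        # turned into a clean FAIL instead of confusing the harness.
        echo "FAIL (SEGV on startup)"
    elif [ "$rc" -eq 0 ]; then
        echo "PASS"
    else
        echo "FAIL (exit code $rc)"
    fi
}

# e.g. run_wrapped ./linkhuge_test   (binary name hypothetical)
```

Post-processing a known failure into EXPECTED_FAIL would then be a few extra lines in the same wrapper, keeping everything about the testcase in one place.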
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
_______________________________________________
Libhugetlbfs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libhugetlbfs-devel