On (04/08/08 13:51), David Gibson didst pronounce:
> On Fri, Aug 01, 2008 at 09:49:28AM +0100, Mel Gorman wrote:
> > On (01/08/08 13:18), David Gibson didst pronounce:
> > > On Thu, Jul 31, 2008 at 11:05:42PM +0100, Mel Gorman wrote:
> > > > There are regression tests that fail for known reasons such as the
> > > > binutils or kernel version being too old. This can be confusing to a
> > > > user who reports a FAIL from "make func" only to find out it is
> > > > expected. This patchset can be used to tell a user when a FAIL is an
> > > > expected fail. It is broken up into three patches.
> > > >
> > > > The first patch introduces the means for calling a verification
> > > > script. The second patch adds verification that an ELFMAP failure is
> > > > due to an old version of binutils. The final patch runs a
> > > > verification script on the version of ld for linkhuge tests instead
> > > > of skipping them.
> > > >
> > > > There are other known failures that could be accounted for now, such
> > > > as mprotect() failing due to an old kernel. These can be handled
> > > > over time.
> > >
> > > Hrm. I certainly agree that we need better handling of the various
> > > expected failures. But I'm finding this implementation kinda
> > > confusingly complicated. So here's a counter-proposal for the design.
> >
> > I'm surprised it is found to be complicated. Tests consist of a program
> > and an optional verification script. I don't see why embedding the
> > verification code inside a C program is somehow less complicated.
>
> Your approach is conceptually simple enough, yes. But from reading
> the patch, the actual mechanics of hooking up each testcase with its
> result checking script is kind of complicated.

I'm failing to see how. Create a shell script called
verifyresults-TESTNAME.sh and it gets called with the exit code.
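For instance, a script like this (a sketch only, not the actual patch; the
test name "elflink", the exit-code convention and the binutils cut-off are
invented for illustration):

    #!/bin/sh
    # verifyresults-elflink.sh - invoked by the test runner with the
    # testcase's exit code in $1. Sketch only: a non-zero exit code is
    # assumed to mean FAIL and the binutils cut-off is invented.

    EXIT_CODE="$1"

    # A clean exit needs no explaining
    [ "$EXIT_CODE" -eq 0 ] && exit 0

    # "GNU ld (GNU Binutils) 2.18" -> "2.18"
    LD_VERSION=`ld --version | head -n 1 | awk '{print $NF}'`

    case "$LD_VERSION" in
    2.[0-9]|2.1[0-6]*)
            # Old binutils cannot create the segments ELFMAP relies on
            echo "EXPECTED FAIL: binutils $LD_VERSION too old for ELFMAP"
            ;;
    *)
            echo "FAIL: unexpected failure with binutils $LD_VERSION"
            ;;
    esac

The history of why each failure is expected then accumulates in these
scripts rather than being scattered through the C testcases.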
> Doing the result checking from within the testcases has several advantages:
>     - We only need the second result checking pass - and the
>       mechanism for getting there - on those testcases that need it
>     - We can more easily fold CONFIG() into the same mechanism
>     - All the information about a particular testcase - what it
>       does, and what each possible result means - is kept together.
>
> > One nice side-effect of the verification script is that the history of
> > our regressions is contained in one place. Reading through the comments
> > in one file should give an idea of what bugs we've handled or kernel
> > behaviours we have fixed.
>
> I don't see how that's not so if the verification is within the
> testcase.

To me, it confuses which is the testcase and which is the verification
if they are kept in one file.

> > > In fact, we already have a rudimentary mechanism for handling expected
> > > failures: the CONFIG() macro basically means "failed because
> > > something in the environment isn't suitable to run the test".
> >
> > I am aware of it, but it runs within the test itself, which means all
> > the checking needs to take place in a C program.
>
> No, all the checking needs to take place within the testcase. That
> doesn't imply within a C program.

All the test cases are C programs. Where else would you put the
result? Are you suggesting that a testcase be a shell script which
runs the C program, or what? If so, how is that simpler than having a
separate verification script as necessary?

> > > It's
> > > not very helpful about what's wrong with the environment, of course,
> > > nor have I been terribly consistent with what's a CONFIG() and what's
> > > a FAIL() in some cases.
> > >
> > > Nonetheless, I think handling expected vs. unexpected failures within
> > > the testcases themselves will be a better option.
> >
> > That means doing things like discovering your binutils version and
> > kernel version in C. It gets worse if we have to check the distro
> > version. Consider what it would be like to convert
> > http://www.csn.ul.ie/~mel/which_distro.sh to C. Verification code that
> > should be a trivial shell script becomes a range of time-consuming C
> > code.
>
> No, not necessarily. One advantage I see of this method is that when
> the distinction between the different results *can* be done easily in
> C - like most of the current CONFIG() results - we can do that without
> having to split each testcase into C and shell components.

The version numbers still have to be discovered by something.
Conceivably, it could be done with environment variables and helper
utility functions to do the checking, but it would be time-consuming
to write and gain very little.

> But it's still possible to split a test into C and shell portions
> where that's the simplest way to handle things. For the case in point
> of the old-style link tests, in fact we already need some sort of
> wrapper around the main C program. One of the expected failure modes
> here is a SEGV on startup; ugly. If we have a shell wrapper around
> this we can both capture this failure mode, turning it into a neat
> FAIL(), and post-process the result, producing an EXPECTED_FAIL()
> result when appropriate. dtc/libfdt, whose testsuite is based on the
> same framework, already has a number of shell and shell/C hybrid
> testcases.

Ok, I think what you are suggesting is that, instead of a verification
script, tests can optionally be run via a shell script rather than by
calling the C program directly. I can roll something like that
together and see what it looks like.
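Presumably it would look something like this (a rough sketch; the
exit-status handling and the ld version cut-off are guesses on my part
rather than anything taken from dtc/libfdt):

    #!/bin/sh
    # linkhuge.sh - wrapper around the linkhuge C testcase. A sketch:
    # the test name and the ld version cut-off are assumptions.

    ./linkhuge "$@"
    RC="$?"

    [ "$RC" -eq 0 ] && exit 0

    # The shell reports death-by-signal as 128+signum, so the known
    # SEGV-on-startup failure mode comes back as 139 (128 + SIGSEGV).
    if [ "$RC" -ge 128 ]; then
            REASON="killed by signal $(($RC - 128))"
    else
            REASON="exited with status $RC"
    fi

    # Post-process: an old ld is a known, expected way for this to fail
    LD_VERSION=`ld --version | head -n 1 | awk '{print $NF}'`
    case "$LD_VERSION" in
    2.[0-9]|2.1[0-6]*)
            echo "EXPECTED FAIL: $REASON (ld $LD_VERSION is too old)"
            exit 0
            ;;
    esac

    echo "FAIL: $REASON"
    exit 1

That would capture the raw crash as a neat FAIL and still let the
post-processing mark it expected where the toolchain explains it.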
> > > In cases where the
> > > distinction is easy we can have separate FAIL_EXPECTED() - which would
> > > give a reason - and FAIL_UNEXPECTED() macros. In other cases we could
> > > simply have a FAIL() that branches off to some postprocessing code to
> > > determine whether the failure is expected or not. This mechanism
> > > would subsume the current handling of CONFIG().
> >
> > And compel us to write all the verification code in C. I'd write it for
> > the core library if necessary, but this is considerably more coding
> > effort than what we need to be spending on a regression suite.

--
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
