On (01/08/08 13:18), David Gibson didst pronounce:
> On Thu, Jul 31, 2008 at 11:05:42PM +0100, Mel Gorman wrote:
> > There are regression tests that fail for known reasons such as the binutils
> > or kernel version being too old. This can be confusing to a user who reports
> > a FAIL from "make func" only to find out it is expected. This patchset can
> > be used to tell a user when a FAIL is an expected fail. It is broken up
> > into three patches.
> > 
> > The first patch introduces the means for calling a verification script. The
> > second patch adds verification that an ELFMAP failure is due to an old
> > version of binutils. The final patch runs a verification script of the
> > version of ld for linkhuge tests instead of skipping them.
> > 
> > There are other known failures that could be accounted for now, such as
> > mprotect() failing due to an old kernel. These can be handled over time.
> 
> Hrm. I certainly agree that we need better handling of the various
> expected failures. But I'm finding this implementation kinda
> confusingly complicated. So here's a counter-proposal for the design.
> 
I'm surprised it is found to be complicated. Tests consist of a program
and an optional verification script. I don't see why embedding the
verification code inside a C program is somehow less complicated.

One nice side-effect of the verification script is that the history of
our regressions is contained in one place. Reading through the comments
in one file should give an idea of what bugs we've handled or kernel
behaviours we have fixed.

> In fact, we already have a rudimentary mechanism for handling expected
> failures: the CONFIG() macro basically means "failed because
> something in the environment isn't suitable to run the test".

I am aware of it, but it runs within the test itself, which means all
the checking needs to take place in a C program.

> It's not very helpful about what's wrong with the environment, of
> course, nor have I been terribly consistent with what's a CONFIG() and
> what's a FAIL() in some cases.
> 
> Nonetheless, I think handling expected vs. unexpected failures within
> the testcases themselves will be a better option.

That means doing things like discovering the binutils version and the
kernel version in C. It gets worse if we have to check the distro
version. Consider what it would take to convert
http://www.csn.ul.ie/~mel/which_distro.sh to C. Verification code that
should be a trivial shell script becomes a mass of time-consuming C
code.

> In cases where the distinction is easy we can have separate
> FAIL_EXPECTED() - which would give a reason - and FAIL_UNEXPECTED()
> macros. In other cases we could simply have a FAIL() that branches off
> to some postprocessing code to determine whether the failure is
> expected or not. This mechanism would subsume the current handling of
> CONFIG().

And that compels us to write all the verification code in C. I'd write
it for the core library if necessary, but this is considerably more
coding effort than we need to be spending on a regression suite.
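To illustrate the point, here is a rough sketch of what a verification
script for the linkhuge/ELFMAP case could look like. The helper name,
the banner parsing, and the 2.17 cutoff are all illustrative, not taken
from the actual patchset:

```shell
#!/bin/sh
# Sketch of a verification script: decide whether a linkhuge FAIL is
# expected because the installed ld is too old. The 2.17 cutoff is a
# placeholder, not the real minimum from the patchset.

# version_ok BANNER: succeed if the "ld --version" banner line BANNER
# reports a version of at least 2.17.
version_ok() {
	ver=$(printf '%s\n' "$1" | grep -o '[0-9][0-9]*\.[0-9][0-9]*' | head -n 1)
	major=${ver%%.*}
	minor=${ver#*.}
	[ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 17 ]; }
}

if version_ok "$(ld --version 2>/dev/null | head -n 1)"; then
	echo "UNEXPECTED FAIL: installed ld should support this test"
else
	echo "EXPECTED FAIL: installed ld is older than 2.17"
fi
```

A real verification script would report the result through its exit
status so the harness can relabel the FAIL; the point is that the whole
check is a dozen lines of shell rather than version-parsing code in C.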
-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

_______________________________________________
Libhugetlbfs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libhugetlbfs-devel
