On Jan 12, 2012, at 10:40 PM, <[email protected]> wrote:
> Ran the test suite for coreutils:
>
> ====
> root:/scripts# grep -i fail src/coreutils-8.14/gnulib-tests/test-suite.log
> 1 of 270 tests failed. (24 tests were not run).
> FAIL: test-parse-datetime (exit: 134)
> test-parse-datetime.c:142: assertion failed
> ====
First, I'm still hoping someone will have some input on the coreutils test that
failed. Is this a show-stopper...?
Responses below...
>> I'm assuming that if the book lists these errors, we may as well exclude
>> them from consideration.
>
> I always have problems with my mudflap. In fact, I think my mudflap
> fails me all the time. (B^)>
Thanks for the feedback. Maybe my point was unclear. Simply put: if folks are
ignoring the libmudflap test results whenever they show errors, why pay
attention to them at all (unless you're a libmudflap dev, in which case you can
figure out how to turn that off)? We may as well include (perhaps optionally)
code to skip over the errors that most people already visually ignore.
I script my LFS builds (presumably, like you do). My scripts stop as soon as
any command fails (I use bash, with '-e' for fast-fail). Since my goal is to
automate as much as possible, I don't want to suffer a hard stop on failures
that are essentially expected. So I'm wondering if there's value (it sounds
like your answer would be 'yes') in adding detection of known test failures
and continuing in spite of them.
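For concreteness, here is a minimal sketch of that fast-fail setup (the package
name, directory, and configure options are just placeholders, not from the
book): with '-e', the first command that exits non-zero stops the whole build,
and that includes an "expected" test failure in 'make check'.
====
#! /bin/bash -e
# -e: abort immediately when any command exits non-zero.
cd /sources/foo-1.0              # hypothetical package directory
./configure --prefix=/usr
make
make check     # a single "expected" test failure would stop the build here
make install
====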
>> __GLIBC_TEST_ERROR_COUNT=$(grep Error glibc-check-log | grep sources |
>>     egrep -v "posix/annexc|nptl/tst-clock2|nptl/tst-attr3|rt/tst-cpuclock2|misc/tst-writev|elf/check-textrel|nptl/tst-getpid2|stdio-common/bug22|posix/bug-regex32" |
>>     wc -l)
>
> I think that code reads terrible.
>
>> if [ 0 -ne $__GLIBC_TEST_ERROR_COUNT ] ; then
>> grep Error glibc-check-log | grep sources
>> false
>> fi
>
> This is readable code.
It was an attempt to communicate an idea, not necessarily to write good-looking
code; that said, I understand that ugly code can get in the way of the idea, so
here is a nicer-looking version that avoids inlining the XFAILs. A heredoc can
generate the regex for the egrep -v, like this:
====
__XFAIL=$(
cat <<EOF
fail-test-1.c
fail-test-2.c
fail-test-3.c
[Add other expected failure patterns to this list]
[Another pattern]
EOF
)
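# Unquoted $__XFAIL collapses the newlines to single spaces; sed then turns
# each space into '|', producing one big alternation for the egrep -v below.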
__XFAIL=$(echo $__XFAIL | sed 's/ /\|/g')
[testing occurs here...]
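# Count every failure, then count only the failures NOT on the expected list.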
__FAIL_COUNT=$(cat $TEST_OUTPUT | grep "Error:" | wc -l)
__XFAIL_COUNT=$(cat $TEST_OUTPUT | grep "Error:" | egrep -v "$__XFAIL" | wc -l)
====
Q
Postscript: the full script, including some random test-output:
====
#! /bin/bash
set -e    # fast-fail, matching the '-e' usage described above; 'false' below aborts the run
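# Fabricate two sample test logs: test-cont.out contains only expected
# failures; test-abort.out also contains one unexpected failure.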
(
cat <<EOF
fail-test-7.c
Error: fail-test-1.c
Error: fail-test-3.c
fail-test-6.c
fail-test-4.c
fail-test-2.c
fail-test-5.c
EOF
) > test-cont.out
(
cat <<EOF
fail-test-7.c
Error: fail-test-1.c
Error: fail-test-3.c
fail-test-6.c
fail-test-4.c
fail-test-2.c
Error: fail-test-5.c
EOF
) > test-abort.out
################################################################
#
# Actual code-START
#
################################################################
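# Expected-failure (XFAIL) patterns, one per line; folded below into a
# single egrep alternation, exactly as in the sketch above.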
__XFAIL=$(
cat <<EOF
fail-test-1.c
fail-test-2.c
fail-test-3.c
EOF
)
__XFAIL=$(echo $__XFAIL | sed 's/ /\|/g')
for TEST_OUTPUT in test-cont.out test-abort.out ; do
echo "Checking file [ $TEST_OUTPUT ]..."
__FAIL_COUNT=$(cat $TEST_OUTPUT | grep "Error:" | wc -l)
__XFAIL_COUNT=$(cat $TEST_OUTPUT | grep "Error:" | egrep -v "$__XFAIL"
| wc -l)
if [ 0 -lt $__XFAIL_COUNT ] ; then
cat $TEST_OUTPUT | grep "Error:"
echo " [ $TEST_OUTPUT ] tests FAILED; aborting"
false
elif [ 0 -lt $__FAIL_COUNT ] ; then
echo " [ $TEST_OUTPUT ] tests had expected failures;
continuting"
fi
done
################################################################
#
# Actual code-END
#
################################################################
====
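Run as-is (with the '-e' fast-fail in effect), that should print something
along these lines, stopping after the second file:
====
Checking file [ test-cont.out ]...
 [ test-cont.out ] tests had expected failures; continuing
Checking file [ test-abort.out ]...
Error: fail-test-1.c
Error: fail-test-3.c
Error: fail-test-5.c
 [ test-abort.out ] tests FAILED; aborting
====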
--
http://linuxfromscratch.org/mailman/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/lfs/faq.html
Unsubscribe: See the above information page