On Thu, 20 Jul 2006 16:38:06 +0100, Dave Abergel wrote:
> I have some questions about the philosophy of running these tests. What is
> the point of testing software if the results of the test depend on so many
> irreproducible variables (like whether or not your system is under heavy use
> at the time)?
Normally the results should be very consistent for a given build config
and build environment. The reason some tests fail when the
system is under load is usually that they simply time out before completing.
When I was building some of the bigger toolchain packages on old
hardware (Pentium II 180MHz) last year, I had a lot of failures and
tracked the majority of them to timeouts. In some cases there is a
timeout or timeout-factor variable that you can set, but documentation is
way too scarce on that point!
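To give one concrete example (off the top of my head, so check the docs
for your versions): glibc's test harness respects a TIMEOUTFACTOR
environment variable that scales the per-test timeouts, so on slow
hardware something like this helps:

    # Scale glibc's per-test timeouts; the factor of 10 is just an example
    TIMEOUTFACTOR=10 make -k check 2>&1 | tee glibc-check.log

The DejaGnu-based suites (GCC, binutils) take their timeouts from the
board/site configuration files instead, which you have to dig out of the
DejaGnu docs.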
> Maybe it's just the scientist in me, but this seems to make interpretation
> of the results a bit tricky - especially for someone like me who is not an
> expert in these things.
Well, if you get timeouts under control, the results should be consistent,
which helps interpretation. But I agree it's not always easy. Part of the
problem is that the package is already large and complex, and the tests
have to run on many different platforms and also with different configs
for the package itself.
Certain tests just always fail on particular platforms and these can
be confirmed by comparing against logs of successful builds submitted
by other people with similar configs.
Try http://gcc.gnu.org/gcc-3.4/buildstat.html for starters.
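It helps to capture the full test output yourself, so you actually have
something to compare against those pages. Roughly (file and path names
are just examples):

    # Keep going past failures and save everything for later comparison
    make -k check 2>&1 | tee gcc-check.log
    # GCC's contrib/test_summary script condenses the .sum files into the
    # kind of report people post to the gcc-testresults list (it wraps it
    # in a small mail script, but the summary is in there)
    ../gcc-3.4.6/contrib/test_summary > gcc-test-summary.txt

Then a quick comparison of your summary against a posted result for a
similar platform shows which failures are "normal" and which are yours
alone.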
For the rest, if you get a small number of isolated failures, you're most
likely ok. Possibly some exotic feature will be broken in your package
or possibly the test case is broken. (E.g. I encountered a failed test
case once because I was building somewhere under my home dir and
/home was actually a symlink to /var/home.)
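When you do want to chase down one of those isolated failures, the
detailed .log file that DejaGnu writes alongside the .sum file gives the
exact command and error, and you can usually re-run just that one test.
For GCC, something along these lines (test name and paths are only
examples, and vary between versions):

    # Full output for the failure is in the .log next to the .sum
    less gcc/testsuite/gcc.log
    # Re-run only the torture tests matching one source file
    make check-gcc RUNTESTFLAGS="execute.exp=20020720-1.c"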
On the other hand, if you get a large cluster of failures (or worse yet,
the test suite fails to run) you can be sure that there is a major area
of functionality broken or a problem with some prerequisite package.
The LFS book is thorough enough IMHO that you should not encounter
such major problems if following it carefully. (I said _should_ not.)
The bottom line is that tests shouldn't just fail randomly.
There's always a reason.
> Now, perhaps I should just not bother to run the tests if I don't think
> there's any point (it's my distro, so it's my rules 8o) ) but this seems
> rather unsatisfactory, given that there seems to be this opportunity to
> find out if the software is working as expected.
It is unsatisfactory IMO. How much time you want to spend nit-picking
through the results is your choice, but if you get massive clusters of
failures you can bet something is wrong.
> I suppose that the easy answer would be that I've missed some documentation
> (by which I mean official docs, rather than LFS notes), and if I have then
> I'm sorry for the noise I've just made 8o).
I don't think you missed much, and if you did, you are not alone.
Much of what exists in this area would probably be found in the
mail archives of distro developers (where the definition of "distro"
includes LFS).
> If I've not missed anything, then my question stands. I'll try to rephrase
> it in a more explicit way.
> If 'a few' failures are acceptable, how do you define 'a few'?
Difficult to put a rule of thumb on it, but say < 2%?
Again, look for "clusters" indicating an area of functionality,
as opposed to isolated failures.
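If you want to put an actual number on it, the .sum files are easy to
count through; a quick-and-dirty sketch (file name is just an example):

    # Rough pass/fail tally and percentage from a DejaGnu summary file
    awk '/^FAIL:/ { f++ } /^PASS:/ { p++ }
         END { printf "%d FAIL / %d PASS (%.2f%% failed)\n",
                      f, p, (f+p ? 100*f/(f+p) : 0) }' gcc.sum
    # Sorting the failures makes clusters in one area jump out
    grep '^FAIL:' gcc.sum | sort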
> And if there are certain tests that are more critical than others, how do
> you know what these are?
Experience and intelligent guessing help--sometimes...
Check out the build logs of others--it's kind of experience for free ;-)
And one last tip: keep your own logs, or at least some notes; it will
save you a lot of time next time round if you already have a good
idea which tests are likely to fail in your environment, or if you have
to go back and track down a problem some pages further on in the
book and want to check back on that one failed test you weren't
sure was important at the time....
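For what it's worth, something as simple as dated log files per package
is already enough (the layout here is entirely a matter of taste, and
the names are just examples):

    # Keep a dated copy of each check run
    mkdir -p ~/lfs-logs
    make -k check 2>&1 | tee ~/lfs-logs/gcc-3.4.6-check-$(date +%Y%m%d).log
    # ...and a short list of just the failures, which is the part you
    # will actually want to look at again later
    grep '^FAIL' ~/lfs-logs/gcc-3.4.6-check-$(date +%Y%m%d).log \
        > ~/lfs-logs/gcc-3.4.6-fails-$(date +%Y%m%d).txt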
--
http://linuxfromscratch.org/mailman/listinfo/lfs-chat
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page