On Fri, Feb 15, 2008 at 12:11:42PM -0800, Dan Nicholson wrote:
> 
> Ken will disagree, but I'd say that for your class I wouldn't run the
> testsuites, only the book's sanity checks. While running the toolchain
> testsuites are vital if you want to ensure that your new system will
> work properly, you're just creating temporary development
> environments. Furthermore, the toolchain tests are not going to be the
> interesting part for your students.
> 
 Will I disagree?  For the book's users, I'm ambivalent about
testsuites - my experience is that they *can* show up serious
breakage.  For people who ask for support here, much better that we
tell someone "sorry, your build is broken because..." at an early
stage.  That's why I really feel uncertain about the "learning by
building" part of such a short course - much of the learning comes
from making mistakes, learning how to look at what you did (history,
then finding a way to log what happened, then realising you need the
error messages as well...), and finding solutions.
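
 For what it's worth, the "log what happened" step is just shell
plumbing.  A minimal sketch (the command here is an invented
stand-in for a real build step):

```shell
# Run a build step and keep everything it printed, stderr included,
# so the error messages survive for later study.
# (the echo commands stand in for a real compile that fails)
{ echo "compiling foo.c"; echo "foo.c:1: error: oops" >&2; } > build.log 2>&1

# Later, pull out just the errors from the log.
grep error build.log
```

The `2>&1` is the part beginners usually miss: without it the error
messages go to the screen and never reach the log.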

 Certainly, it's useful for a student to learn that the testsuites
work in different ways.  But just because a testsuite passes, that
doesn't always mean the build is good.  Equally, I see far too many
failures to get worked up about them.  I have no experience of
building in a virtual OS, but I think people have seen exotic test
failures in glibc when doing that.  And your course has to end up
with a working system or it is pointless.

 As the tutor, you ought to know more than what you teach the
students, so it would be helpful if you run the testsuites during
your preparation.  I really think you need to follow the book more
than once, some things only become apparent the second or third time.

 For the students, it's perhaps useful to understand that package
maintainers have different ideas about the purpose of testsuites,
and the output can vary dramatically between packages.  Gcc or
binutils are useful examples, but gcc in particular is very slow to
test.  Glibc's test output is fairly impenetrable.  I categorise
make as "should all work, and gives you a nice warm feeling", udev as
"should all work, but might have suspect addition, and does it really
tell you a lot?", findutils as "good for showing the instructions
missed something", the current (old) module-init-tools as "how many
tests can you run and say all one of them succeeded?", vim as "WTF?",
perl as "I've got to look through 1500 tests to see which one
failed?".
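
 For suites like perl's, a log plus a little grep spares you reading
the whole thing.  A minimal sketch (the log contents are invented;
in practice you'd fill check.log with `make -k check 2>&1 | tee check.log`):

```shell
# Invented stand-in for a captured testsuite log.
printf 'PASS: t_basic\nFAIL: t_locale\nPASS: t_utf8\n' > check.log

# One line per failure, instead of scanning 1500 results by eye.
grep '^FAIL' check.log
```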

 If you have time, it would be good to select a package or two on
which to run the tests.  You might even find one where you can look
at the files and explain _how_ the tests run.  Heh, I might pay good
money to go on that!

 In reality, I think you'll probably have to pre-build chapter 5,
and probably build large parts of chapter 6 between the classroom
sessions, just to get it all done.  In this country, we have, or
had, a kid's TV program where they used to make things, with the
catchphrase "here's one I prepared earlier".  You might end up doing
the same thing so you can concentrate on what adds value.

ĸen
-- 
the first time as tragedy, the second time as farce