On Fri, Jun 19, 2020 at 06:58:42PM +0200, Pierre Labastie via lfs-dev wrote:
> On Fri, 2020-06-19 at 16:26 +0100, Ken Moffat via lfs-dev wrote:
> > I've now been through my test logs for the new build (on my i7
> > haswell).
> > 
> > Here are a few comments (in order of testing)
> > 
> > glibc-2.31.0
> > ------------
> > 
> > We say "misc/tst-ttyname is known to fail in the LFS chroot
> > environment." But for me it was skipped (I'm running a 5.7.2
> > kernel, with 5.7.2 headers and 5.7 iproute2).
> > 
> > But my log says
> > 5102 PASS
> >   44 UNSUPPORTED
> >   16 XFAIL
> >    2 XPASS
> > 
> > And misc/tst-ttyname is now among the unsupported -
> > UNSUPPORTED: misc/tst-ttyname
> 
> I have quite different results (was with 5.6.15 kernel and headers, and
> iproute2-5.6):
>    1 FAIL
> 5177 PASS
>   22 UNSUPPORTED
>   17 XFAIL
>    2 XPASS
> 
> with
> FAIL: misc/tst-ttyname
> 
> I cannot tell why it has run more tests for me than for you (I mean the
> 22 less UNSUPPORTED cannot explain the 75 more PASS)
> 
> Note that in a VM with an old debian 8 system (kernel 3.16), I have 24
> UNSUPPORTED and 5175 PASS, otherwise identical.
> 
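(A side note on how I'm reading these, so that we can be sure we are
comparing like with like: I pull the per-test verdicts out of the
*.test-result files under the glibc build directory, roughly as below.
This is only a sketch of what my script does - the paths assume you run
it from the top of the build dir:)

  # list everything that was not run
  grep -r --include='*.test-result' '^UNSUPPORTED' . | sort

  # for a single test, the .out file usually records why it was
  # skipped or what actually failed
  cat misc/tst-ttyname.test-result
  cat misc/tst-ttyname.out
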
Strange, but I'll have to accept it does fail.  On this machine using
cross-chap5-20200513 I got the same results as in the current build.  And
on cross-chap5-20200603 on a ryzen.  BUT - on a different ryzen, using
_trunk_ from 20200603 my results match yours.

> > 
> > I'm also now wondering about the comment on the rt/tst-cputimer
> > tests.  I don't have any systems running kernels older than 5.4, but
> > the detail suggests that "mid-period" 4.14 and 4.19 stable kernels,
> > as well as some or all of 4.20, failed here.  Could we not just say
> > that some kernels before linux-5.0 cause these tests to fail ?
> 
> Don't remember, so I trust you. This test passed on the 3.16 debian
> kernel, though. But some fixes may have been backported.
> 
I really don't remember either, but mentioning 4.20 kernels looks a bit
messy (they haven't been maintained in ages), and I think less-specific
wording would calm users who either build on very old LFS versions where
they have never updated their kernel, or on Ubuntu, which I believe is
doing its own maintenance for 4.15.

> > 
> > gcc-10.1.0
> > ----------
> > 
> > I seem to be getting rather more failures than the book implies,
> > although I don't think they are either serious or unexpected.
> > 
> > First, 14 failures in the torture test, variants of
> > FAIL: gcc.c-torture/compile/limits-exprparen.c
> 
> Isn't it the one that fails when ulimit is not increased?
> 
Maybe.  I'm increasing the ulimit as root, then running the tests as
'tester', which matches the book.  Again, my build of trunk from 20200603
didn't fail these, but cross-chap5 from that date did.  Very strange; it
looks as if maybe _I_ lost something in the change (well, I know I already
lost several things in the change - fixes in one branch of my scripts but
not in the other - but most of those were for BLFS).

> > 
> > Second, as well as the 6 locale/get_time test failures I also had
> > FAIL: 20_util/unsynchronized_pool_resource/allocate.cc execution test
> > FAIL: 22_locale/numpunct/members/char/3.cc execution test
> 
> Never seen those ones
> 
I had both on the 20200603 cross-chap5 (along with a couple of others I've
not seen before or since); on the 20200603 trunk I also had the 'numpunct'
failure.

> Most often, I have only the 6 22_locale/get_time failures on the
> Haswell or 64 bit VM's.
> 
> On the i686 VM, I had 13 errors for libstdc++, but not the same as the
> ones you report.
> 
> > 
> > bison-3.6.3
> > -----------
> > 
> > Here, I strongly disagree that the tests need to be run with -j1.
> > 
> > On my i3 Skylake last December I had to use -j1 to get the package
> > to compile, and therefore I also used -j1 on that machine if I ran
> > the tests.  But on other machines I'm using -j8 both for the compile
> > and for the tests (and no failures).
> 
> Could be changed I guess
> 
> > 
> > python-3.8.3
> > ------------
> > 
> > "The test named test_normalization fails because network
> > configuration is not completed yet."  I assume that the work on
> > /etc/hosts has solved this, I have:
> > 
> > 0:01:42 load avg: 6.94 [369/423] test_normalization passed --
> > running: test_concurrent_futures (1 min 38 sec),
> > test_multiprocessing_spawn (1 min 33 sec)
> > 
> > and at the end it reported SUCCESS before listing the skipped tests.
> > 
> 
> Not run those since the book tells not to run them
> 
I'm sorry, where does it say that for Python?
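
(For clarity, by "the work on /etc/hosts" I mean the recent change that
creates a minimal hosts file early enough for the chroot test suites to see
it - at least, that is my understanding.  From memory the file is something
along these lines; treat it as a sketch rather than the book's exact text,
and adjust the hostname to taste:)

  cat > /etc/hosts << "EOF"
  127.0.0.1 localhost <your-hostname>
  ::1       localhost
  EOF
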
> > acl-2.2.53 (tested after coreutils)
> > ----------
> > 
> > In the past I'd assumed that the two failures I was seeing
> > [test/root/permissions.test and test/root/setfacl.test] were because
> > I didn't use ACLs (although ext4 supports them), but that was
> > probably a mistaken assumption.  Anyway, now I don't have any
> > failures in acl.
> 
> Not run either
> 
I can understand that; the book doesn't really support it.

> > 
> > util-linux-2.35.2
> > -----------------
> > 
> > I have to admit that this is my first build with this version; I'd
> > managed to miss the change to it.  I see that we use 'make -k check'
> > in the expectation that things may fail.  And I do get one failure:
> > 
> > FAILED (column/invalid-multibyte)
> > 
> > ISTR that has happened in the past, and I had the impression that
> > the cross- changes had fixed it, but maybe I'm mistaken.
> 
> For me they now all pass (on all my machines). Note that I do not use
> any special CFLAGS. Those are all jhalfs builds. The cpu is exactly the
> same as yours.
> 
> Pierre
> 
But my CFLAGS go up to eleven!  Right across the board - eleven, eleven,
eleven!  (paraphrasing Nigel Tufnel, This Is Spinal Tap)

Yeah, it's possible that something there might be involved, but we
currently use 'make -k check' so it's probably not a priority for me.

The differences in my gcc tests are more perplexing; I'll need to find
time to think about those.

Thanks for the data.

ĸen
-- 
He died at the console, of hunger and thirst.
Next day he was buried, face-down, nine-edge first.
                         - the perfect programmer