On Wed, Jun 15, 2016 at 03:01:23PM -0700, Paul Rogers wrote:
> Merging replies to/for Ken and Bruce since it's one thread...
>
> > a quick look at your past few posts (or rather, those which I have
> > received - it's always possible my upstream decided to drop one)
> > only shows things to do with the headers.
>
> For clarification then on this fast-moving news <cough, cough>: I
> was getting failures on the API headers, couldn't figure out why;
> Bruce suggested starting over. Did so. Then, magically, I got
> through the API headers but died at the GCC check. Stuck there for
> now, with everything in place for diagnosis.
>
> > My memory of 7.7 is very feint, and in those days my test systems
>
> "Feint" as in "dodgy"? ;-D

"Feint" as in "fading away towards nothingness".
> > were all AMD, but I do recall that until recently I got various
> > failures in gcc. Indeed, that book uses 'make -k check' because
> > failures ARE expected. If you get only a few, you are probably
> > good to continue. If you get 500+ (been there in LFS-6 on
> > (unsupported) ppc) things have broken.
>
> It was going well, but whatever it is caused the make script to fail.
>
> > Actually, that reminds me - the contrib script used to let me run
> > the tests in parallel and then summarise the results. At some time
> > in the past couple of years I now recall posting (on -dev) that I
> > was getting a LOT of failures in the c++ tests. In the end, the
> > solution was to use -j1 for the tests.
>
> OK. I've used -j1 in the past, but the book doesn't say so here, as
> it does elsewhere, so I was doing -j8 (still modest for a 4-core
> hyperthreaded i7, I suppose). I can certainly give that a try, if
> it's decided there are no fingerprints to find with the current
> state of affairs.

Old advice used to be one make job per core, plus one extra. I
recently saw something suggesting up to twice the number of cores,
but that was an outlier. And your i7 probably has only 4 real cores,
with hyperthreading - in that case -j8 should be close to maxing it
out, and I would only expect faster completion with higher -j numbers
if you have multiple very fast disks.

> > [ snipping stuff about jhalfs because I don't use it - but you
> > probably want the latest version from svn ]
>
> Actually I'd rather NOT use jhalfs because it wouldn't fit in with
> my existing infrastructure. All I was expecting from it were
> scripts I could check against mine.
>
> > The other thing is that most people who are regularly
> > contributing to this list think 7.7 is *old*. At the moment,
> > anybody doing
>
> It's not that old! IMO, of course. Only two releases back.

A bit over a year. So much has changed. On linux desktops, a year is
a long time.

> > development testing is discovering the pleasures (in the sense of
> > "this is now undefined behaviour, so we'll trash it") of g++-6.1.
>
> Sounds like some other organization we all know and love. <cough,
> cough>

It's the evolving c++ standards. Unfortunately, a lot of software is
written in c++.

> I wasn't questioning the libc.so.6. It was found. Since I don't
> know what the problem might be, I was showing all the ones not
> found. It isn't clear to me why they aren't found, and if that's
> a/the problem. And yes, I have checked the 7.9 book and it appears
> no different except for the gcc version number.
>
> At the moment I have no better idea than Ken's -j1, and that isn't
> in either book.

The -j1 is only for running make check. But you haven't indicated how
many tests failed.

The toolchain libs tend to be put in different places by different
build methods (i.e. different distros) - try running strace on a
desktop program (on a working system), sending the output to a file,
and then looking at the many places where things might be searched
for, particularly on 32-bit, where directories such as i386 and i686
might be looked at. The fact that something was not found in the
first place the system looked is not important, provided it did
eventually get found. So we don't mention the "not found" messages,
because they are usually neither useful nor interesting.

Let me try a different approach: you don't want to run jhalfs - at
the risk of pissing off Bruce and Pierre, I can understand that:
people who stick with LFS tend to have our own ideas.
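For the strace suggestion, something along these lines should do it
(xterm is only an example - any working desktop program will do):

    # log every file-open attempt, following child processes, to a file
    strace -f -e trace=open,openat -o /tmp/prog.trace xterm

    # the ENOENT lines are the harmless "not found" probes - the
    # loader trying each directory in turn
    grep ENOENT /tmp/prog.trace | head -n 20

    # and this shows where a given library was eventually found
    grep 'libc.so.6' /tmp/prog.trace | grep -v ENOENT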
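And on counting the failures: if you still have the logs, gcc ships a
summariser in contrib/ in its source tree. From the build directory,
something like this (untested from here, and it assumes the book's
layout, where the build directory sits inside the unpacked gcc tree):

    ulimit -s 32768    # the book raises the stack limit for the tests
    make -k -j1 check  # -k: keep going past failures; -j1: run serially
    ../contrib/test_summary | grep -A7 Summ  # condensed pass/fail counts

(-j8 remains fine for the build itself - the serial run is only for
the testsuite.)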
And you have your own scripts (good), which were working on x86_64.
But do you understand how your scripts can fail? And do they give you
information to help identify where they got to before they failed?

A lot of people like the 'bail on error' approach: I have moved to
that, but I cannot say that I *like* it. Many people put the absolute
minimum into their scripts, but I take the opposite approach: my
"driver" script (chroot.sh, in this context) tells me which "client"
(package, or step) script it is trying to execute, and then the
client scripts produce various messages to show things like patching,
running seds, running configure, running make, running check
[ actually that's a lie - most chroot scripts don't put anything on
the screen for the tests, because I might build without running them,
controlled by a function set by the options passed to the driver
script; it's my BLFS scripts that report on running tests, for those
few packages where I run them ], install, various fixups, etc. And I
split some of what the book regards as one step into multiple scripts
(so, if my fellow Britons vote to go back 70 years, I will be able to
apply the updated timezone file, with regret).
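To make that concrete, a minimal (untested) sketch of the
driver/client split - chroot.sh is the real name of my driver, but
every other name and path here is invented for illustration:

    #!/bin/bash
    # chroot.sh - the driver: announce each client script, run it
    # with output captured to a log, and stop at the first failure.
    for client in clients/*.sh; do
        echo "=== running ${client} ==="
        if ! bash -e "${client}" > "logs/$(basename "${client}").log" 2>&1
        then
            echo "!!! ${client} failed - see its log"
            exit 1
        fi
    done
    echo "=== all client scripts completed ==="

and each client script narrates its own steps, e.g.

    #!/bin/bash
    # clients/070-gcc.sh (illustrative) - announce each step as it starts
    echo "patching..."    ; patch -Np1 -i ../some-fix.patch
    echo "configuring..." ; ./configure --prefix=/usr
    echo "building..."    ; make
    echo "testing..."     ; make -k check
    echo "installing..."  ; make install

ĸen

--
I had to walk fifteen miles to school, barefoot in the snow. Uphill
both ways. NB - that is a joke, like the four yorkshiremen sketch.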
