On Sat, Oct 27, 2018 at 06:02:29AM +0100, Ken Moffat via blfs-dev wrote:
> 
> Last reported time 112m29.59.
> 
> Going to bed, as soon as I get the next attempt started.
> 
Status report (describing it as a Progress report might be too
optimistic - every question raises another).

Still playing with variations, but some tentative conclusions:

1. One of the ideas I got was to play with codegen units in the
RUSTFLAGS.  I never did get that accepted - even =4 was treated as
non-numeric (but only after a few minutes of the build, when it
reached the first rust crate, I suppose).  However, while looking
for how to specify it correctly I (re)discovered that it makes the
code bigger (optimizations only look at smaller parts of the code
instead of the whole crate) in exchange for faster compiles, so
that is not the way to go.
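For reference, rustc takes codegen-units through its -C codegen
flag rather than as a bare name=value, which may be why a bare
"codegen-units=4" was rejected (my assumption) - a sketch of the
usual spelling, not a recommendation here:

```shell
# Sketch only: rustc's codegen-units is a -C (codegen) option, so
# in RUSTFLAGS it would be spelled like this.  More units means
# bigger, less-optimized code in exchange for faster compiles.
export RUSTFLAGS="-C codegen-units=4"
```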

2. The other idea I got was to try -jN on ./mach build.  It needs
to come after the target - ./mach -j4 build is quickly spat out.
Passing it appears to prevent my Ryzen 3 falling back to 1 core
for long periods of time.
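In other words, mach treats options before the command as its own
global options, and only options after the command belong to that
command - a sketch (assuming you are in the firefox source tree):

```shell
# This fails: -j4 is not a global mach option, so it is rejected.
#   ./mach -j4 build
# This works: -j4 is passed to the build command itself.
./mach build -j4
```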

3. The build using llvm-7.0 is bigger than with llvm-6.0.  With the
options I'm currently using (and shipped graphite2, harfbuzz because
I've always described the system version as experimental), du -sch
reports (all with default CFLAGS, CXXFLAGS)

clang-6.0       8.5GB + 151MB = 8.6GB
clang-7.0       8.8GB + 153MB = 9.0GB
gcc-8.2         9.9GB + 154MB = 10GB
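For anyone wanting to reproduce the figures above, a sketch of the
measurement (the directory names here are my guesses, not what was
actually typed):

```shell
# Totals for the build tree and the DESTDIR install, plus a grand
# total on the last line (-s summarize, -c total, -h human-readable).
du -sch firefox-63.0/ destdir/
```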

4. Once the real compilation has begun, the time the log reports
for an item is when that item *completed* - sometimes it can be
quite a while before the next item, although *many* tasks are
short-lived (I'm watching 'top').  Many clang tasks, and most rust
tasks, are quick, but some of the rust tasks take several minutes
of CPU time.

5. With rustc-1.29 and firefox-63.0, the decision on how many CPUs
are available seems to be flaky.  On my Ryzen 3 (4 cores, no
multithreading), although the build starts by using all cores, and
often gets back to four near the end, in the main part it can fall
back to 2 or even 1 core (with all three loadavg readings close to
1.0).

6. I eventually noticed that the DESTDIR install starts with
 /usr/bin/make -C . -j4 -s -w install
so it *does* know how many cores exist.

7. The installs on this box take a lot longer than on the Haswell,
and there I had the impression (building with only 4 cores
available, using taskset) that it had counted the number of online
cores - it seemed to be running with big loadavg figures.  But I
had not bothered to log any of the installs there; it didn't seem
interesting :-(

8. Meanwhile, successive runs using -j4 continue to have very
different timings (both for when similar items report, and
overall).  At first I thought the builds might be in different
(i.e. random) orders, but I now think the main parts come in the
same sequence.  But I noticed that warning messages (mostly C/C++,
sometimes rust) appear on stderr as they occur, and those turn up
in VERY different places in the two builds - even though the total
warning counts match.

At the moment, I'm playing with -jN.  So far, -j6 seems better than
-j4 and -j8, currently trying -j5.  In theory, rust should be using
the same number of jobs as there are cores, but that might not be
optimal on good machines.  Need more data, but I lack enthusiasm for
doing it on my older machines.
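If anyone wants to gather more -jN data points, one way to script
it (a sketch - the firefox source tree is assumed, and I'm assuming
a clobber between runs to keep the timings comparable):

```shell
# Time a full build at each -jN value, clobbering first so every
# run starts from scratch; one timing file per run.
for j in 4 5 6 8; do
    ./mach clobber
    /usr/bin/time -o "build-j${j}.time" ./mach build -j"${j}"
done
```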

The -j5 run just finished (BLFS-8.3, llvm-6.0) - 20 SBU instead of
24 SBU, but both were single runs, and on more recent svn with
llvm-7.0 I've seen 37 SBU and 27 SBU for two -j4 runs.

By her noodliness, I really dislike the ./mach build system!
[ http://uncyclopedia.wikia.com/wiki/Pastafarian_Sects - well, at
times like this I need a little levity ]

ĸen
-- 
                        Is it about a bicycle ?
-- 
http://lists.linuxfromscratch.org/listinfo/blfs-dev