On Sat, Nov 10, 2018 at 12:23:04AM +0100, Christopher Gregory via blfs-support
wrote:
>
>
> > Sent: Saturday, November 10, 2018 at 11:27 AM
> > From: "Ken Moffat via blfs-support"
> > <[email protected]>
> >
> > Summary - with (only) 4 jobs, 8GB RAM might be enough if nothing
> > else is running.
> >
> > I'm _surprised_ that Christopher's machine OOM'd with a total of
> > 14GB RAM+swap and "only" 6 cores (that implies 8 jobs by default in
> > ninja) - I guess that hyperthreading, if available, was turned off
> > so that 'top' only shows 8 CPUs.
> >
>
> Hello Ken,
>
> I tried the build without ninja, the way that I have built webkit in the past
> on this machine, with make -j12 and it ran out of memory as well. I ran make
> -j9 and there were no more out of memory errors, and it built in around 2
> hours 20 mins, whereas with ninja it was 7 hours. I am not sure if
> hyperthreading is on or off.
>
The reason we now use ninja on cmake packages is that it usually
appears to be faster (with recent cmake; it used to be faster only
when rebuilding after a change to an already-built or part-built
tree). But webkitgtk is big, and anything written in C++ uses a
lot of memory for each compile job that is running.
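
As an illustration only (the exact cmake switches come from the
book's page for whichever package is being built; the generator
option is the part that matters here):

  mkdir -v build && cd build
  cmake -DCMAKE_INSTALL_PREFIX=/usr \
        -DCMAKE_BUILD_TYPE=Release  \
        -G Ninja ..
  ninja
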
I suspect that when you said 6 cores you meant 6 physical cores,
and that hyperthreading is in fact on, giving 12 apparent CPUs.
My settings in 'top' show a line for each online CPU, but that
might not be the default. The alternatives are to look in
/sys/devices/system/cpu to see how many are reported, to run
lscpu and check its 'On-line CPU(s) list:' line, or just to
'less /proc/cpuinfo'.
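
If you want to check, something like this will show how many CPUs
the kernel sees and whether each core carries two threads (plain
util-linux plus the stock /proc and /sys interfaces):

  lscpu | grep -E 'On-line CPU\(s\) list|Thread\(s\) per core'
  grep -c '^processor' /proc/cpuinfo
  ls -d /sys/devices/system/cpu/cpu[0-9]*

A 'Thread(s) per core: 2' line means hyperthreading is on.
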
As to the old "use twice the number of cores for the number of
make jobs" advice - that seems to work adequately for the kernel,
but all the code there is C, not C++. When I have tested bigger
numbers of parallel make jobs on big desktop applications, my
general impression has been that N or N+1 jobs (for N cores) is
best.
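
For example (the numbers are only illustrative - substitute your
own core count):

  make -j6     # one job per real core
  ninja -j6    # ninja takes the same -jN, overriding its cores+2 default
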
> For me personally, I see absolutely no reason to use ninja for as long as
> they provide cmake as the build. It is faster for me and less hassle to just
> drop ninja. I had a look at the ddr3 ram prices, and have noticed that here
> they are on the climb at $124 for 8 gig. I am still curious to know if swap
> even gets used on a solid state drive.
>
For swap - yes, it does get used (my figures were from a machine with
only a single 480GB SSD).
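If you want to see for yourself whether it gets touched, watch
from another terminal while the big link jobs are running (stock
util-linux / procps tools):

  swapon --show    # active swap devices and how much of each is in use
  free -h          # the 'Swap:' line shows used/free
  grep -E 'SwapTotal|SwapFree' /proc/meminfo
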
As to buying DDR3, whether it is worthwhile depends on the specs -
I have another machine (now mostly powered off) which uses, I
think, 2x4GB of DDR3-1866. I bought 2x8GB, but it was slower (and
unstable at its rated speed), and timing how long things took to
build with it was very disappointing, so I went back to the
2x4GB.
> I guess I will have to get some more ram before ddr3 goes out of stock due to
> being obsolete. I am not able to afford the ryzen upgrade kits as they are
> around $1700NZD.
>
> Regards,
>
> Christopher.
Unfortunately, for some use-cases (e.g. the rustc testsuite and
firefox's Python2 build system) my experience with ryzen has been
variable. And of course its single-threaded performance is
comparatively poor. I think intel chips ramp up their frequency
more quickly, so a task being moved across CPUs - I watch 'top'
to see that - appears to result in slow builds. But second-series
ryzens should be better in that respect.
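
If you want to watch that behaviour yourself, per-core frequencies
can be read while a build runs (assuming the cpufreq sysfs
interface is present on your kernel; 'top' with per-CPU lines
shows which core the job is on):

  watch -n1 "grep 'cpu MHz' /proc/cpuinfo"
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
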
ĸen
--
Is it about a bicycle ?