Re: [gentoo-user] Re: long compiles

2023-09-13 Thread Neil Bothwick
On Wed, 13 Sep 2023 21:20:20 -0000 (UTC), Grant Edwards wrote:

> About a year ago I finally gave up building Chromium and switched to
> www-client/google-chrome.  It got to the point where it sometimes took
> longer to build Chromium than it did for the next version to come out.

That's why I run stable Chromium on an otherwise testing system.


-- 
Neil Bothwick

We all know what comes after 'X', said Tom, wisely.




[gentoo-user] Re: long compiles

2023-09-13 Thread Grant Edwards
On 2023-09-13, Kristian Poul Herkild wrote:

> Nothing compares to Chromium (browser) in terms of compilation times. On
> my system with 12 CPU threads it takes about 8 hours to compile - which
> is 4 times longer than 10 years ago with 2 threads ;)

About a year ago I finally gave up building Chromium and switched to
www-client/google-chrome.  It got to the point where it sometimes took
longer to build Chromium than it did for the next version to come out.

--
Grant




Re: [gentoo-user] long compiles

2023-09-13 Thread Kristian Poul Herkild

Hi.

Nothing compares to Chromium (browser) in terms of compilation times. On 
my system with 12 CPU threads it takes about 8 hours to compile - which 
is 4 times longer than 10 years ago with 2 threads ;)


LibreOffice takes a few hours, but less than half of Chromium's time. Nothing 
gets close to Chromium. Otherwise webkitgtk and qtwebengine are the big 
ones - but they still take only about a quarter of Chromium's time.


Kristian Poul Herkild

On 11.09.2023 at 21.19, Alan McKinnon wrote:
After my long time away from Gentoo, I thought perhaps some packages 
that always took ages to compile would have improved. I needed to change 
to ~amd64 anyway (dumb n00b mistake leaving it at amd64). So that's what 
I did and let emerge do its thing.


chromium has been building since 10:14; it's now 21:16 and still going, 
so at least 9 hours on this machine to build a browser - almost as bad 
as openoffice at its worst (which regularly took 12 hours). Nodejs also 
took a while, but I didn't record the time.



What other packages have huge build times?

--
Alan McKinnon
alan dot mckinnon at gmail dot com




Re: [gentoo-user] long compiles

2023-09-13 Thread Michael
On Wednesday, 13 September 2023 13:41:00 BST Peter Humphrey wrote:
> On Wednesday, 13 September 2023 12:50:20 BST Wols Lists wrote:
> > On 13/09/2023 12:28, Peter Humphrey wrote:
> > > A thought on compiling, which I hope some devs will read: I was tempted
> > > to
> > > push the system hard at first, with load average and jobs as high as I
> > > thought I could set them. I've come to believe, though, that job control
> > > by portage and /usr/bin/make is weak at very high loads, because I would
> > > usually find that a few packages had failed to compile; also that some
> > > complex programs were sometimes unstable. Therefore I've had to throttle
> > > the system to be sure(r) of correctness. Seems a waste.
> > 
> > Bear in mind a lot of systems are thermally limited and can't run at
> > full pelt anyway ...
> 
> No doubt, but apparently not this box: I run it 24x7 with all 24 CPU threads
> fully loaded with floating-point calculations, which make a good deal more
> heat than 'mere' compiling with (I assume) integer arithmetic.   :)

I recall this being discussed in a previous thread, but if your CPU has 24 
threads and you've set:

EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32"
MAKEOPTS="-j14"

You will be asking emerge to run up to 4 x 14 = 56 make jobs, each of which 
could potentially eat up to 2G of RAM, i.e. 112G in the worst case. This will 
exhaust your 64G of RAM, before taking into account whatever else the OS is 
trying to run at the time. The --load-average value is a floating point 
number, normally set as a fraction of full load times the number of cores; 
e.g. for 12 cores you could set it at 12 x 0.9 = 10.8 to limit the load to 
90% and so maintain some system responsiveness.
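
As a rough sketch, and assuming the worst-case figures above, something like 
this would keep a 24-thread box with 64G of RAM inside its budget (the exact 
numbers are illustrative, not a recommendation):

# /etc/portage/make.conf -- sketch for 24 threads, 64G RAM
# 24 x 0.9 = 21.6 caps the load at ~90% of the thread count
EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=21.6"
# worst case 2 x 12 = 24 make jobs; at ~2G each that is ~48G, within 64G
# (-l is make's own load limit, matching emerge's --load-average)
MAKEOPTS="-j12 -l21.6"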

Of course, not all emerges use make, and you may never, or only rarely, 
emerge 4 monster packages in parallel that all need 2G of RAM per thread at 
the same time.

If only we had at our disposal some AI algorithm to calculate dynamically, 
each time we run emerge, the optimal combination of parallel emerge jobs and 
make tasks, so as to achieve the highest total time saving vs. energy spent!  
Or just the highest total time saving.  ;-)

I haven't performed any meaningful comparisons to determine where the greatest 
gains are to be had: parallel emerges of many small packages, or a large 
number of make jobs for big packages. The best combination would change each 
time according to the individual packages waiting for an update. In my use 
case it instinctively feels more beneficial to reduce the time I have to wait 
for huge packages like qtwebengine to finish than to accelerate the updates 
of half a dozen smaller packages. Therefore, as a rule I leave 
EMERGE_DEFAULT_OPTS unset, set the MAKEOPTS job count to the number of CPU 
threads +1, and set the load average at 95%, so I can continue using the PC 
without any noticeable latency. On PCs where RAM is constrained I reduce 
MAKEOPTS via /etc/portage/package.env for any large packages which are bound 
to start swapping and thrashing the disk, as sketched below.
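
A minimal sketch of that per-package override, using Portage's standard 
/etc/portage/env mechanism (the file name and the -j4 figure are 
illustrative):

# /etc/portage/env/heavy.conf -- reduced parallelism for RAM-hungry builds
MAKEOPTS="-j4"

# /etc/portage/package.env -- apply the override to the known offenders
www-client/chromium heavy.conf
dev-qt/qtwebengine  heavy.conf
net-libs/webkit-gtk heavy.conf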




Re: [gentoo-user] long compiles

2023-09-13 Thread Peter Humphrey
On Wednesday, 13 September 2023 12:50:20 BST Wols Lists wrote:
> On 13/09/2023 12:28, Peter Humphrey wrote:
> > A thought on compiling, which I hope some devs will read: I was tempted to
> > push the system hard at first, with load average and jobs as high as I
> > thought I could set them. I've come to believe, though, that job control
> > by portage and /usr/bin/make is weak at very high loads, because I would
> > usually find that a few packages had failed to compile; also that some
> > complex programs were sometimes unstable. Therefore I've had to throttle
> > the system to be sure(r) of correctness. Seems a waste.

> Bear in mind a lot of systems are thermally limited and can't run at
> full pelt anyway ...

No doubt, but apparently not this box: I run it 24x7 with all 24 CPU threads 
fully loaded with floating-point calculations, which make a good deal more heat 
than 'mere' compiling with (I assume) integer arithmetic.   :)

> You might find it's actually better (and more efficient) to run at lower
> loading. Certainly following the kernel lists you get the impression
> that the CPU regularly goes into thermal throttling under heavy load,
> and also that using a couple of cores lightly is more efficient than
> using one core heavily.

See above; besides, I have to limit the load anyway when compiling, for the 
reasons I gave last time.

> It's so difficult to know what's best ... (because too many people make
> decisions based on their interests, and then when you come along their
> decisions may conflict with each other and certainly conflict with you ...)

I agree with you there, Wol, even without the parenthesis.  :)

-- 
Regards,
Peter.






Re: [gentoo-user] long compiles

2023-09-13 Thread Frank Steinmetzger
On Wed, Sep 13, 2023 at 12:50:20PM +0100, Wols Lists wrote:

> Bear in mind a lot of systems are thermally limited and can't run at full
> pelt anyway ...

Usually those are space-constrained systems like mini PCs or laptops. 
Typical desktops shouldn't be limited; even the stock CPU coolers should be 
capable of dissipating all the heat, as long as the case has enough airflow.

> You might find it's actually better (and more efficient) to run at lower
> loading. Certainly following the kernel lists you get the impression that
> the CPU regularly goes into thermal throttling under heavy load, and also
> that using a couple of cores lightly is more efficient than using one core
> heavily.

Very recent CPUs in particular tend to boost to such high clock speeds that 
they become quite inefficient. If you set a 105 W Ryzen 7700X to 65 W eco 
mode in the BIOS (which means the actual maximum power draw goes down from 
144 W to 84 W), you reduce consumption by a third but only lose ~15% in 
performance.

At very low power limits (15 W), Ryzen 5000 and 7000 CPUs are almost as 
efficient as an Apple M1.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Einstein is dead.  Newton is dead.  I’m feeling sick, too.




Re: [gentoo-user] long compiles

2023-09-13 Thread Wols Lists

On 13/09/2023 12:28, Peter Humphrey wrote:

A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I thought
I could set them. I've come to believe, though, that job control by portage
and /usr/bin/make is weak at very high loads, because I would usually find that
a few packages had failed to compile; also that some complex programs were
sometimes unstable. Therefore I've had to throttle the system to be sure(r) of
correctness. Seems a waste. Thus:


Bear in mind a lot of systems are thermally limited and can't run at 
full pelt anyway ...


You might find it's actually better (and more efficient) to run at lower 
loading. Certainly following the kernel lists you get the impression 
that the CPU regularly goes into thermal throttling under heavy load, 
and also that using a couple of cores lightly is more efficient than 
using one core heavily.


It's so difficult to know what's best ... (because too many people make 
decisions based on their interests, and then when you come along their 
decisions may conflict with each other and certainly conflict with you ...)


Cheers,
Wol



Re: [gentoo-user] long compiles

2023-09-13 Thread Peter Humphrey
On Tuesday, 12 September 2023 22:08:49 BST Wol wrote:

> There's all sorts of tricks, some work for some people, others work for
> others.

Quite so. Here I have two swap partitions: 8GB at priority 20 on NVMe and 
50GB at priority 10 on SSD. I've never noticed either of them being used, so 
I suppose I could dispense with the smaller one. When I bought the machine I 
wanted it to be as powerful as I could reasonably justify (to run BOINC 
projects, making my own small contribution to the state of knowledge in 
astrophysics), so it has 64 GB of RAM in its four slots.

Tmpfs earns its keep here. I don't set limits on them, preferring to let the 
kernel manage them itself. One tmpfs is on /tmp, the other on 
/var/tmp/portage. With all that RAM to play in, swap is rarely used, if ever.
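
For reference, a minimal fstab sketch of that arrangement (mount options 
illustrative; without a size= option the kernel caps each tmpfs at half of 
RAM, so 32G here):

# /etc/fstab -- two unlimited tmpfs mounts, each defaulting to half of RAM
tmpfs  /tmp              tmpfs  rw,nosuid,nodev,noatime                      0 0
tmpfs  /var/tmp/portage  tmpfs  rw,noatime,uid=portage,gid=portage,mode=775  0 0

Note the absence of noexec on /var/tmp/portage: builds execute configure 
scripts and test binaries from there, so a noexec mount would break many 
ebuilds.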

A thought on compiling, which I hope some devs will read: I was tempted to 
push the system hard at first, with load average and jobs as high as I thought 
I could set them. I've come to believe, though, that job control by portage 
and /usr/bin/make is weak at very high loads, because I would usually find that 
a few packages had failed to compile; also that some complex programs were 
sometimes unstable. Therefore I've had to throttle the system to be sure(r) of 
correctness. Seems a waste. Thus:

$ grep '\-j' /etc/portage/make.conf
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32 [...]"
MAKEOPTS="-j14"

That 14 will revert to its previous 12 if I find things going bump in the night 
again, or perhaps go up a bit more if not.

-- 
Regards,
Peter.