On Fri, May 12, 2023 at 10:42 AM Peter Humphrey <[email protected]> wrote:
>
> On Friday, 12 May 2023 17:58:46 BST Jack wrote:
> >
> > Again, --load-average tells emerge whether it can start a new
> > job/package, but has no control over how high the load will get based
> > on the already started jobs. If emerge starts new jobs when the load
> > is over that specified by --load-average, that does smell like a bug in
> > emerge.
>
> Hooray!
>
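For context, the usual way to cap parallelism on both levels is to set the emerge options and the make options together in /etc/portage/make.conf, since emerge's --jobs/--load-average and make's -j/-l are enforced independently. This is a minimal sketch; the particular values (10 jobs, load 10) are illustrative, not a recommendation:

```shell
# /etc/portage/make.conf (excerpt)
# emerge-level limits: how many packages build at once, and the
# 1-minute load average above which emerge starts no new package.
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=10"

# make-level limits inside each package build: parallel compile jobs,
# plus make's own load-average cap (-l).
MAKEOPTS="-j10 -l10"
```

Note that neither setting constrains build systems that ignore MAKEOPTS (e.g. cargo driving rustc), which is exactly the "if" being discussed below.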
Peter,

I agree with Jack's response, but the key word and potential issue hang on that one word - "if". The way I see it, unless you have tracked down in real time which processes are running and where the CPU usage is going, and can further be sure it's a process emerge itself started, we don't really know what is causing the problem. My concern is what happens if emerge is honoring --load-average, but you're seeing load created by some tool emerge called that doesn't understand --jobs and that emerge doesn't track at that level. Think of some Rust code being built by the Rust compiler, or a deeply nested make system.

Anyway, I had a couple of thoughts:

1) If it's really a bug, then as others have said, report it up the chain and hope for a fix.

2) If I wanted to solve the problem today(ish), I'd build a Gentoo VM in VirtualBox, dedicate some number of cores to it, build everything as binary packages, and probably run an NFS server in the VM which I mount on the host machine. I'd then update the host machine from the binary packages, and VirtualBox never uses more cores than I give it. That fix is more or less guaranteed to work.

3) As a question for the far more knowledgeable system folks, I'd ask: "Can this problem be solved by cgroups?" If I have a cgroup with 10 processors in it, can I start emerge in the host environment and then just move the emerge process ID into a cgroup that I've set up for this purpose? Isn't that what cgroups are supposed to be used for?

Anyway, just thoughts. Good luck,
Mark
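To sketch what the cgroup idea in (3) might look like: with cgroup v2 (mounted at /sys/fs/cgroup on most current systems) you can indeed create a group pinned to a subset of CPUs and move a running emerge into it; all of its descendants, including compilers emerge knows nothing about, inherit the limit. This is an untested sketch run as root; the group name "emerge-limit", the CPU range 0-9, and $EMERGE_PID are placeholders:

```shell
# Create a cgroup and enable the cpuset controller for children
mkdir /sys/fs/cgroup/emerge-limit
echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control

# Restrict the group to 10 logical CPUs (0 through 9)
echo "0-9" > /sys/fs/cgroup/emerge-limit/cpuset.cpus

# Move an already-running emerge into the group;
# future children are placed there automatically
echo "$EMERGE_PID" > /sys/fs/cgroup/emerge-limit/cgroup.procs
```

On a systemd system the same effect should be available in one line, e.g. `systemd-run --scope -p AllowedCPUs=0-9 emerge -avuDN @world`, which creates a transient scope with that cpuset for you.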

