[gentoo-user] Re: is multi-core really worth it?
On 2017-12-06 16:07, Wols Lists wrote:
> The contents of /var/tmp are expected to survive a system crash, as that
> is where vi, emacs, libreoffice et al are expected to store their
> recovery logs.

The case of vi has recently been discussed extensively on oss-security :-P

As for emacs, that's just incorrect. By default, it puts its recovery files in the same directory as the original file. But of course it can be configured differently, like everything in emacs.

--
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
To reply privately _only_ on Usenet, fetch the TXT record for the domain.
Re: [gentoo-user] Re: is multi-core really worth it?
On 28 November 2017 11:07:58 GMT+01:00, Raffaele Belardi wrote:
>Raffaele Belardi wrote:
>> Hi,
>>
>> rebuilding system and world with gcc-7.2.0 on a 6-core AMD CPU I have the
>> impression that most of the ebuilds limit parallel builds to 1, 2 or 3
>> threads. I'm aware it is only an impression, I did not spend the night
>> monitoring the process, but nevertheless every time I checked the load
>> was very low.
>>
>> Does anyone have real-world statistics of CPU usage based on a gentoo
>> world build?
>
>I graphed the number of parallel ebuilds while doing an 'emerge -e' world
>on a 4-core CPU; the graph is attached. There is an initial peak of
>ebuilds, but I assume it is fake data due to prints being delayed. Then
>there is a long interval during which there are few (~2) ebuilds running.
>This may be due to lack of data (~700 MB still had to be downloaded when I
>started the emerge) or due to dependencies. Then, after ~500 merged
>packages, the number of parallel ebuilds finally rises to something very
>close to the requested 5.
>
>Note: the graph represents the number of parallel ebuilds over time, not
>the number of parallel jobs. The latter would be more interesting but
>requires a lot more effort.
>
>Note also in the log, near the seamonkey build, that the load rises to 15
>jobs; I suppose seamonkey and two other potentially massively parallel
>jobs started with low parallelism, fooling emerge into starting all three
>of them, but then each one spawned the full -j5 jobs requested by
>MAKEOPTS. There's little emerge can do in these cases to maintain the
>load average.
>
>All of this just to convince myself that yes, it is worth it!
>
>raffaele
>
>Method:
>The relevant part of the command line:
># MAKEOPTS="-j5" EMERGE_DEFAULT_OPTS="--jobs 3 --load-average 5" emerge -e world
>on a 4-core CPU.
>In the log I substituted a +1 for every 'Emerging' and a -1 for every
>'Installing', removed the rest of the line, summed, and graphed the result.
Add the load-average part to MAKEOPTS as well, and make itself will keep the number of jobs down when the load rises.

--
Joost
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
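A minimal make.conf sketch of Joost's suggestion (the exact job and load numbers are illustrative, not a recommendation; GNU make's -l/--load-average flag stops it from starting new jobs while the system load average exceeds the limit):

```shell
# /etc/portage/make.conf (sketch; tune the numbers to your core count)
MAKEOPTS="-j5 -l5"                               # make: up to 5 jobs, none started above load 5
EMERGE_DEFAULT_OPTS="--jobs 3 --load-average 5"  # emerge: same idea at the ebuild level
```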
[gentoo-user] Re: is multi-core really worth it?
Raffaele Belardi wrote:
> Hi,
>
> rebuilding system and world with gcc-7.2.0 on a 6-core AMD CPU I have the
> impression that most of the ebuilds limit parallel builds to 1, 2 or 3
> threads. I'm aware it is only an impression, I did not spend the night
> monitoring the process, but nevertheless every time I checked the load was
> very low.
>
> Does anyone have real-world statistics of CPU usage based on a gentoo
> world build?

I graphed the number of parallel ebuilds while doing an 'emerge -e' world on a 4-core CPU; the graph is attached. There is an initial peak of ebuilds, but I assume it is fake data due to prints being delayed. Then there is a long interval during which there are few (~2) ebuilds running. This may be due to lack of data (~700 MB still had to be downloaded when I started the emerge) or due to dependencies. Then, after ~500 merged packages, the number of parallel ebuilds finally rises to something very close to the requested 5.

Note: the graph represents the number of parallel ebuilds over time, not the number of parallel jobs. The latter would be more interesting but requires a lot more effort.

Note also in the log, near the seamonkey build, that the load rises to 15 jobs; I suppose seamonkey and two other potentially massively parallel jobs started with low parallelism, fooling emerge into starting all three of them, but then each one spawned the full -j5 jobs requested by MAKEOPTS. There's little emerge can do in these cases to maintain the load average.

All of this just to convince myself that yes, it is worth it!

raffaele

Method:
The relevant part of the command line:
# MAKEOPTS="-j5" EMERGE_DEFAULT_OPTS="--jobs 3 --load-average 5" emerge -e world
on a 4-core CPU.
In the log I substituted a +1 for every 'Emerging' and a -1 for every 'Installing', removed the rest of the line, summed, and graphed the result.

jobs3-avg5.txt.orig.gz
Description: application/gzip
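Raffaele's counting method can be sketched in a few lines of Python; the emerge log lines below are hypothetical stand-ins for real output:

```python
# Count concurrently running ebuilds from an emerge log:
# +1 for every 'Emerging' line, -1 for every 'Installing' line,
# and the running sum is the number of parallel ebuilds over time.
def parallel_ebuilds(log_lines):
    count = 0
    series = []
    for line in log_lines:
        if "Emerging" in line:
            count += 1
        elif "Installing" in line:
            count -= 1
        series.append(count)
    return series

# Hypothetical log excerpt (package names made up)
log = [
    ">>> Emerging (1 of 3) sys-apps/foo-1.0",
    ">>> Emerging (2 of 3) sys-apps/bar-2.1",
    ">>> Installing (1 of 3) sys-apps/foo-1.0",
    ">>> Emerging (3 of 3) sys-apps/baz-0.9",
]
print(parallel_ebuilds(log))  # prints [1, 2, 1, 2]
```

Graphing that series against line number (or timestamp, if the log has one) reproduces the plot described above.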
[gentoo-user] Re: is multi-core really worth it?
David Haller wrote:
> How is that meson_options.txt maintained? Automatically or by hand?
> If the former: yay!

No, the former would be bad, since it would require an analogue of an "autoreconf" run, which is exactly what meson avoids.

> If the latter, treat it as non-existent...

I think you misunderstand: it is not an optional text file. It is the (only) way options are declared for meson. Options not listed are not accepted.
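To illustrate the point, a hypothetical meson_options.txt might look roughly like this (the option names are made up); each option() call both declares the option and documents it, which is what makes "cat meson_options.txt" the analogue of "./configure --help":

```meson
# meson_options.txt (hypothetical project)
option('use_foo', type : 'boolean', value : true,
       description : 'Build with foo support')
option('backend', type : 'combo', choices : ['gtk', 'qt'], value : 'gtk',
       description : 'Which UI backend to build')
```

Users then set these at configure time, e.g. `meson setup build -Duse_foo=false`; an option not declared in this file is rejected.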
Re: [gentoo-user] Re: is multi-core really worth it?
Hello,

On Wed, 22 Nov 2017, Martin Vaeth wrote:
>David Haller wrote:
>> autotools is _by far_ the best, both from a user's and a packager's view.
>
>I do not agree. Its main advantage is that it is compatible with
>most existing unix systems (but I am already not so sure whether
>this also holds if you also want to compile for windows, powerpc,
>etc.)

Aye.

>> cmake [...] qmake
>
>I agree, these are horrible. The best build system currently
>appears to be meson.
>
>> equivalent to "./configure --help"
>
>For meson, it is "cat meson_options.txt", and there is a clear
>distinction between general options and project-specific ones.

I've not yet "encountered" meson... How is that meson_options.txt maintained? Automatically or by hand? If the former: yay! If the latter, treat it as non-existent...

>> transparent and easily hackable
>
>Hacking autotools is a nightmare: things are often hidden in
>subprojects, sometimes combined with project-specific hacks,
>generating/updating necessary configure files somewhere within the
>project's tree, etc.

Well, now you're quite exaggerating! Projects using subprojects with a distinct autotools stack are broken by design, and that is _NOT_ a fault of autotools. And it's not "often". And then, usually it's projects pulling in e.g. a "local" copy of e.g. "ffmpeg". _That_ is broken and a nightmare. But then again, it shows the "hackability" of autotools, with all its "bad" side effects. And on that I _DO_ agree quite affirmatively! But then again, nobody keeps devs from using local copies of qmake/cmake/meson/whatnot subprojects in an otherwise autotools/cmake/qmake/meson project, and you'll have the same horrors...

>And after each change you have to run autoreconf, often with
>compatibility problems of autoconf/automake/gettext/... versions etc.

You get used to that. And you can get around it by directly patching the configure and Makefile.in files. It _IS_ up to you! And that's one of the points I like about autotools.
You can even go further and first run ./configure and _then_ patch the Makefiles and whatever else, like the generated config.h. Or just set some ENV vars or defines. That's the flexibility of autotools: you can choose quite exactly where you want to mess about with the delivered, self-contained build. You can change (or "mess with") _ANYTHING AT ANY STAGE_ of the process. As bad as autotools is, that's something I've not seen with any other build system yet, be they self-contained (IIRC: bjam, anyone?) or generators like imake/qmake. And cmake, what's that? It generates makefiles, but ... *gah*

>With meson, there is an absolutely strict separation between
>the distributed files and the generated/output files, which are
>always in a fresh dir (and thus are _always_ produced).
>When hacking up, you need to modify only the *.meson files
>and do not have to worry about re-generating any other data.
>
>This sounds like I am a meson fanboy. I am not; actually, I dislike
>a lot of its design decisions. But compared to autotools, cmake,
>and qmake, it did a lot of things right.

Welcome to the club of build-system haters! :) I do not know meson yet, but, as far as I remember, it's rather "closed" to interference from "packagers". I think I will rectify my ignorance. But tell me:

- can you influence large parts of the build by just using ENV vars or easy options? 'DESTDIR="/foo"' is the prime example, but it doesn't count, that's _too_ obvious. With autotools, you're free to fiddle with any var in the generated Makefiles...
- is there a project-specific './configure --help' (see above)? Python's setuptools have that, IIRC, clumsily via 'python setup.py help' or some such, and that's usually rather disappointing regarding project-specific options...
- etc. pp. I'm too tired...

As a packager, I just love autotools. As an author, autotools is a piece of cake (you need _very_ few lines in your configure.ac/Makefile.am, BTDT, esp.
if you put your stuff into subdirs and can use make's wildcard function for source/header etc. stuff). Well, plainly implemented, Makefile.am files are just a few lines; with some clever "includes" (see the magic libreoffice does with "plain" makefiles) you could probably prune them down to 2-3 lines or so ...

But! Yes, I will look into meson and try building one of my "fully autotooled" projects as a meson project. At least I'll learn how to munge meson stuff to my liking :) I'm always curious about new build systems as a packager/ebuilder, and I had a case recently where I got a build failure due to some build-system stuff, and I could not track it down. And I have about 15-20 years of experience compiling and packaging all kinds of weird stuff. But this time, no go. Until I actually used 'strace -e file'! WHAT THE *FSCK*! ISTR it might have been cmake, pulling in some file from under /usr/. That's _NOT_ transparent. Debugging the build via strace? Hey? Anyone home? And no, no debug option did help. And when it comes to packaging, "I'm a Vet,
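The "configure first, patch the generated files afterwards" workflow described above can be sketched like this (package, define, and paths are hypothetical; this is an illustration of the approach, not a recipe for any particular project):

```shell
# Generate the Makefiles and config.h once
./configure --prefix=/usr

# Then patch the *generated* files before building, e.g. flip a feature
# define that configure detected but we don't want:
sed -i 's|^#define HAVE_FOO 1|/* #undef HAVE_FOO */|' config.h

# ...or override variables at make time without touching any file at all:
make CFLAGS="-O2 -pipe"
make DESTDIR=/tmp/stage install
```

The same trick works on the generated Makefile and Makefile.in; nothing re-runs autoreconf behind your back unless the maintainer-mode rules are enabled.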
[gentoo-user] Re: is multi-core really worth it?
David Haller wrote:
> autotools is _by far_ the best, both from a user's and a packager's view.

I do not agree. Its main advantage is that it is compatible with most existing unix systems (but I am already not so sure whether this also holds if you also want to compile for windows, powerpc, etc.)

> cmake [...] qmake

I agree, these are horrible. The best build system currently appears to be meson.

> equivalent to "./configure --help"

For meson, it is "cat meson_options.txt", and there is a clear distinction between general options and project-specific ones.

> transparent and easily hackable

Hacking autotools is a nightmare: things are often hidden in subprojects, sometimes combined with project-specific hacks, generating/updating necessary configure files somewhere within the project's tree, etc. And after each change you have to run autoreconf, often with compatibility problems of autoconf/automake/gettext/... versions etc.

With meson, there is an absolutely strict separation between the distributed files and the generated/output files, which are always in a fresh dir (and thus are _always_ produced). When hacking up, you need to modify only the *.meson files and do not have to worry about re-generating any other data.

This sounds like I am a meson fanboy. I am not; actually, I dislike a lot of its design decisions. But compared to autotools, cmake, and qmake, it did a lot of things right.