Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
Sorry Matthew, forgot to reply to this one. On Wed, Apr 05, 2017 at 07:01:35PM +0200, Matthew Rezny wrote: > On Wednesday 05 April 2017 16:15:41 Alexey Dokuchaev wrote: > > ... > > Hmm, I don't quite get it: shouldn't static linking actually increase > > the binaries (and thus the package) size? > > I might have reversed static and shared somewhere in the linking > explanation, or not properly described the situation. [...] Understood, makes sense now. > There was a brief period in which llvm39 was fully switched to dynamic > linking, which made it considerably smaller but caused runtime problems > (and was also likely to be slower). That still sounds like the most sane way to go; provided those problems are/would be fixed, I hope for that switch to happen again one day. (I somewhat doubt that "slower" was noticeable enough to worry about.) > The best solution to cut rebuild time for LLVM is ccache. Indeed, ccache helps greatly. Now that I've managed to cut down package times as well, situation with LLVM ports no longer looks as bad as I originally saw it; thank you. ./danfe ___ freebsd-toolchain@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-toolchain To unsubscribe, send any mail to "freebsd-toolchain-unsubscr...@freebsd.org"
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Wed, Apr 05, 2017 at 08:46:22PM +0200, Mathieu Arnold wrote: > Le 05/04/2017 à 19:20, Alexey Dokuchaev a écrit : > > On Wed, Apr 05, 2017 at 07:12:06PM +0200, Mathieu Arnold wrote: > >> Le 05/04/2017 à 18:15, Alexey Dokuchaev a écrit : > >>> ... > >>> That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. > >> > >> So, you are comparing the size of the llvm39 package with the size of > >> the llvm40 after extraction ? > > > > Ha, didn't realize it, I'm so dumb. What [is] the size of llvm40-*.txz > > then? I don't have it cached locally yet... > > On my builds: > > -rw-r--r-- 6 nobody wheel 256105968 4 avr. 17:54 llvm39-3.9.1_4.txz > -rw-r--r-- 6 nobody wheel 304951340 4 avr. 18:02 llvm40-4.0.0_2.txz Thanks, now it all makes sense, sorry for the confusion. ./danfe
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Apr-5, at 10:20 AM, Alexey Dokuchaev wrote: > On Wed, Apr 05, 2017 at 07:12:06PM +0200, Mathieu Arnold wrote: >> Le 05/04/2017 à 18:15, Alexey Dokuchaev a écrit : >>> ... >>> That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. >> >> So, you are comparing the size of the llvm39 package with the size of >> the llvm40 after extraction ? > > Ha, didn't realize it, I'm so dumb. What is the size of llvm40-*.txz then? > I don't have it cached locally yet... Someone else provided a comparison. But as far as the installed size goes: it looks like pkg delete's report of the size is truncated or rounded to an integral GiByte count for llvm40. pkg info shows (reminder: powerpc64 context): # pkg info llvm40 | grep "Flat size" Flat size : 1.38GiB So I did not pick a good estimate of the installed size to report for the no-WITH_DEBUG variant. === Mark Millard markmi at dsl-only.net
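Mark's observation above, that pkg delete printed a whole-GiB figure while pkg info reports 1.38GiB, is consistent with simple integer truncation. A quick awk sketch (the 1.38 value is taken from the pkg info output quoted above; the truncation behavior is an assumption about how pkg rounds):

```shell
# Illustrate how truncating the 1.38 GiB flat size to a whole
# number of GiB yields the "1 GiB" figure that pkg delete printed.
awk 'BEGIN {
    flat = 1.38                          # GiB, from "pkg info llvm40"
    printf "truncated: %d GiB\n", int(flat)
}'
# prints: truncated: 1 GiB
```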
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
Le 05/04/2017 à 19:20, Alexey Dokuchaev a écrit : > On Wed, Apr 05, 2017 at 07:12:06PM +0200, Mathieu Arnold wrote: >> Le 05/04/2017 à 18:15, Alexey Dokuchaev a écrit : >>> ... >>> That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. >> So, you are comparing the size of the llvm39 package with the size of >> the llvm40 after extraction ? > Ha, didn't realize it, I'm so dumb. What is the size of llvm40-*.txz then? > I don't have it cached locally yet... On my builds: -rw-r--r-- 6 nobody wheel 256105968 4 avr. 17:54 llvm39-3.9.1_4.txz -rw-r--r-- 6 nobody wheel 304951340 4 avr. 18:02 llvm40-4.0.0_2.txz -- Mathieu Arnold
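As a cross-check of the two package sizes quoted above, the compressed-package growth from llvm39 to llvm40 works out to roughly 19%, matching the "<20%" installed-size growth reported elsewhere in the thread (awk is used here only for the arithmetic):

```shell
# Relative growth between the two .txz sizes listed above.
awk 'BEGIN {
    old = 256105968                      # llvm39-3.9.1_4.txz, bytes
    new = 304951340                      # llvm40-4.0.0_2.txz, bytes
    printf "growth: %.1f%%\n", (new - old) * 100.0 / old
}'
# prints: growth: 19.1%
```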
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Wed, Apr 05, 2017 at 07:12:06PM +0200, Mathieu Arnold wrote: > Le 05/04/2017 à 18:15, Alexey Dokuchaev a écrit : > > ... > > That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. > > So, you are comparing the size of the llvm39 package with the size of > the llvm40 after extraction ? Ha, didn't realize it, I'm so dumb. What is the size of llvm40-*.txz then? I don't have it cached locally yet... ./danfe
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Wednesday 05 April 2017 19:44:51 Slawa Olhovchenkov wrote: > On Wed, Apr 05, 2017 at 04:15:41PM +, Alexey Dokuchaev wrote: > > > I've also tried without WITH_DEBUG= and now. . . > > > > > > # pkg delete llvm40 > > > Checking integrity... done (0 conflicting) > > > Deinstallation has been requested for the following 1 packages (of 0 > > > packages in the universe): > > > > > > Installed packages to be REMOVED: > > > llvm40-4.0.0 > > > > > > Number of packages to be removed: 1 > > > > > > The operation will free 1 GiB. > > > > That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. > > I'm surely looking forward to the modularization of the LLVM port; rebuilding it > > every time becomes a real PITA given that the X11 stack requires it. :-( > > What is the real reason X11 requires llvm? > I am asking about the run-time dependencies: > > # pkg info -r llvm39 > llvm39-3.9.1_4: > libEGL-13.0.6 > dri-13.0.6,2 > > # ldd /usr/local/lib/libXvMCr600.so > /usr/local/lib/libXvMCr600.so: > [...] > libLLVM-3.9.so => /usr/local/llvm39/lib/libLLVM-3.9.so (0x803e0) > libLTO.so => /usr/local/llvm39/lib/../lib/libLTO.so (0x80820) [...] > > # ls -lh /usr/local/llvm39/lib/libLLVM-3.9.so > /usr/local/llvm39/lib/../lib/libLTO.so -rwxr-xr-x 1 root wheel 38M Apr > 2 18:18 /usr/local/llvm39/lib/../lib/libLTO.so -rwxr-xr-x 1 root wheel > 47M Apr 2 18:18 /usr/local/llvm39/lib/libLLVM-3.9.so > > Does libXvMCr600 really do run-time LLVM interpretation and link-time > optimization?! > > Also, I don't see any real dependence of libEGL-13.0.6 on llvm. Yes, Mesa really uses LLVM at runtime for shader compilation/optimization, and Xorg depends on Mesa for GLX, etc.
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
Le 05/04/2017 à 18:15, Alexey Dokuchaev a écrit : > On Thu, Mar 30, 2017 at 07:26:43PM +0200, Matthew Rezny wrote: >> LLVM 3.8 introduced the option to build a shared LLVM library, which is >> what Mesa needs for use at runtime (for e.g. compiling shaders), separate >> from linking to it. Previous versions only had one option, if the library >> was built then all the LLVM binaries were statically linked to it. [...] >> >> llvm{35,36,37} are statically linked and thus smaller than normal. llvm38 >> switched to dynamic linking, the default, thus the size grew. > Hmm, I don't quite get it: shouldn't static linking actually increase the > binaries (and thus the package) size? > >> I assume llvm40 will be a bit bigger, but do not expect to see another >> jump as you've observed. > As Mark Millard reports: > >> I've also tried without WITH_DEBUG= and now. . . >> >> # pkg delete llvm40 >> Checking integrity... done (0 conflicting) >> Deinstallation has been requested for the following 1 packages (of 0 >> packages in the universe): >> >> Installed packages to be REMOVED: >> llvm40-4.0.0 >> >> Number of packages to be removed: 1 >> >> The operation will free 1 GiB. > That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. So, you are comparing the size of the llvm39 package with the size of the llvm40 after extraction ? -- Mathieu Arnold
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Wednesday 05 April 2017 16:15:41 Alexey Dokuchaev wrote: > On Thu, Mar 30, 2017 at 07:26:43PM +0200, Matthew Rezny wrote: > > LLVM 3.8 introduced the option to build a shared LLVM library, which is > > what Mesa needs for use at runtime (for e.g. compiling shaders), separate > > from linking to it. Previous versions only had one option, if the library > > was built then all the LLVM binaries were statically linked to it. [...] > > > > llvm{35,36,37} are statically linked and thus smaller than normal. llvm38 > > switched to dynamic linking, the default, thus the size grew. > > Hmm, I don't quite get it: shouldn't static linking actually increase the > binaries (and thus the package) size? > I might have reversed static and shared somewhere in the linking explanation, or not properly described the situation. Versions prior to 3.8 could either build libLLVM as a static library and link all the LLVM binaries to that (recommended), or build it as a shared library and link the LLVM binaries to that (recommended for development only, but Mesa needs libLLVM.so). LLVM 3.8 introduced the 3rd option: build the shared library for external use, i.e. Mesa, but link the LLVM binaries to the libLLVM*.a bits that were used to build the shared library. llvm37 was built the non-recommended way for the benefit of Mesa; the LLVM binaries were linked to the shared library that Mesa used. llvm38 turned off building/linking of the shared library, so there would be some increase since each LLVM binary now had that portion statically linked. llvm39 turned on building of the shared library but did not enable linking with it, so the LLVM binaries remain statically linked against that part, meaning the package grows by the size of the shared library that is installed but not used by LLVM itself.
Those were changes to a portion, not a complete change between static and shared linking for the whole package, so I was somewhat surprised by the size difference you noted, but of course I had forgotten that all the parts were collapsed into the llvm port. There was a brief period in which llvm39 was fully switched to dynamic linking, which made it considerably smaller but caused runtime problems (and was also likely to be slower). > > I assume llvm40 will be a bit bigger, but do not expect to see another > > jump as you've observed. > > As Mark Millard reports: > > I've also tried without WITH_DEBUG= and now. . . > > > > # pkg delete llvm40 > > Checking integrity... done (0 conflicting) > > Deinstallation has been requested for the following 1 packages (of 0 > > packages in the universe): > > > > Installed packages to be REMOVED: > > llvm40-4.0.0 > > > > Number of packages to be removed: 1 > > > > The operation will free 1 GiB. > > That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. > I'm surely looking forward to the modularization of the LLVM port; rebuilding it > every time becomes a real PITA given that the X11 stack requires it. :-( > > ./danfe I have both llvm39 and llvm40 installed here (amd64) with all options enabled and without any WITH_DEBUG. According to pkg, the flat (installed) size of llvm39 is 1.10GB and llvm40 is 1.31GB, so not a huge difference (<20%) but still steady growth. The best solution to cut rebuild time for LLVM is ccache.
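For anyone wanting to follow the ccache suggestion, the ports framework exposes a knob for it. A minimal make.conf sketch (assumptions: devel/ccache is installed, and the WITH_CCACHE_BUILD knob is available in your ports tree; CCACHE_DIR is optional and the path shown is only an example):

```makefile
# /etc/make.conf -- enable ccache for port builds (sketch)
WITH_CCACHE_BUILD=yes
# Optional: move the cache off the default ${HOME}/.ccache location
#CCACHE_DIR=/var/cache/ccache
```

With this enabled, a second build of a large port such as devel/llvm40 with unchanged sources should spend most of its compile phase in cache hits.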
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Wed, Apr 05, 2017 at 04:15:41PM +, Alexey Dokuchaev wrote: > > I've also tried without WITH_DEBUG= and now. . . > > > > # pkg delete llvm40 > > Checking integrity... done (0 conflicting) > > Deinstallation has been requested for the following 1 packages (of 0 > > packages in the universe): > > > > Installed packages to be REMOVED: > > llvm40-4.0.0 > > > > Number of packages to be removed: 1 > > > > The operation will free 1 GiB. > > That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. > I'm surely looking forward to the modularization of the LLVM port; rebuilding it > every time becomes a real PITA given that the X11 stack requires it. :-( What is the real reason X11 requires llvm? I am asking about the run-time dependencies: # pkg info -r llvm39 llvm39-3.9.1_4: libEGL-13.0.6 dri-13.0.6,2 # ldd /usr/local/lib/libXvMCr600.so /usr/local/lib/libXvMCr600.so: [...] libLLVM-3.9.so => /usr/local/llvm39/lib/libLLVM-3.9.so (0x803e0) libLTO.so => /usr/local/llvm39/lib/../lib/libLTO.so (0x80820) [...] # ls -lh /usr/local/llvm39/lib/libLLVM-3.9.so /usr/local/llvm39/lib/../lib/libLTO.so -rwxr-xr-x 1 root wheel 38M Apr 2 18:18 /usr/local/llvm39/lib/../lib/libLTO.so -rwxr-xr-x 1 root wheel 47M Apr 2 18:18 /usr/local/llvm39/lib/libLLVM-3.9.so Does libXvMCr600 really do run-time LLVM interpretation and link-time optimization?! Also, I don't see any real dependence of libEGL-13.0.6 on llvm.
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Thu, Mar 30, 2017 at 07:26:43PM +0200, Matthew Rezny wrote: > LLVM 3.8 introduced the option to build a shared LLVM library, which is > what Mesa needs for use at runtime (for e.g. compiling shaders), separate > from linking to it. Previous versions only had one option, if the library > was built then all the LLVM binaries were statically linked to it. [...] > > llvm{35,36,37} are statically linked and thus smaller than normal. llvm38 > switched to dynamic linking, the default, thus the size grew. Hmm, I don't quite get it: shouldn't static linking actually increase the binaries (and thus the package) size? > I assume llvm40 will be a bit bigger, but do not expect to see another > jump as you've observed. As Mark Millard reports: > I've also tried without WITH_DEBUG= and now. . . > > # pkg delete llvm40 > Checking integrity... done (0 conflicting) > Deinstallation has been requested for the following 1 packages (of 0 > packages in the universe): > > Installed packages to be REMOVED: > llvm40-4.0.0 > > Number of packages to be removed: 1 > > The operation will free 1 GiB. That 1G looks like a big jump from 259M of llvm39-3.9.1_1.txz to me. I'm surely looking forward to the modularization of the LLVM port; rebuilding it every time becomes a real PITA given that the X11 stack requires it. :-( ./danfe
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Apr-1, at 3:51 AM, Mark Millard wrote: > On 2017-Mar-31, at 4:51 PM, Mark Millard wrote: > >> On 2017-Mar-30, at 7:51 PM, Mark Millard wrote: >> >>> On 2017-Mar-30, at 1:22 PM, Mark Millard wrote: >>> Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique would not change the "WITNESS and INVARIANTS"-like part of the issue. In fact if WITH_DEBUG= causes the cmake debug-style llvm40 build ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not make any difference: separate enforcing of lack of optimization. But just to see what results I've done "pkg delete llvm40" and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= and its supporting code in place in addition to using WITH_DEBUG= as the type of build fro FreeBSD's viewpoint. If you know that the test is a waste of machine cycles, you can let me know if you want. >>> >>> The experiment showed that ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG >>> use made no difference for devel/llvm40 so devel/llvm40 itself >>> has to change such as what Dimitry Andric reported separately >>> as a working change to the Makefile . >>> >>> (ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG would still have its uses >>> for various other ports.) >> >> I've now tried with both ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG and: > > I may have had a textual error that prevented > ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG from even potentially > contributing. So I'll re-run this test. > > For now I presume that what I reported was okay and so > I continue to refer to these figures later below. The retry got the same 42 GiByte and 102 GiByte sizes. (And again I was not monitoring the swap space usage.) So functionality like ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG (keeping optimization flags in CFLAGS) does not contribute to devel/llvm40's handling. Apparently the CMAKE_BUILD_TYPE has full control over such and RelWithDebInfo still makes for a massive build, though not as big as for DEBUG. 
For my context I've chosen to go with: # # From a local /usr/ports/Mk/bsd.port.mk extension: ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= # .if ${.CURDIR:M*/devel/llvm*} #WITH_DEBUG= .else WITH_DEBUG= .endif WITH_DEBUG_FILES= (where ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG is from a local change to /usr/ports/Mk/bsd.port.mk but is not justified via devel/llvm* ports but via behavior for most other ports.) >> # svnlite diff /usr/ports/devel/llvm40/ >> Index: /usr/ports/devel/llvm40/Makefile >> === >> --- /usr/ports/devel/llvm40/Makefile (revision 436747) >> +++ /usr/ports/devel/llvm40/Makefile (working copy) >> @@ -236,6 +236,11 @@ >> >> .include >> >> +.if defined(WITH_DEBUG) >> +CMAKE_BUILD_TYPE= RelWithDebInfo >> +STRIP= >> +.endif >> + >> _CRTLIBDIR= >> ${LLVM_PREFIX:S|${PREFIX}/||}/lib/clang/${LLVM_RELEASE}/lib/freebsd >> .if ${ARCH} == "amd64" >> _COMPILER_RT_LIBS= \ >> >> >> >> pkg delete after the build reports: >> >> Installed packages to be REMOVED: >> llvm40-4.0.0 >> >> Number of packages to be removed: 1 >> >> The operation will free 42 GiB. >> >> So down by 7 GiBytes from 49 GiBytes. >> >> (I did not actually delete it.) >> >> Also: >> >> # du -sg /usr/obj/portswork/usr/ports/devel/llvm40 >> 102 /usr/obj/portswork/usr/ports/devel/llvm40 >> >> which is down by 16 GiBytes from 118 GiBytes. >> >> Reminder: These are from portmaster -DK so no >> cleanup after the build, which is what leaves >> the source code and such around in case of >> needing to look at a problem. >> >> (102+42) GiBytes == 146 GiBytes. >> vs. >> (118+49) GiBytes == 167 GiBytes. >> >> So a difference of 21 GiBytes (or so). >> >> But that is for everything in each case (and >> WITH_DEBUG= in use): >> >> # more /var/db/ports/devel_llvm40/options >> # This file is auto-generated by 'make config'. 
>> # Options for llvm40-4.0.0.r4 >> _OPTIONS_READ=llvm40-4.0.0.r4 >> _FILE_COMPLETE_OPTIONS_LIST=CLANG DOCS EXTRAS LIT LLD LLDB >> OPTIONS_FILE_SET+=CLANG >> OPTIONS_FILE_SET+=DOCS >> OPTIONS_FILE_SET+=EXTRAS >> OPTIONS_FILE_SET+=LIT >> OPTIONS_FILE_SET+=LLD >> OPTIONS_FILE_SET+=LLDB >> >> So avoiding WITH_DEBUG= and/or various build options >> is still the major way of avoiding use of lots of space >> if it is an issue. >> >> >> >> Why no RAM+SWAP total report this time: >> >> As far as I know FreeBSD does not track or report peak >> swap-space usage since the last boot. And, unfortunately >> I was not around to just sit and watch a top display this >> time and I did not set up any periodic recording into a >> file. >> >> That is why I've not reported on the RAM+SWAP total >> this time. It will have to be another experiment >> some other time. >> >> [I do wish FreeBSD had a way of reporting peak swap-space >> usage.] > > I've also tried without WITH_DEBUG= and now. . . > > > # pkg delete llvm40 > Checking integrity... done (0 conflicting) > Deinstallation has been requested for the follo
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-31, at 4:51 PM, Mark Millard wrote: > On 2017-Mar-30, at 7:51 PM, Mark Millard wrote: > >> On 2017-Mar-30, at 1:22 PM, Mark Millard wrote: >> >>> Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique >>> would not change the "WITNESS and INVARIANTS"-like part of the >>> issue. In fact if WITH_DEBUG= causes the cmake debug-style >>> llvm40 build ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not >>> make any difference: separate enforcing of lack of optimization. >>> >>> But just to see what results I've done "pkg delete llvm40" >>> and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= >>> and its supporting code in place in addition to using WITH_DEBUG= >>> as the type of build fro FreeBSD's viewpoint. >>> >>> If you know that the test is a waste of machine cycles, you can >>> let me know if you want. >> >> The experiment showed that ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG >> use made no difference for devel/llvm40 so devel/llvm40 itself >> has to change such as what Dimitry Andric reported separately >> as a working change to the Makefile . >> >> (ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG would still have its uses >> for various other ports.) > > I've now tried with both ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG and: I may have had a textual error that prevented ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG from even potentially contributing. So I'll re-run this test. For now I presume that what I reported was okay and so I continue to refer to these figures later below. 
> # svnlite diff /usr/ports/devel/llvm40/ > Index: /usr/ports/devel/llvm40/Makefile > === > --- /usr/ports/devel/llvm40/Makefile (revision 436747) > +++ /usr/ports/devel/llvm40/Makefile (working copy) > @@ -236,6 +236,11 @@ > > .include > > +.if defined(WITH_DEBUG) > +CMAKE_BUILD_TYPE=RelWithDebInfo > +STRIP= > +.endif > + > _CRTLIBDIR= > ${LLVM_PREFIX:S|${PREFIX}/||}/lib/clang/${LLVM_RELEASE}/lib/freebsd > .if ${ARCH} == "amd64" > _COMPILER_RT_LIBS= \ > > > > pkg delete after the build reports: > > Installed packages to be REMOVED: > llvm40-4.0.0 > > Number of packages to be removed: 1 > > The operation will free 42 GiB. > > So down by 7 GiBytes from 49 GiBytes. > > (I did not actually delete it.) > > Also: > > # du -sg /usr/obj/portswork/usr/ports/devel/llvm40 > 102 /usr/obj/portswork/usr/ports/devel/llvm40 > > which is down by 16 GiBytes from 118 GiBytes. > > Reminder: These are from portmaster -DK so no > cleanup after the build, which is what leaves > the source code and such around in case of > needing to look at a problem. > > (102+42) GiBytes == 146 GiBytes. > vs. > (118+49) GiBytes == 167 GiBytes. > > So a difference of 21 GiBytes (or so). > > But that is for everything in each case (and > WITH_DEBUG= in use): > > # more /var/db/ports/devel_llvm40/options > # This file is auto-generated by 'make config'. > # Options for llvm40-4.0.0.r4 > _OPTIONS_READ=llvm40-4.0.0.r4 > _FILE_COMPLETE_OPTIONS_LIST=CLANG DOCS EXTRAS LIT LLD LLDB > OPTIONS_FILE_SET+=CLANG > OPTIONS_FILE_SET+=DOCS > OPTIONS_FILE_SET+=EXTRAS > OPTIONS_FILE_SET+=LIT > OPTIONS_FILE_SET+=LLD > OPTIONS_FILE_SET+=LLDB > > So avoiding WITH_DEBUG= and/or various build options > is still the major way of avoiding use of lots of space > if it is an issue. > > > > Why no RAM+SWAP total report this time: > > As far as I know FreeBSD does not track or report peak > swap-space usage since the last boot. 
And, unfortunately > I was not around to just sit and watch a top display this > time and I did not set up any periodic recording into a > file. > > That is why I've not reported on the RAM+SWAP total > this time. It will have to be another experiment > some other time. > > [I do wish FreeBSD had a way of reporting peak swap-space > usage.] I've also tried without WITH_DEBUG= and now. . . # pkg delete llvm40 Checking integrity... done (0 conflicting) Deinstallation has been requested for the following 1 packages (of 0 packages in the universe): Installed packages to be REMOVED: llvm40-4.0.0 Number of packages to be removed: 1 The operation will free 1 GiB. Proceed with deinstalling packages? [y/N]: n # du -sg /usr/obj/portswork/usr/ports/devel/llvm40/ 5 /usr/obj/portswork/usr/ports/devel/llvm40/ So the alternatives (with everything built each time): (5+1)GiBytes == 6 GiBytes. (without WITH_DEBUG=) vs. (102+42) GiBytes == 146 GiBytes. (WITH_DEBUG= but the adjusted llvm40/Makefiele) vs. (118+49) GiBytes == 167 GiBytes. (WITH_DEBUG= with Makefile adjustment) I'll likely end up having /etc/make.conf contain something like the following for most or all of my FreeBSD environments: # # From a local /usr/ports/Mk/bsd.port.mk extension: ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= # .if ${.CURDIR:M*/devel/llvm*} #WITH_DEBUG= .else WITH_DEBUG= .endif WITH_DEBUG_FILES= Along with using: # svnlite diff /usr/ports/Mk/ Index: /usr/ports/Mk/bsd.port.m
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-30, at 7:51 PM, Mark Millard wrote: > On 2017-Mar-30, at 1:22 PM, Mark Millard wrote: > >> Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique >> would not change the "WITNESS and INVARIANTS"-like part of the >> issue. In fact if WITH_DEBUG= causes the cmake debug-style >> llvm40 build ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not >> make any difference: separate enforcing of lack of optimization. >> >> But just to see what results I've done "pkg delete llvm40" >> and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= >> and its supporting code in place in addition to using WITH_DEBUG= >> as the type of build fro FreeBSD's viewpoint. >> >> If you know that the test is a waste of machine cycles, you can >> let me know if you want. > > The experiment showed that ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG > use made no difference for devel/llvm40 so devel/llvm40 itself > has to change such as what Dimitry Andric reported separately > as a working change to the Makefile . > > (ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG would still have its uses > for various other ports.) I've now tried with both ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG and: # svnlite diff /usr/ports/devel/llvm40/ Index: /usr/ports/devel/llvm40/Makefile === --- /usr/ports/devel/llvm40/Makefile(revision 436747) +++ /usr/ports/devel/llvm40/Makefile(working copy) @@ -236,6 +236,11 @@ .include +.if defined(WITH_DEBUG) +CMAKE_BUILD_TYPE= RelWithDebInfo +STRIP= +.endif + _CRTLIBDIR= ${LLVM_PREFIX:S|${PREFIX}/||}/lib/clang/${LLVM_RELEASE}/lib/freebsd .if ${ARCH} == "amd64" _COMPILER_RT_LIBS= \ pkg delete after the build reports: Installed packages to be REMOVED: llvm40-4.0.0 Number of packages to be removed: 1 The operation will free 42 GiB. So down by 7 GiBytes from 49 GiBytes. (I did not actually delete it.) Also: # du -sg /usr/obj/portswork/usr/ports/devel/llvm40 102 /usr/obj/portswork/usr/ports/devel/llvm40 which is down by 16 GiBytes from 118 GiBytes. 
Reminder: These are from portmaster -DK so no cleanup after the build, which is what leaves the source code and such around in case of needing to look at a problem. (102+42) GiBytes == 146 GiBytes. vs. (118+49) GiBytes == 167 GiBytes. So a difference of 21 GiBytes (or so). But that is for everything in each case (and WITH_DEBUG= in use): # more /var/db/ports/devel_llvm40/options # This file is auto-generated by 'make config'. # Options for llvm40-4.0.0.r4 _OPTIONS_READ=llvm40-4.0.0.r4 _FILE_COMPLETE_OPTIONS_LIST=CLANG DOCS EXTRAS LIT LLD LLDB OPTIONS_FILE_SET+=CLANG OPTIONS_FILE_SET+=DOCS OPTIONS_FILE_SET+=EXTRAS OPTIONS_FILE_SET+=LIT OPTIONS_FILE_SET+=LLD OPTIONS_FILE_SET+=LLDB So avoiding WITH_DEBUG= and/or various build options is still the major way of avoiding use of lots of space if it is an issue. Why no RAM+SWAP total report this time: As far as I know FreeBSD does not track or report peak swap-space usage since the last boot. And, unfortunately I was not around to just sit and watch a top display this time and I did not set up any periodic recording into a file. That is why I've not reported on the RAM+SWAP total this time. It will have to be another experiment some other time. [I do wish FreeBSD had a way of reporting peak swap-space usage.] === Mark Millard markmi at dsl-only.net
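On the wish for peak swap-space reporting: one workaround is a small sh loop that keeps a running maximum of a sampled value. The sample() function below is a placeholder so the sketch is self-contained; on FreeBSD you would replace it with a real probe, e.g. parsing swapinfo -k output (the exact field to extract is an assumption about your swapinfo format). The max-tracking loop itself is portable POSIX sh:

```shell
#!/bin/sh
# Track the peak of a repeatedly sampled value (sketch).
# On FreeBSD, replace sample() with a real probe along the lines of:
#   swapinfo -k | awk 'END { print $3 }'   # assumed: kB of swap in use
sample() {
    echo "$1"
}

peak=0
for v in 10 250 40; do          # stand-in for periodic sampling (e.g. sleep 60 in a loop)
    cur=$(sample "$v")
    if [ "$cur" -gt "$peak" ]; then
        peak=$cur
    fi
done
echo "peak=$peak"               # with a real sampler, append this to a log file
```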
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-30, at 1:22 PM, Mark Millard wrote: > On 2017-Mar-29, at 8:53 AM, Brooks Davis wrote: > >> On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote: >>> On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: >>> On 26 Mar 2017, at 23:36, Mark Millard wrote: > > I upgraded from llvm40 r4 to final. An interesting result was > its creation of a backup package for llvm40-4.0.0.r4: > > about 13 cpu-core-hours running pkg create > > (Remember: I've been building with WITH_DEBUG= ) Its > single-threaded status stands out via elapsed time > approximately matching. > > I'll note that it was somewhat under 6 elapsed hours for > staging to have been populated (-j4 with 4 cores present > helps for this part). > > (Of course these elapsed-time figures are rather system > dependent, although the ratio might be more stable.) > > > > Also interesting was: > > Installed packages to be REMOVED: > llvm40-4.0.0.r4 > > Number of packages to be removed: 1 > > The operation will free 49 GiB. Yes, this is big. But there is no real need to build the llvm ports with debug information, unless you want to hack on llvm itself. And in that case, you are better served by a Subversion checkout or Git clone from upstream instead. -Dimitry >>> >>> FYI: >>> >>> Historically unless something extreme like this ends up >>> involved I build everything using WITH_DEBUG= or explicit >>> -g's in order to have better information on any failure. >>> >>> This is extreme enough that next time I synchronize >>> /usr/ports and it has a devel/llvm40 update I'll >>> likely rebuild devel/llvm40 without using WITH_DEBUG= . >>> I'm more concerned with the time it takes than with >>> the file system space involved. >> >> In the case of LLVM, enabling debug builds does a LOT more than adding >> symbols. It's much more like enabling WITNESS and INVARIANTS in your >> kernel, except that the performance of the resulting binary is much >> worse than a WITNESS kernel (more like 10x slowdown). 
>> >> As Dimitry points out, these builds are of questionable value in ports >> so garbage-collecting the knob might be the sensible thing to do. > > Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique > would not change the "WITNESS and INVARIANTS"-like part of the > issue. In fact if WITH_DEBUG= causes the cmake debug-style > llvm40 build ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not > make any difference: separate enforcing of lack of optimization. > > But just to see what results I've done "pkg delete llvm40" > and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= > and its supporting code in place in addition to using WITH_DEBUG= > as the type of build from FreeBSD's viewpoint. > > If you know that the test is a waste of machine cycles, you can > let me know if you want. The experiment showed that ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG use made no difference for devel/llvm40, so devel/llvm40 itself has to change, such as what Dimitry Andric reported separately as a working change to the Makefile. (ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG would still have its uses for various other ports.) === Mark Millard markmi at dsl-only.net
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 30 Mar 2017, at 19:55, Brooks Davis wrote: > > On Thu, Mar 30, 2017 at 07:26:19PM +0200, Dimitry Andric wrote: ... >> >> As said, this is because of WITH_DEBUG. Don't use that for the llvm >> ports, for now. It will also allow you to build them with much less RAM >> in the machine: especially linking can take multiple GB when debuginfo >> is enabled, and optimization is off. > > I'm looking into improving WITH_DEBUG.

I stole the following method from graphics/darktable:

Index: devel/llvm40/Makefile
===
--- devel/llvm40/Makefile	(revision 436685)
+++ devel/llvm40/Makefile	(working copy)
@@ -236,6 +236,11 @@ NOT_FOR_ARCH=	ia64

 .include
+.if defined(WITH_DEBUG)
+CMAKE_BUILD_TYPE=	RelWithDebInfo
+STRIP=
+.endif
+
 _CRTLIBDIR=	${LLVM_PREFIX:S|${PREFIX}/||}/lib/clang/${LLVM_RELEASE}/lib/freebsd
 .if ${ARCH} == "amd64"
 _COMPILER_RT_LIBS= \

This appears to work for me.

> P.S. Somewhat off topic, but related. FAIR WARNING: the days of > self-hosted 32-bit systems are numbered. Switching to lld from our > ancient BFD linker will probably buy us some time, but I'd be surprised > if you will be able to build LLVM+CLANG with a 2GB address space in 5 > years. The sooner people make their peace with this, the better.

Yes, with that RelWithDebInfo change above, GNU ld tends to use ~5G of memory to link the larger llvm executables, so that is definitely beyond i386, even if you run it in a jail on an amd64 host. And if you would want to use link time optimization, the requirements will increase even more...

-Dimitry
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-30, at 10:55 AM, Brooks Davis wrote: > P.S. Somewhat off topic, but related. FAIR WARNING: the days of > self-hosted 32-bit systems are numbered. Switching to lld from our > ancient BFD linker will probably buy us some time, but I'd be surprised > if you will be able to build LLVM+CLANG with a 2GB address space in 5 > years. The sooner people make their peace with this, the better. Yep. It fights with time preferences as well: when I tried building gcc6 "full bootstrap" via poudriere cross-builds on amd64 (4 cores/threads used) and natively on a bpim3 (-mcpu=cortex-a7, with 4 cores supported by FreeBSD and 2GB of RAM), the native build was much faster, as I remember. Of course, once the cross build was using the gcc6 internal bootstrap compiler, not much was running native cross-toolchain materials. (Building that internal bootstrap compiler did have a native-clang cross-compiler involved.) [I do not have access to server-class thread counts or RAM either. And the non-multithreaded stages contribute even in those contexts as well.] So I'm not looking forward to the issue from that point of view. === Mark Millard markmi at dsl-only.net
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-29, at 8:53 AM, Brooks Davis wrote: > On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote: >> On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: >> >>> On 26 Mar 2017, at 23:36, Mark Millard wrote: I upgraded from llvm40 r4 to final. An interesting result was its creation of a backup package for llvm40-4.0.0.r4: about 13 cpu-core-hours running pkg create (Remember: I've been building with WITH_DEBUG= ) Its single-threaded status stands out via elapsed time approximately matching. I'll note that it was somewhat under 6 elapsed hours for staging to have been populated (-j4 with 4 cores present helps for this part). (Of course these elapsed-time figures are rather system dependent, although the ratio might be more stable.) Also interesting was: Installed packages to be REMOVED: llvm40-4.0.0.r4 Number of packages to be removed: 1 The operation will free 49 GiB. >>> >>> Yes, this is big. But there is no real need to build the llvm ports >>> with debug information, unless you want to hack on llvm itself. And >>> in that case, you are better served by a Subversion checkout or Git >>> clone from upstream instead. >>> >>> -Dimitry >> >> FYI: >> >> Historically unless something extreme like this ends up >> involved I build everything using WITH_DEBUG= or explicit >> -g's in order to have better information on any failure. >> >> This is extreme enough that next time I synchronize >> /usr/ports and it has a devel/llvm40 update I'll >> likely rebuild devel/llvm40 without using WITH_DEBUG= . >> I'm more concerned with the time it takes than with >> the file system space involved. > > In the case of LLVM, enabling debug builds does a LOT more than adding > symbols. It's much more like enabling WITNESS and INVARIANTS in your > kernel, except that the performance of the resulting binary is much > worse than a WITNESS kernel (more like 10x slowdown). 
> > As Dimitry points out, these builds are of questionable value in ports > so garbage collecting the knob might be the sensible thing to do. Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique would not change the "WITNESS and INVARIANTS"-like part of the issue. In fact, if WITH_DEBUG= causes the cmake debug-style llvm40 build, ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not make any difference: the lack of optimization is enforced separately. But just to see the results, I've done "pkg delete llvm40" and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG= and its supporting code in place, in addition to using WITH_DEBUG= as the type of build from FreeBSD's viewpoint. If you know that the test is a waste of machine cycles, you can let me know if you want. === Mark Millard markmi at dsl-only.net
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Thu, Mar 30, 2017 at 07:26:19PM +0200, Dimitry Andric wrote: > On 30 Mar 2017, at 19:06, Alexey Dokuchaev wrote: > > > > On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote: > >> On 26 Mar 2017, at 23:36, Mark Millard wrote: > >>> ... > >>> Also interesting was: > >>> > >>> Installed packages to be REMOVED: > >>> llvm40-4.0.0.r4 > >>> > >>> Number of packages to be removed: 1 > >>> > >>> The operation will free 49 GiB. > >> > >> Yes, this is big. But there is no real need to build the llvm ports > >> with debug information, unless you want to hack on llvm itself. > > > > Cc'ing jmd@ and rezny@. > > > > I've been watching the increasing size of our LLVM packages with increasing > > worry. This is from my tinderbox cache: > > > > % env LANG=C ls -lh llvm3* > > -rw-r--r-- 1 root wheel  17M Jan 29  2016 llvm35-3.5.2_1.txz > > -rw-r--r-- 1 root wheel  18M Mar  7  2016 llvm36-3.6.2_2.txz > > -rw-r--r-- 1 root wheel  27M Feb 28 01:05 llvm37-3.7.1_4.txz > > -rw-r--r-- 1 root wheel 207M Jan 19 18:20 llvm38-3.8.1_5.txz > > -rw-r--r-- 1 root wheel 244M Mar 23 16:42 llvm39-3.9.1_2.txz > > > > Dimitry, do you know what caused such a huge bump in 37 -> 38? > > Yes, up to llvm37, the ports were split in devel/llvmXY and lang/clangXY > parts, with separate ports for e.g. compiler-rt and other LLVM projects. It's mostly that we build both shared and static libraries. llvm37 merged clang and llvm (with a clang37 metaport). Dynamic builds were broken for a while too, which blew up program size. > > They take lots of time to build and package. And given that llvm > is an indirect dependency of any X11-related port, it pessimises their > build times as well (devel/libclc now requires devel/llvm40 after > r437268). > > The previous split looks pretty hard to maintain, so that is most likely > the reason for combining all components in one port after 3.8. > Unfortunately the side effect is that it is way less granular. 
I kept it up for several revisions past when it was desupported, but it's absolutely impossible with the cmake infrastructure unless we want to build all of LLVM four times to get clang, llvm, lldb, and lld. I'm pretty sure that would cause more complaints. :) > If we ever get infrastructure for generating multiple packages out of > one port, the devel/llvm* ports are very good candidates. :) Very much so. > > With 49 GiB llvm40, I guess I won't be able to build-test ports as my > > hardware would just not be capable enough. > > As said, this is because of WITH_DEBUG. Don't use that for the llvm > ports, for now. It will also allow you to build them with much less RAM > in the machine: especially linking can take multiple GB when debuginfo > is enabled, and optimization is off. I'm looking into improving WITH_DEBUG. -- Brooks P.S. Somewhat off topic, but related. FAIR WARNING: the days of self-hosted 32-bit systems are numbered. Switching to lld from our ancient BFD linker will probably buy us some time, but I'd be surprised if you will be able to build LLVM+CLANG with a 2GB address space in 5 years. The sooner people make their peace with this, the better.
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Thursday 30 March 2017 17:06:48 Alexey Dokuchaev wrote: > On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote: > > On 26 Mar 2017, at 23:36, Mark Millard wrote: > > > ... > > > Also interesting was: > > > > > > Installed packages to be REMOVED: > > > llvm40-4.0.0.r4 > > > > > > Number of packages to be removed: 1 > > > > > > The operation will free 49 GiB. > > > > Yes, this is big. But there is no real need to build the llvm ports > > with debug information, unless you want to hack on llvm itself. > > Cc'ing jmd@ and rezny@. > > I've been watching the increasing size of our LLVM packages with increasing > worry. This is from my tinderbox cache: > > % env LANG=C ls -lh llvm3* > -rw-r--r-- 1 root wheel  17M Jan 29  2016 llvm35-3.5.2_1.txz > -rw-r--r-- 1 root wheel  18M Mar  7  2016 llvm36-3.6.2_2.txz > -rw-r--r-- 1 root wheel  27M Feb 28 01:05 llvm37-3.7.1_4.txz > -rw-r--r-- 1 root wheel 207M Jan 19 18:20 llvm38-3.8.1_5.txz > -rw-r--r-- 1 root wheel 244M Mar 23 16:42 llvm39-3.9.1_2.txz > > Dimitry, do you know what caused such a huge bump in 37 -> 38? > > They take lots of time to build and package. And given that llvm > is an indirect dependency of any X11-related port, it pessimises their > build times as well (devel/libclc now requires devel/llvm40 after > r437268). > > With 49 GiB llvm40, I guess I won't be able to build-test ports as my > hardware would just not be capable enough. > > ./danfe LLVM 3.8 introduced the option to build a shared LLVM library, which is what Mesa needs for use at runtime (e.g. for compiling shaders), separate from linking to it. Previous versions only had one option: if the library was built, then all the LLVM binaries were statically linked to it. LLVM devs state that statically linking the LLVM binaries is only for developer use; users should not do that. Mesa's need was causing distributions to ship statically linked LLVM binaries against that advice. 
So, they added a pair of switches so that we can separately control whether that library is built (required for Mesa) and used to link LLVM binaries (not desired). llvm{35,36,37} are statically linked and thus smaller than normal. llvm38 switched to dynamic linking, the default, thus the size grew. llvm39 added the library Mesa needs (we didn't turn on the option in llvm38 since Mesa jumped from 37 to 39), so it grew a little more. I assume llvm40 will be a bit bigger, but do not expect to see another jump as you've observed.
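[Editorial note: the "pair of switches" described above can be sketched as upstream LLVM CMake options. The names below are given from memory of the LLVM 3.8-era CMake build and are an assumption, not the port's verbatim configuration:]

```
# Build the single big libLLVM shared library (the piece Mesa loads
# at runtime for shader compilation):
-DLLVM_BUILD_LLVM_DYLIB=ON

# Additionally link the LLVM tools themselves against that library
# (the arrangement the LLVM developers discourage for end users):
-DLLVM_LINK_LLVM_DYLIB=ON
```

With only the first switch enabled, the tools stay statically linked while the shared library is still produced for Mesa, matching the arrangement described above.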
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 30 Mar 2017, at 19:06, Alexey Dokuchaev wrote: > > On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote: >> On 26 Mar 2017, at 23:36, Mark Millard wrote: >>> ... >>> Also interesting was: >>> >>> Installed packages to be REMOVED: >>> llvm40-4.0.0.r4 >>> >>> Number of packages to be removed: 1 >>> >>> The operation will free 49 GiB. >> >> Yes, this is big. But there is no real need to build the llvm ports >> with debug information, unless you want to hack on llvm itself. > > Cc'ing jmd@ and rezny@. > > I've been watching the increasing size of our LLVM packages with increasing > worry. This is from my tinderbox cache: > > % env LANG=C ls -lh llvm3* > -rw-r--r-- 1 root wheel  17M Jan 29  2016 llvm35-3.5.2_1.txz > -rw-r--r-- 1 root wheel  18M Mar  7  2016 llvm36-3.6.2_2.txz > -rw-r--r-- 1 root wheel  27M Feb 28 01:05 llvm37-3.7.1_4.txz > -rw-r--r-- 1 root wheel 207M Jan 19 18:20 llvm38-3.8.1_5.txz > -rw-r--r-- 1 root wheel 244M Mar 23 16:42 llvm39-3.9.1_2.txz > > Dimitry, do you know what caused such a huge bump in 37 -> 38? Yes, up to llvm37, the ports were split in devel/llvmXY and lang/clangXY parts, with separate ports for e.g. compiler-rt and other LLVM projects. For llvm38 and later, the devel/llvmXY port contains almost *all* upstream LLVM components, which are then selectable at port configure time. For instance, devel/llvm40 shows:

llvm40-4.0.0 options:

  [x] CLANG        Build clang
  [x] COMPILER_RT  Sanitizer libraries
  [x] DOCS         Build and/or install documentation
  [x] EXTRAS       Extra clang tools
  [x] GOLD         Build the LLVM Gold plugin for LTO
  [x] LIT          Install lit and FileCheck test tools
  [x] LLD          Install lld, the LLVM linker
  [x] LLDB         Install lldb, the LLVM debugger (ignored on 9.x)
  [x] OPENMP       Install libomp, the LLVM OpenMP runtime library

If you want to reduce the size of the package, only select the part(s) you need. 
I think you can get by with just the CLANG option, for most dependent ports. > They take lots of time to build and package. And given that llvm > is an indirect dependency of any X11-related port, it pessimises their > build times as well (devel/libclc now requires devel/llvm40 after > r437268). The previous split looks pretty hard to maintain, so that is most likely the reason for combining all components in one port after 3.8. Unfortunately the side effect is that it is way less granular. If we ever get infrastructure for generating multiple packages out of one port, the devel/llvm* ports are very good candidates. :) > With 49 GiB llvm40, I guess I won't be able to build-test ports as my > hardware would just not be capable enough. As said, this is because of WITH_DEBUG. Don't use that for the llvm ports, for now. It will also allow you to build them with much less RAM in the machine: especially linking can take multiple GB when debuginfo is enabled, and optimization is off. -Dimitry
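[Editorial note: trimming the option set as suggested above can also be done non-interactively. A sketch for /etc/make.conf using the ports framework's per-port option variables; the `devel_llvm40` prefix follows the usual category_portname convention, so verify the exact spelling against your ports tree:]

```
# /etc/make.conf -- build devel/llvm40 with only the CLANG option,
# turning off everything else shown in the options dialog
devel_llvm40_SET=	CLANG
devel_llvm40_UNSET=	COMPILER_RT DOCS EXTRAS GOLD LIT LLD LLDB OPENMP
```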
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote: > On 26 Mar 2017, at 23:36, Mark Millard wrote: > > ... > > Also interesting was: > > > > Installed packages to be REMOVED: > > llvm40-4.0.0.r4 > > > > Number of packages to be removed: 1 > > > > The operation will free 49 GiB. > > Yes, this is big. But there is no real need to build the llvm ports > with debug information, unless you want to hack on llvm itself. Cc'ing jmd@ and rezny@. I've been watching the increasing size of our LLVM packages with increasing worry. This is from my tinderbox cache: % env LANG=C ls -lh llvm3* -rw-r--r-- 1 root wheel  17M Jan 29  2016 llvm35-3.5.2_1.txz -rw-r--r-- 1 root wheel  18M Mar  7  2016 llvm36-3.6.2_2.txz -rw-r--r-- 1 root wheel  27M Feb 28 01:05 llvm37-3.7.1_4.txz -rw-r--r-- 1 root wheel 207M Jan 19 18:20 llvm38-3.8.1_5.txz -rw-r--r-- 1 root wheel 244M Mar 23 16:42 llvm39-3.9.1_2.txz Dimitry, do you know what caused such a huge bump in 37 -> 38? They take lots of time to build and package. And given that llvm is an indirect dependency of any X11-related port, it pessimises their build times as well (devel/libclc now requires devel/llvm40 after r437268). With 49 GiB llvm40, I guess I won't be able to build-test ports as my hardware would just not be capable enough. ./danfe
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 29 Mar 2017, at 17:53, Brooks Davis wrote: > > On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote: ... >> This is extreme enough that next time I synchronize >> /usr/ports and it has a devel/llvm40 update I'll >> likely rebuild devel/llvm40 without using WITH_DEBUG= . >> I'm more concerned with the time it takes than with >> the file system space involved. > > In the case of LLVM, enabling debug builds does a LOT more than adding > symbols. It's much more like enabling WITNESS and INVARIANTS in your > kernel, except that the performance of the resulting binary is much > worse than a WITNESS kernel (more like 10x slowdown). > > As Dimitry points out, these builds are of questionable value in ports > so garbage collecting the knob might be the sensible thing to do. I suggest that for the LLVM ports, the DEBUG option should set the RelWithDebInfo build type for CMake. That will give you binaries which can produce good backtraces, and a fair chance at debugging, in a pinch. -Dimitry
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote: > On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: > > > On 26 Mar 2017, at 23:36, Mark Millard wrote: > >> > >> I upgraded from llvm40 r4 to final. An interesting result was > >> its creation of a backup package for llvm40-4.0.0.r4: > >> > >> about 13 cpu-core-hours running pkg create > >> > >> (Remember: I've been building with WITH_DEBUG= ) Its > >> single-threaded status stands out via elapsed time > >> approximately matching. > >> > >> I'll note that it was somewhat under 6 elapsed hours for > >> staging to have been populated (-j4 with 4 cores present > >> helps for this part). > >> > >> (Of course these elapsed-time figures are rather system > >> dependent, although the ratio might be more stable.) > >> > >> > >> > >> Also interesting was: > >> > >> Installed packages to be REMOVED: > >>llvm40-4.0.0.r4 > >> > >> Number of packages to be removed: 1 > >> > >> The operation will free 49 GiB. > > > > Yes, this is big. But there is no real need to build the llvm ports > > with debug information, unless you want to hack on llvm itself. And > > in that case, you are better served by a Subversion checkout or Git > > clone from upstream instead. > > > > -Dimitry > > FYI: > > Historically unless something extreme like this ends up > involved I build everything using WITH_DEBUG= or explicit > -g's in order to have better information on any failure. > > This is extreme enough that next time I synchronize > /usr/ports and it has a devel/llvm40 update I'll > likely rebuild devel/llvm40 without using WITH_DEBUG= . > I'm more concerned with the time it takes than with > the file system space involved. In the case of LLVM, enabling debug builds does a LOT more than adding symbols. It's much more like enabling WITNESS and INVARIANTS in your kernel, except that the performance of the resulting binary is much worse than a WITNESS kernel (more like 10x slowdown). 
As Dimitry points out, these builds are of questionable value in ports so garbage collecting the knob might be the sensible thing to do. -- Brooks
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 27 Mar 2017, at 23:11, Mark Millard wrote: > > On 2017-Mar-27, at 5:53 AM, Dimitry Andric wrote: >> On 27 Mar 2017, at 12:25, Mark Millard wrote: >>> >>> On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: On 26 Mar 2017, at 23:36, Mark Millard wrote: >> ... > Installed packages to be REMOVED: > llvm40-4.0.0.r4 > > Number of packages to be removed: 1 > > The operation will free 49 GiB. Yes, this is big. But there is no real need to build the llvm ports with debug information, unless you want to hack on llvm itself. And in that case, you are better served by a Subversion checkout or Git clone from upstream instead. >> ... >>> Historically unless something extreme like this ends up >>> involved I build everything using WITH_DEBUG= or explicit >>> -g's in order to have better information on any failure. >> >> The problem with the ports implementation of WITH_DEBUG is that it >> always disables all optimizations, without a possibility to override. >> Which bloats the resulting object files, libraries and executables, and >> especially so for large C++ projects such as LLVM. >> >> I can recommend the following workaround. If you want to build a port >> with optimizations disabled, you can always pass -O0 in CFLAGS. >> >> -Dimitry >> >> Index: Mk/bsd.port.mk >> === >> --- Mk/bsd.port.mk (revision 436685) >> +++ Mk/bsd.port.mk (working copy) >> @@ -1646,7 +1646,7 @@ MAKE_ENV+= DONTSTRIP=yes >> STRIP_CMD= ${TRUE} >> .endif >> DEBUG_FLAGS?=-g >> -CFLAGS:=${CFLAGS:N-O*:N-fno-strict*} ${DEBUG_FLAGS} >> +CFLAGS:=${CFLAGS} ${DEBUG_FLAGS} >> .if defined(INSTALL_TARGET) >> INSTALL_TARGET:= ${INSTALL_TARGET:S/^install-strip$/install/g} >> .endif > > Interesting. WITH_DEBUG's description in the file does not > mention that stripping of optimization flags: > > # WITH_DEBUG- If set, debugging flags are added to CFLAGS and the > # binaries don't get stripped by INSTALL_PROGRAM or > # INSTALL_LIB. 
Besides, individual ports might > # add their specific to produce binaries for debugging > # purposes. You can override the debug flags that are > # passed to the compiler by setting DEBUG_FLAGS. It is > # set to "-g" at default. > > I'll probably give myself an override that I can specify in > /etc/make.conf , such as: > > # svnlite diff /usr/ports/Mk/bsd.port.mk > Index: /usr/ports/Mk/bsd.port.mk > === > --- /usr/ports/Mk/bsd.port.mk (revision 436747) > +++ /usr/ports/Mk/bsd.port.mk (working copy) > @@ -1646,7 +1646,11 @@ > STRIP_CMD=${TRUE} > .endif > DEBUG_FLAGS?= -g > +.if defined(ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG) > +CFLAGS:= ${CFLAGS} ${DEBUG_FLAGS} > +.else > CFLAGS:= ${CFLAGS:N-O*:N-fno-strict*} ${DEBUG_FLAGS} > +.endif > .if defined(INSTALL_TARGET) > INSTALL_TARGET:= ${INSTALL_TARGET:S/^install-strip$/install/g} > .endif Effectively, this gives some sort of support for three of CMake's build types, e.g: * Debug, which disables optimization, and obviously adds debuginfo * Release, which optimizes for speed, and does not add debuginfo * RelWithDebInfo, similar to Release but does add debuginfo It would be nice if there was some way of directly utilizing these. The RelWithDebInfo target is very useful with the LLVM projects. -Dimitry signature.asc Description: Message signed with OpenPGP
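[Editorial note: for an upstream Subversion or Git checkout, the three CMake build types enumerated above are selected directly on the command line. A minimal sketch; the `../llvm` source path is assumed, not taken from the thread:]

```
# Debug: no optimization, full debuginfo (huge -- cf. the 49 GiB above)
cmake -DCMAKE_BUILD_TYPE=Debug ../llvm

# Release: optimized, without debuginfo (what packages normally ship)
cmake -DCMAKE_BUILD_TYPE=Release ../llvm

# RelWithDebInfo: optimized but still carrying debuginfo
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ../llvm
```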
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-27, at 5:53 AM, Dimitry Andric wrote: > On 27 Mar 2017, at 12:25, Mark Millard wrote: >> >> On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: >>> On 26 Mar 2017, at 23:36, Mark Millard wrote: > ... Installed packages to be REMOVED: llvm40-4.0.0.r4 Number of packages to be removed: 1 The operation will free 49 GiB. >>> >>> Yes, this is big. But there is no real need to build the llvm ports >>> with debug information, unless you want to hack on llvm itself. And >>> in that case, you are better served by a Subversion checkout or Git >>> clone from upstream instead. > ... >> Historically unless something extreme like this ends up >> involved I build everything using WITH_DEBUG= or explicit >> -g's in order to have better information on any failure. > > The problem with the ports implementation of WITH_DEBUG is that it > always disables all optimizations, without a possibility to override. > Which bloats the resulting object files, libraries and executables, and > especially so for large C++ projects such as LLVM. > > I can recommend the following workaround. If you want to build a port > with optimizations disabled, you can always pass -O0 in CFLAGS. > > -Dimitry > > Index: Mk/bsd.port.mk > === > --- Mk/bsd.port.mk(revision 436685) > +++ Mk/bsd.port.mk(working copy) > @@ -1646,7 +1646,7 @@ MAKE_ENV+= DONTSTRIP=yes > STRIP_CMD=${TRUE} > .endif > DEBUG_FLAGS?= -g > -CFLAGS:= ${CFLAGS:N-O*:N-fno-strict*} ${DEBUG_FLAGS} > +CFLAGS:= ${CFLAGS} ${DEBUG_FLAGS} > .if defined(INSTALL_TARGET) > INSTALL_TARGET:= ${INSTALL_TARGET:S/^install-strip$/install/g} > .endif Interesting. WITH_DEBUG's description in the file does not mention that stripping of optimization flags: # WITH_DEBUG- If set, debugging flags are added to CFLAGS and the # binaries don't get stripped by INSTALL_PROGRAM or # INSTALL_LIB. Besides, individual ports might # add their specific to produce binaries for debugging # purposes. 
You can override the debug flags that are # passed to the compiler by setting DEBUG_FLAGS. It is # set to "-g" at default. I'll probably give myself an override that I can specify in /etc/make.conf, such as:

# svnlite diff /usr/ports/Mk/bsd.port.mk
Index: /usr/ports/Mk/bsd.port.mk
===
--- /usr/ports/Mk/bsd.port.mk	(revision 436747)
+++ /usr/ports/Mk/bsd.port.mk	(working copy)
@@ -1646,7 +1646,11 @@
 STRIP_CMD=	${TRUE}
 .endif
 DEBUG_FLAGS?=	-g
+.if defined(ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG)
+CFLAGS:=	${CFLAGS} ${DEBUG_FLAGS}
+.else
 CFLAGS:=	${CFLAGS:N-O*:N-fno-strict*} ${DEBUG_FLAGS}
+.endif
 .if defined(INSTALL_TARGET)
 INSTALL_TARGET:=	${INSTALL_TARGET:S/^install-strip$/install/g}
 .endif

=== Mark Millard markmi at dsl-only.net
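[Editorial note: with a local bsd.port.mk change along the lines above, the override would be used from /etc/make.conf roughly as follows. ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG is Mark's local knob, not a stock ports variable:]

```
# /etc/make.conf
WITH_DEBUG=	yes
# Keep each port's optimization flags instead of having WITH_DEBUG
# strip -O* and -fno-strict* from CFLAGS:
ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG=	yes
```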
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 27 Mar 2017, at 12:25, Mark Millard wrote: > > On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: >> On 26 Mar 2017, at 23:36, Mark Millard wrote: ... >>> Installed packages to be REMOVED: >>> llvm40-4.0.0.r4 >>> >>> Number of packages to be removed: 1 >>> >>> The operation will free 49 GiB. >> >> Yes, this is big. But there is no real need to build the llvm ports >> with debug information, unless you want to hack on llvm itself. And >> in that case, you are better served by a Subversion checkout or Git >> clone from upstream instead. ... > Historically unless something extreme like this ends up > involved I build everything using WITH_DEBUG= or explicit > -g's in order to have better information on any failure. The problem with the ports implementation of WITH_DEBUG is that it always disables all optimizations, without a possibility to override. Which bloats the resulting object files, libraries and executables, and especially so for large C++ projects such as LLVM. I can recommend the following workaround. If you want to build a port with optimizations disabled, you can always pass -O0 in CFLAGS. -Dimitry

Index: Mk/bsd.port.mk
===
--- Mk/bsd.port.mk	(revision 436685)
+++ Mk/bsd.port.mk	(working copy)
@@ -1646,7 +1646,7 @@ MAKE_ENV+=	DONTSTRIP=yes
 STRIP_CMD=	${TRUE}
 .endif
 DEBUG_FLAGS?=	-g
-CFLAGS:=	${CFLAGS:N-O*:N-fno-strict*} ${DEBUG_FLAGS}
+CFLAGS:=	${CFLAGS} ${DEBUG_FLAGS}
 .if defined(INSTALL_TARGET)
 INSTALL_TARGET:=	${INSTALL_TARGET:S/^install-strip$/install/g}
 .endif
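[Editorial note: with a patch like the one above applied, WITH_DEBUG keeps a port's normal optimization flags, and the fully-unoptimized behaviour remains available on request. A sketch of the suggested -O0 workaround in make.conf terms:]

```
# /etc/make.conf (or equivalent one-off settings for a single build)
WITH_DEBUG=	yes
# Dimitry's suggested workaround: explicitly disable optimization
# when the old stripped-flags behaviour is actually wanted
CFLAGS+=	-O0
```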
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-27, at 3:25 AM, Mark Millard wrote: > On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: > >> On 26 Mar 2017, at 23:36, Mark Millard wrote: >>> >>> I upgraded from llvm40 r4 to final. An interesting result was >>> its creation of a backup package for llvm40-4.0.0.r4: >>> >>> about 13 cpu-core-hours running pkg create >>> >>> (Remember: I've been building with WITH_DEBUG= ) Its >>> single-threaded status stands out via elapsed time >>> approximately matching. >>> >>> I'll note that it was somewhat under 6 elapsed hours for >>> staging to have been populated (-j4 with 4 cores present >>> helps for this part). >>> >>> (Of course these elapsed-time figures are rather system >>> dependent, although the ratio might be more stable.) >>> >>> >>> >>> Also interesting was: >>> >>> Installed packages to be REMOVED: >>> llvm40-4.0.0.r4 >>> >>> Number of packages to be removed: 1 >>> >>> The operation will free 49 GiB. >> >> Yes, this is big. But there is no real need to build the llvm ports >> with debug information, unless you want to hack on llvm itself. And >> in that case, you are better served by a Subversion checkout or Git >> clone from upstream instead. >> >> -Dimitry > > FYI: > > Historically unless something extreme like this ends up > involved I build everything using WITH_DEBUG= or explicit > -g's in order to have better information on any failure. > > This is extreme enough that next time I synchronize > /usr/ports and it has a devel/llvm40 update I'll > likely rebuild devel/llvm40 without using WITH_DEBUG= . > I'm more concerned with the time it takes than with > the file system space involved. [Bad example of an incomplete thought...] That last presumes a hardware environment with lots of RAM (such as 16 GiBytes) -- which I usually do not have access to. Having such is why the report was from a powerpc64 context: that is where I have access to that much RAM for FreeBSD. 
=== Mark Millard markmi at dsl-only.net
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 2017-Mar-27, at 2:41 AM, Dimitry Andric wrote: > On 26 Mar 2017, at 23:36, Mark Millard wrote: >> >> I upgraded from llvm40 r4 to final. An interesting result was >> its creation of a backup package for llvm40-4.0.0.r4: >> >> about 13 cpu-core-hours running pkg create >> >> (Remember: I've been building with WITH_DEBUG= ) Its >> single-threaded status stands out via elapsed time >> approximately matching. >> >> I'll note that it was somewhat under 6 elapsed hours for >> staging to have been populated (-j4 with 4 cores present >> helps for this part). >> >> (Of course these elapsed-time figures are rather system >> dependent, although the ratio might be more stable.) >> >> >> >> Also interesting was: >> >> Installed packages to be REMOVED: >> llvm40-4.0.0.r4 >> >> Number of packages to be removed: 1 >> >> The operation will free 49 GiB. > > Yes, this is big. But there is no real need to build the llvm ports > with debug information, unless you want to hack on llvm itself. And > in that case, you are better served by a Subversion checkout or Git > clone from upstream instead. > > -Dimitry FYI: Historically unless something extreme like this ends up involved I build everything using WITH_DEBUG= or explicit -g's in order to have better information on any failure. This is extreme enough that next time I synchronize /usr/ports and it has a devel/llvm40 update I'll likely rebuild devel/llvm40 without using WITH_DEBUG= . I'm more concerned with the time it takes than with the file system space involved. === Mark Millard markmi at dsl-only.net ___ freebsd-toolchain@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-toolchain To unsubscribe, send any mail to "freebsd-toolchain-unsubscr...@freebsd.org"
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
On 26 Mar 2017, at 23:36, Mark Millard wrote:
> I upgraded from llvm40 r4 to final. An interesting result was
> its creation of a backup package for llvm40-4.0.0.r4:
>
> about 13 cpu-core-hours running pkg create
>
> (Remember: I've been building with WITH_DEBUG= .) Its
> single-threaded status stands out, since the elapsed time
> approximately matches.
>
> I'll note that it was somewhat under 6 elapsed hours for
> staging to be populated (-j4 with 4 cores present helps
> for this part).
>
> (Of course these elapsed-time figures are rather system
> dependent, although the ratio might be more stable.)
>
> Also interesting was:
>
> Installed packages to be REMOVED:
> 	llvm40-4.0.0.r4
>
> Number of packages to be removed: 1
>
> The operation will free 49 GiB.

Yes, this is big. But there is no real need to build the llvm ports
with debug information, unless you want to hack on llvm itself. And
in that case, you are better served by a Subversion checkout or Git
clone from upstream instead.

-Dimitry
Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
[I add some what-it-takes-for-an-upgrade information.]

On 2017-Mar-12, at 6:53 PM, Mark Millard wrote:
> Summary: RAM+(peak swap) was about 26 GiBytes.
> Also: about 118 GiBytes in the /usr/obj/. . ./llvm40/ area.
> (2 processors, 2 cores each, all in use; WITH_DEBUG= used.)
>
> The peak usage times were when the 4 cores were each busy
> running ld at the same time.
>
> [So far as I know FreeBSD does not report peak swap usage
> "since boot", so I have no cross check on whether I missed
> a higher peak than I report in the details below.]
>
> What this note spans as part of the build:
>
> # more /var/db/ports/devel_llvm40/options
> # This file is auto-generated by 'make config'.
> # Options for llvm40-4.0.0.r4
> _OPTIONS_READ=llvm40-4.0.0.r4
> _FILE_COMPLETE_OPTIONS_LIST=CLANG DOCS EXTRAS LIT LLD LLDB
> OPTIONS_FILE_SET+=CLANG
> OPTIONS_FILE_SET+=DOCS
> OPTIONS_FILE_SET+=EXTRAS
> OPTIONS_FILE_SET+=LIT
> OPTIONS_FILE_SET+=LLD
> OPTIONS_FILE_SET+=LLDB
>
> The system clang 4.0 was used to do the build. A port
> binutils was used (-B${LOCALBASE}/bin/ in CFLAGS,
> CXXFLAGS, and CPPFLAGS). The kernel was non-debug generally,
> but buildworld/buildkernel did not have MALLOC_PRODUCTION= .
> The llvm40 build did have MALLOC_PRODUCTION= .
>
> # uname -paKU
> FreeBSD FBSDG5L 12.0-CURRENT FreeBSD 12.0-CURRENT r314687M powerpc
> powerpc64 1200023 1200023
>
> Most of what I have access to for FreeBSD does not have a
> big enough configuration to do a WITH_DEBUG= build of llvm40
> on a machine with 4 cores, all in use.
>
> One type of environment that does is an old PowerMac G5
> so-called "Quad Core" that has 16 GiBytes of RAM, 17 GiBytes
> of swap, and a 480 GiByte SSD (but over-provisioned, so it
> appears even smaller to the file system+swap).
>
> Watching with top, the peak swap usage that I saw was
> 56% of the 17 GiBytes -- so call it 10 GiBytes or so.
>
> So something like 16 GiBytes RAM + 10 GiBytes swap,
> and thus something like 26 GiBytes total.
> I used portmaster with -DK. Afterwards, the /usr/obj/
> sub-area for llvm40 totaled to a size of:
>
> # du -sg /usr/obj/portswork/usr/ports/devel/llvm40
> 118	/usr/obj/portswork/usr/ports/devel/llvm40
>
> So around 118 GiBytes of disk space.
>
> Showing the major space-usage contributions:
>
> # du -sg /usr/obj/portswork/usr/ports/devel/llvm40/work/.build/* /usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/*
> . . .
> 29	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/bin
> . . .
> 29	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/lib
> . . .
> 12	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/tools
> . . .
> 26	/usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/bin
> . . .
> 24	/usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/lib
> . . .
>
> Side notes that are more system specific:
>
> The timestamps on the script output file indicate that
> the build took about 8 hours 24 minutes.
>
> The powerpc64 system used was built with the system clang
> 4.0 compiler and a port-based binutils. This is despite
> clang 4.0 producing code in which any thrown C++ exception
> is completely non-functional on powerpc64 (the program
> crashes via signals reporting problems).

I upgraded from llvm40 r4 to final. An interesting result was
its creation of a backup package for llvm40-4.0.0.r4:

about 13 cpu-core-hours running pkg create

(Remember: I've been building with WITH_DEBUG= .) Its
single-threaded status stands out, since the elapsed time
approximately matches.

I'll note that it was somewhat under 6 elapsed hours for
staging to be populated (-j4 with 4 cores present helps
for this part).

(Of course these elapsed-time figures are rather system
dependent, although the ratio might be more stable.)

Also interesting was:

Installed packages to be REMOVED:
	llvm40-4.0.0.r4

Number of packages to be removed: 1

The operation will free 49 GiB.
=== Mark Millard markmi at dsl-only.net
FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH__DEBUG= (powerpc64 example)
Summary: RAM+(peak swap) was about 26 GiBytes.
Also: about 118 GiBytes in the /usr/obj/. . ./llvm40/ area.
(2 processors, 2 cores each, all in use; WITH_DEBUG= used.)

The peak usage times were when the 4 cores were each busy
running ld at the same time.

[So far as I know FreeBSD does not report peak swap usage
"since boot", so I have no cross check on whether I missed
a higher peak than I report in the details below.]

What this note spans as part of the build:

# more /var/db/ports/devel_llvm40/options
# This file is auto-generated by 'make config'.
# Options for llvm40-4.0.0.r4
_OPTIONS_READ=llvm40-4.0.0.r4
_FILE_COMPLETE_OPTIONS_LIST=CLANG DOCS EXTRAS LIT LLD LLDB
OPTIONS_FILE_SET+=CLANG
OPTIONS_FILE_SET+=DOCS
OPTIONS_FILE_SET+=EXTRAS
OPTIONS_FILE_SET+=LIT
OPTIONS_FILE_SET+=LLD
OPTIONS_FILE_SET+=LLDB

The system clang 4.0 was used to do the build. A port
binutils was used (-B${LOCALBASE}/bin/ in CFLAGS,
CXXFLAGS, and CPPFLAGS). The kernel was non-debug generally,
but buildworld/buildkernel did not have MALLOC_PRODUCTION= .
The llvm40 build did have MALLOC_PRODUCTION= .

# uname -paKU
FreeBSD FBSDG5L 12.0-CURRENT FreeBSD 12.0-CURRENT r314687M powerpc
powerpc64 1200023 1200023

Most of what I have access to for FreeBSD does not have a
big enough configuration to do a WITH_DEBUG= build of llvm40
on a machine with 4 cores, all in use.

One type of environment that does is an old PowerMac G5
so-called "Quad Core" that has 16 GiBytes of RAM, 17 GiBytes
of swap, and a 480 GiByte SSD (but over-provisioned, so it
appears even smaller to the file system+swap).

Watching with top, the peak swap usage that I saw was
56% of the 17 GiBytes -- so call it 10 GiBytes or so.

So something like 16 GiBytes RAM + 10 GiBytes swap,
and thus something like 26 GiBytes total.

I used portmaster with -DK.
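[Editor's note: since FreeBSD keeps no since-boot peak-swap statistic, one way to
avoid eyeballing top is to poll and keep a running maximum. A rough sketch; the
assumption that "swapinfo -k" ends with a "Total <blocks> <used> <avail> <pct>"
line (so $3 is KiB in use) should be checked on the system at hand:]

```shell
# Poll swap-in-use once a minute and report each new peak (run it
# alongside the build, e.g. under nohup or in a tmux window).
while :; do
    swapinfo -k | awk '/^Total/ { print $3 }'   # KiB of swap in use (assumed column)
    sleep 60
done | awk '$1 > max { max = $1; print "new peak:", max, "KiB in use" }'
```

The last "new peak" line printed is the figure to add to RAM for a
RAM+(peak swap) estimate like the one above.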
Afterwards, the /usr/obj/ sub-area for llvm40 totaled to a size of:

# du -sg /usr/obj/portswork/usr/ports/devel/llvm40
118	/usr/obj/portswork/usr/ports/devel/llvm40

So around 118 GiBytes of disk space.

Showing the major space-usage contributions:

# du -sg /usr/obj/portswork/usr/ports/devel/llvm40/work/.build/* /usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/*
. . .
29	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/bin
. . .
29	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/lib
. . .
12	/usr/obj/portswork/usr/ports/devel/llvm40/work/.build/tools
. . .
26	/usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/bin
. . .
24	/usr/obj/portswork/usr/ports/devel/llvm40/work/stage/usr/local/llvm40/lib
. . .

Side notes that are more system specific:

The timestamps on the script output file indicate that
the build took about 8 hours 24 minutes.

The powerpc64 system used was built with the system clang
4.0 compiler and a port-based binutils. This is despite
clang 4.0 producing code in which any thrown C++ exception
is completely non-functional on powerpc64 (the program
crashes via signals reporting problems).

=== Mark Millard markmi at dsl-only.net
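[Editor's note: the per-directory GiB figures that "du -sg" prints in column one
can be totaled with a short pipeline rather than by hand. A sketch, reusing the
work-tree path from the message above:]

```shell
# Sum the GiB column of a du -sg listing over the port's build tree.
du -sg /usr/obj/portswork/usr/ports/devel/llvm40/work/.build/* 2>/dev/null |
    awk '{ total += $1 } END { print total, "GiB total" }'
```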