On Wednesday 20 February 2008 12:15:05 Dan Nicholson wrote:
> On Wed, Feb 20, 2008 at 6:59 AM, Wilco Beekhuizen
> <[EMAIL PROTECTED]> wrote:
> > The use of make -j 3 really speeds up compilation on my new dual core
> > laptop. I noticed some problems with autoconf and make -j.
> > Furthermore, I checked the gentoo portage and it seems a lot of
> > packages are broken and need -j 1. So changing everything to -j 3 is
> > not an option.
>
> Yep. It's not autoconf that's the problem since that's run
> synchronously from a shell script, but the Makefiles, whether hand
> written or generated by automake. It's actually somewhat difficult to
> write correct Makefiles so make will block on dependencies at correct
> times. Automake generated makefiles do an excellent job with this. The
> problems usually come when people write their own Makefiles or add
> custom rules in addition to default automake rules.

This is nasty but real ;-(. In fact, I think autoconf/automake and make
have become too old for the current needs of development (qmake?), but
they will stay with us for a very long time yet.
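To illustrate the kind of hand-written rule Dan is talking about, here is a minimal sketch. The file names and rules are made up for the example; the point is only that a link step which uses an object file without declaring it as a prerequisite can run too early under make -j:

```shell
#!/bin/sh
# Write two tiny Makefiles side by side. In the broken one, the link
# rule uses generated.o but does not list it as a prerequisite, so a
# parallel make may try to link before generated.o has been built.
# In the fixed one, every input is a prerequisite, so make blocks on
# the dependency and -j N stays safe.

cat <<'EOF' > Makefile.broken
# BROKEN under -j: generated.o is used but not declared.
prog: main.o
	cc -o prog main.o generated.o
generated.o: generated.c
	cc -c generated.c
EOF

cat <<'EOF' > Makefile.fixed
# SAFE under -j: both objects are declared prerequisites.
prog: main.o generated.o
	cc -o prog main.o generated.o
generated.o: generated.c
	cc -c generated.c
EOF

# Show the two rule headers for comparison.
grep '^prog:' Makefile.broken Makefile.fixed
```

With a serial make both versions usually work by accident, because main.o's rule happens to run first; only -j exposes the missing edge in the dependency graph, which is why such breakage shows up rarely and only on some machines.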
> What I was doing before was setting MAKEFLAGS=-j3 and passing -j1 when
> necessary. There aren't tons of packages that are broken in this
> sense, but it happens.

In my experience: I usually compile with -j5 on a quad core. I go very
deep into BLFS and I see errors in some packages. Gettext, make, and
net-snmp are examples that come to mind right now; for those I go back
to plain make without -j. The speedup from -j on multi-core, HT, etc.
machines is monstrous: compiling a huge amount of source for various
servers (Mail, Web, IPBX, DNS, SMB/CIFS), without tests, takes about
02:20. I don't compile with -j1 anymore to compare results, but on my
AthlonXP 2300+ notebook the process is about nine times slower. Note
that I wrote a personal package manager that strips, splits (-devel,
-tools, -libso, etc...), compresses, and uploads to a local repository!

When I turn the tests on, the total time goes up considerably, because
I don't use -jn for make tests/checks (though I'm tempted to try it ;-),
if not out of technical curiosity then just for fun). The whole run
with tests takes about six hours. I graph the behaviour with RRDTool,
and the multiple cores are obviously not used by make test/check. Disk
I/O is not a major factor, though it accounts for about 15% of the
speed; memory is used mostly for caching and reaches 200MB in the
worst case; swap is barely used, only entering the scene in the middle
of the whole process. All of this is measured with heavily optimized
CFLAGS and inside a Xen virtual machine (which, as far as I can tell,
imposes no noticeable performance penalty).

> > I think the future of make is to use multiple cores by default, or GCC
> > should support multi-core compilation since every new processor is a
> > dual or quad core processor.
>
> I don't think make will be defaulting to multiple jobs at any time
> since the result is so dependent on the Makefile, which make doesn't
> control. Sort of like compiler optimization, it has to be a buy-in
> type of thing.
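The MAKEFLAGS workflow Dan describes above can be sketched like this. Package names and the run() wrapper are invented for illustration, and run() echoes the make command instead of building anything, so the sketch is self-contained; the real point is that a -j flag given on the command line overrides the one in MAKEFLAGS:

```shell
#!/bin/sh
# Set a parallel default for every make invocation in this session.
MAKEFLAGS=-j3
export MAKEFLAGS

# Hypothetical per-package wrapper: first argument is the package,
# the rest are extra make arguments. It echoes instead of building.
run() {
    pkg=$1; shift
    # make gives command-line flags precedence over MAKEFLAGS,
    # so passing -j1 here serializes just this one package.
    echo "make $* # in $pkg (MAKEFLAGS=$MAKEFLAGS)"
}

run gettext-0.17 -j1     # known-broken with parallel jobs: force -j1
run coreutils-6.10       # picks up the -j3 default from MAKEFLAGS
```

This keeps the fast default for the vast majority of packages while letting the handful of broken ones (gettext, net-snmp, and so on, in my case) opt out individually.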
Isn't it an architecture-dependent issue too? make (or whatever
replaces it) needs to detect the number of cores present to be
effective.

> --
> Dan

--
http://linuxfromscratch.org/mailman/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/lfs/faq.html
Unsubscribe: See the above information page
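For what it's worth, the detection itself is easy to script today, even if make won't do it by default. A minimal sketch (nproc is GNU coreutils; getconf _NPROCESSORS_ONLN is the fallback on systems without it; the jobs = cores + 1 rule of thumb is just one common convention, not anything make mandates):

```shell
#!/bin/sh
# Detect the number of online processors, falling back to 1 if
# neither nproc nor getconf is available on this system.
cores=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
echo "detected $cores core(s)"

# One extra job beyond the core count keeps a job runnable while
# another is blocked on disk I/O.
jobs=$((cores + 1))
echo "would run: make -j$jobs"
```

Of course this only helps for Makefiles that are -j safe in the first place, which brings us back to Dan's point about it being a buy-in.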
