On Tuesday 13 January 2009 20:51:16 Jamie Lokier wrote:
> Paul Mundt wrote:
> > This happens in a lot of places, like embedded gentoo ports, where almost
> > all of the work is sent across distcc to a cross-compilation machine. In
> > systems that use package management, it is done on the host through
> > emulation, or painfully cross-compiled.
>
> Ah yes, I remember using embedded Gentoo.
>
> 95% of the time in ./configure scripts, 5% in compilations.

With SMP becoming commonplace, expect this to become the norm everywhere.
Once you get to around four processors, any C program with a ./configure step
is probably going to take longer to configure than to compile.  (Of course C++
manages to remain slow enough that autoconf isn't so obvious a bottleneck.)
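
To put rough numbers on it (a toy model in Python with made-up timings, just
to show the shape of the curve): the ./configure step stays serial while the
compile work splits across processors, so configure's share of the build grows
with every CPU you add.

    # Toy model: build time = serial configure step + parallelizable compile.
    # The timings are illustrative guesses, not measurements of any package.
    configure_time = 60.0   # seconds; runs on one CPU no matter what
    compile_time = 180.0    # seconds of total compile work, split across CPUs

    for cpus in (1, 2, 4, 8, 16):
        total = configure_time + compile_time / cpus
        share = 100 * configure_time / total
        print(f"{cpus:2d} CPUs: {total:6.1f}s total, configure = {share:4.1f}%")

With those (invented) numbers, by four CPUs the configure step already
outweighs the compiling, and it only gets worse from there.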

> And this is on x86!  I dread to think how slow it gets on something
> slow.

My friend Mark's been experimenting with the Amazon "cloud" thing, feeding in
an image with a qemu instance and distcc plus a cross-compiler, and running
builds under that.  Renting an 8-way ~2.5 GHz server with 7 gigabytes of RAM
and 1.6 terabytes of disk costs 80 cents/hour through them, plus another few
cents/day for bandwidth, persistent storage, and such.  That's likely to get
cheaper as time goes on.

We're still planning to buy a build server of our own to have something in-
house, but for running nightly builds it's almost to the point where the
depreciation on your own hardware costs more than buying time from a server
farm.  Just _one_ of those 8-way servers is enough hardware to build an entire
distro in an hour or so.
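
The back-of-the-envelope math (using the prices above, and ignoring the few
cents/day of bandwidth and storage as noise):

    # Rough monthly cost of nightly builds on rented 8-way servers,
    # using the ~80 cents/hour figure quoted above.
    rate = 0.80          # dollars per server-hour
    build_hours = 1.0    # one distro build is about an hour on one 8-way box

    print(f"30 nightly builds: ${rate * build_hours * 30:.2f}/month")
    print(f"Ten 8-way servers clustered: ${rate * 10:.2f}/hour")

Call it $24/month for nightlies; it doesn't take much hardware depreciation
to beat that.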

What this really allows us to do is experiment with "how parallel can we get
our build?"  Because renting ten 8-way servers in a cluster is $8/hour, and
distcc already scales trivially over that.  Down the road, what Firmware Linux
is working towards is multiple qemu instances running in parallel, with a
central instance distributing builds to each one.  Each instance can then do
its own ./configure in parallel, farm compilation out to the distccd instances
as it has stuff to compile, package the resulting binary into one of those
Portage tarballs, and send it back to the central node to install on a network
mount that the lot of 'em can use as build context, so the packages can get
their dependencies right.  (You don't want your build taking place on a
network mount, but having your OS on one you never write to isn't so bad, as
long as you have local storage to build in.)
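
For the curious, here's roughly the shape of the central node's job, sketched
in Python.  The host names, the build command, and the shared-mount path are
all placeholders I made up for illustration, not anything Firmware Linux
actually runs:

    # Sketch of the central node: hand each qemu builder a package, let it
    # configure/compile (compilation fanning out via distcc behind the
    # scenes), and collect the resulting tarball on the shared mount.
    # Hosts, command, and paths below are hypothetical placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    BUILDERS = ["qemu-node1", "qemu-node2", "qemu-node3"]   # made-up hosts
    PACKAGES = ["zlib", "busybox", "dropbear", "strace"]    # example batch

    def build(host, pkg):
        # Placeholder command; each node runs its own ./configure locally.
        subprocess.run(["ssh", host, f"build-package {pkg}"], check=True)
        return f"/mnt/build-output/{pkg}.tbz2"              # made-up path

    with ThreadPoolExecutor(max_workers=len(BUILDERS)) as pool:
        jobs = [pool.submit(build, BUILDERS[i % len(BUILDERS)], pkg)
                for i, pkg in enumerate(PACKAGES)]
        for job in jobs:
            print("on shared mount:", job.result())

(The round-robin there is naive; a real version would want per-host queues so
two packages don't land on the same node at once.)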

We'll probably leverage the heck out of Portage for this, and might wind up
modifying it heavily.  Dunno yet.  (We can even force dependencies on Portage
so it doesn't need to calculate 'em; the central node can do that and then say
"you have these packages, _build_"...)

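A toy version of that ordering step (not actual Portage code; the dependency
graph is invented, and the loop assumes there are no cycles):

    # Central node works out build order once, so the builders never
    # have to resolve dependencies themselves.  Made-up example graph.
    deps = {
        "zlib": [],
        "busybox": [],
        "dropbear": ["zlib"],
        "openssh": ["zlib", "busybox"],
    }

    built, order = set(), []
    while len(order) < len(deps):       # assumes the graph is acyclic
        for pkg, needs in deps.items():
            if pkg not in built and all(d in built for d in needs):
                built.add(pkg)
                order.append(pkg)
    print("tell the builders:", order)

Anything whose dependencies are already on the network mount is fair game to
hand out, which is also where the parallelism comes from: zlib and busybox can
build on different nodes at the same time.
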
But yeah, hobbyists with a laptop, network access, and a monthly budget of $20 
can do cluster builds these days.

Rob

P.S.  I still hope autoconf dies off and the world wakes up and moves away
from it.  And from makefiles, for that matter.  But in the meantime, I can
work around it with enough effort.