I'm pretty sure that by the time we reach that number of cores we'll be
blocked on preprocessing, since preprocessing happens on the local machine
(where the header files are), and on network I/O. I imagine that peak
efficiency is reached well below 2,000 machines.

In addition, there's some unavoidable latency due to the 1+ minute spent
doing configure/export and such at the start/end of the build.

I imagine it will be hard to get build times below 3-4 minutes without
eliminating those constant-time overheads.
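As a rough illustration of why the constant overheads dominate at high core
counts, here's a back-of-envelope sketch in shell. All the constants are
illustrative assumptions, not measurements: ~60s of serial configure/export
work, ~30s for the slowest single translation unit, and a made-up 10s
average compile time per file (the 1809 file count is from Ted's tally
below):

```shell
#!/bin/bash
# Hypothetical lower bound on a distributed build, under assumed constants.
overhead=60      # assumed serial configure/export work at start/end (s)
slowest=30       # assumed slowest single translation unit (s)
files=1809       # compiled C++ files (Ted's count)
avg=10           # assumed average compile time per file (s)

for cores in 16 80 500 2000; do
  # Ideal parallel compile time, ignoring scheduling/network overhead.
  parallel=$(( files * avg / cores ))
  # The compile tier can't finish before its slowest single file does.
  floor=$(( parallel > slowest ? parallel : slowest ))
  echo "$cores cores: >= $(( overhead + floor ))s"
done
```

The curve flattens quickly: past a few hundred cores the bound is dominated
by the fixed overhead plus the slowest file, not by core count.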

On Thu, Mar 9, 2017 at 8:16 AM, Mike Hommey <m...@glandium.org> wrote:

> On Thu, Mar 09, 2017 at 07:45:13AM -0500, Ted Mielczarek wrote:
> > On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> > > On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > > > On 08/03/17 15:24, Ehsan Akhgari wrote:
> > > >> What we did in the Toronto office was to walk over to people who
> > > >> ran Linux on their desktop machines and install the icecream
> > > >> server on their computers.  I suggest you do the same in London.
> > > >> There is no need to wait for dedicated build machines.  ;-)
> > > >
> > > > We’ve just started doing that in the Paris office.
> > > >
> > > > Just a few machines seem to be enough to get to the point of
> > > > diminishing returns. Does that sound right?
> > >
> > > I doubt it...  At one point I personally managed to saturate 80 or so
> > > cores across a number of build slaves at the office here.  (My personal
> > > setup has been broken so unfortunately I have been building like a
> > > turtle for a while now myself...)
> >
> > A quick check on my local objdir shows that we have ~1800 source files
> > that get compiled during the build:
> > $ find /build/debug-mozilla-central/ \
> >     -name backend.mk -o -name ipdlsrcs.mk -o -name webidlsrcs.mk \
> >   | xargs grep CPPSRCS \
> >   | grep -vF 'CPPSRCS += $(UNIFIED_CPPSRCS)' \
> >   | cut -f3- -d' ' | tr ' ' '\n' | wc -l
> > 1809
> >
> > That's the count of actual files that will be passed to the C++
> > compiler. The build system is very good at parallelizing the compile
> > tier nowadays, so you should be able to scale the compile tier up to
> > nearly that many cores. There's still some overhead in the non-compile
> > tiers, but if you are running `mach build binaries` it shouldn't matter.
>
> Note that several files take more than 30s to build on their own, so a
> massively parallel build can't be faster than that + the time it takes
> to link libxul. IOW, don't expect to be able to go below 1 minute, even
> with 2000 cores.
>
> Mike
> _______________________________________________
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>