Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Karl Tomlinson
zbranie...@mozilla.com writes:

>  * I still have only 8GB of ram which is probably the ultimate
>  limiting factor

You are right here.  RAM is required not only for link time, but
also when compiling several large unified files at a time (though
perhaps this is not so significant with only 4 cores), and for
caching files between phases.

I notice build time and system responsiveness are affected each time a
4GB memory module fails, leaving that machine with only 12GB of RAM.
There is often a web browser running on the same machine though.


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread zbraniecki
Reporting first results.

We got an icecream setup in the SF office, and I was able to plug myself into it and 
get an icecc+ccache+gcc combo producing a fresh debug build in <30 min.

On top of that, the load on my machine stayed low, which is nice because I was 
able to work on other things in the meantime.
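
For anyone who wants to try the same thing, a minimal mozconfig sketch for an
icecc+ccache+gcc combo could look like the following; the job count and the
CCACHE_PREFIX approach are assumptions on my part, not the exact configuration
described above:

  mk_add_options MOZ_MAKE_FLAGS="-j32"           # scale jobs to the cluster, not the laptop
  ac_add_options --with-ccache=/usr/bin/ccache
  mk_add_options "export CCACHE_PREFIX=icecc"    # ccache hands cache misses off to icecream
  ac_add_options --enable-debug
  ac_add_options --enable-optimize="-g -Og"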

Now, the two things that are probably still limiting me are:
 * network: I did this over wifi. I'll get a USB->Ethernet adapter, which should 
speed things up further
 * I still have only 8GB of RAM, which is probably the ultimate limiting factor

:bdhal says that he got his builds under 5 min, which I guess is close to the 
lower bound.

Other notes:
 * I didn't test without ccache. It may also work better for me; I'll test it 
later
 * I failed to get icecc to work with clang for some reason (a possible culprit is 
sketched below)
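
For the clang case: icecream normally needs the clang toolchain packaged and
advertised explicitly. A hedged sketch, since the exact icecc-create-env
invocation varies between icecream versions:

  icecc-create-env /usr/bin/clang            # produces a <hash>.tar.gz toolchain archive
  export ICECC_VERSION=$PWD/<hash>.tar.gz    # ship that toolchain to the remote nodes

The <hash> placeholder is whatever archive name the tool prints.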

Lastly, as much as this helps me, it doesn't help us lower the barrier for 
contributors not working from an office. They usually have less powerful 
machines, with less RAM and no access to build farms.
So any work we can do to split up the central headers that turn two-day rebuilds 
into full rebuilds would go a long way toward making the experience of contributing 
to Gecko better.

zb.


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Michael Shal
On Tue, Mar 7, 2017 at 5:40 PM, Randell Jesup  wrote:

> >On 07/03/17 20:29, zbranie...@mozilla.com wrote:
> >
> >> I was just wondering if really two days of patches landing in Gecko
> should result
> >> in what seems like basically full rebuild.
> >>
> >> A clean build takes 65-70, a rebuild after two days of patches takes
> 50-60min.
> >
> >That seems pretty normal to me nowadays.  My very vague impression is that
> >this has gotten worse in the past few months.  Nowadays I assume that
> every
> >m-i to m-c sync (twice a day) will entail more or less a full rebuild.
>
> someone hits Clobber, or someone touches an MFBT or xpcom header, and
> poof, full rebuild.
>
>
We also have bug 902825, which means that anytime someone touches anything in
configure (even just deleting an unused variable in old-configure.in), it
triggers a full rebuild. Things like mozilla-config.h, xpcom-config.h,
js-config.h, and a bunch of other headers depend on the configure
output, and then everything else depends on those headers.
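
A quick way to see that chain in action (just an illustration of the problem,
not a fix):

  touch old-configure.in    # any configure input counts, even a no-op change
  ./mach build              # configure reruns, the *-config.h headers get regenerated,
                            # and everything that includes them rebuilds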

-Mike


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Michael Layzell
I'm pretty sure that by the time we reach that number of cores we'll
be blocked on preprocessing, since preprocessing occurs on the local machine
(where the header files are), and on network I/O. I imagine that peak
efficiency is reached well below 2k machines.

In addition, there's some unavoidable latency due to the 1+ minute spent
doing configure/export and such at the start/end of the build.

I imagine that it will be hard to get build times below 3-4 minutes without
eliminating those constant time overheads.

On Thu, Mar 9, 2017 at 8:16 AM, Mike Hommey  wrote:

> On Thu, Mar 09, 2017 at 07:45:13AM -0500, Ted Mielczarek wrote:
> > On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> > > On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > > > On 08/03/17 15:24, Ehsan Akhgari wrote:
> > > >> What we did in the Toronto office was walked to people who ran
> Linux on
> > > >> their desktop machines and installed the icecream server on their
> > > >> computer.  I suggest you do the same in London.  There is no need to
> > > >> wait for dedicated build machines.  ;-)
> > > >
> > > > We’ve just started doing that in the Paris office.
> > > >
> > > > Just a few machines seem to be enough to get to the point of
> diminishing
> > > > returns. Does that sound right?
> > >
> > > I doubt it...  At one point I personally managed to saturate 80 or so
> > > cores across a number of build slaves at the office here.  (My personal
> > > setup has been broken so unfortunately I have been building like a
> > > turtle for a while now myself...)
> >
> > A quick check on my local objdir shows that we have ~1800 source files
> > that get compiled during the build:
> > $ find /build/debug-mozilla-central/ -name backend.mk -o -name
> > ipdlsrcs.mk -o -name webidlsrcs.mk | xargs grep CPPSRCS | grep -vF
> > 'CPPSRCS += $(UNIFIED_CPPSRCS)' | cut -f3- -d' ' | tr ' ' '\n' | wc -l
> > 1809
> >
> > That's the count of actual files that will be passed to the C++
> > compiler. The build system is very good at parallelizing the compile
> > tier nowadays, so you should be able to scale the compile tier up to
> > nearly that many cores. There's still some overhead in the non-compile
> > tiers, but if you are running `mach build binaries` it shouldn't matter.
>
> Note that several files take more than 30s to build on their own, so a
> massively parallel build can't be faster than that + the time it takes
> to link libxul. IOW, don't expect to be able to go below 1 minute, even
> with 2000 cores.
>
> Mike


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Mike Hommey
On Thu, Mar 09, 2017 at 07:45:13AM -0500, Ted Mielczarek wrote:
> On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> > On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > > On 08/03/17 15:24, Ehsan Akhgari wrote:
> > >> What we did in the Toronto office was walked to people who ran Linux on
> > >> their desktop machines and installed the icecream server on their
> > >> computer.  I suggest you do the same in London.  There is no need to
> > >> wait for dedicated build machines.  ;-)
> > > 
> > > We’ve just started doing that in the Paris office.
> > > 
> > > Just a few machines seem to be enough to get to the point of diminishing
> > > returns. Does that sound right?
> > 
> > I doubt it...  At one point I personally managed to saturate 80 or so
> > cores across a number of build slaves at the office here.  (My personal
> > setup has been broken so unfortunately I have been building like a
> > turtle for a while now myself...)
> 
> A quick check on my local objdir shows that we have ~1800 source files
> that get compiled during the build:
> $ find /build/debug-mozilla-central/ -name backend.mk -o -name
> ipdlsrcs.mk -o -name webidlsrcs.mk | xargs grep CPPSRCS | grep -vF
> 'CPPSRCS += $(UNIFIED_CPPSRCS)' | cut -f3- -d' ' | tr ' ' '\n' | wc -l
> 1809
> 
> That's the count of actual files that will be passed to the C++
> compiler. The build system is very good at parallelizing the compile
> tier nowadays, so you should be able to scale the compile tier up to
> nearly that many cores. There's still some overhead in the non-compile
> tiers, but if you are running `mach build binaries` it shouldn't matter.

Note that several files take more than 30s to build on their own, so a
massively parallel build can't be faster than that + the time it takes
to link libxul. IOW, don't expect to be able to go below 1 minute, even
with 2000 cores.

Mike


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Ted Mielczarek
On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > On 08/03/17 15:24, Ehsan Akhgari wrote:
> >> What we did in the Toronto office was walked to people who ran Linux on
> >> their desktop machines and installed the icecream server on their
> >> computer.  I suggest you do the same in London.  There is no need to
> >> wait for dedicated build machines.  ;-)
> > 
> > We’ve just started doing that in the Paris office.
> > 
> > Just a few machines seem to be enough to get to the point of diminishing
> > returns. Does that sound right?
> 
> I doubt it...  At one point I personally managed to saturate 80 or so
> cores across a number of build slaves at the office here.  (My personal
> setup has been broken so unfortunately I have been building like a
> turtle for a while now myself...)

A quick check on my local objdir shows that we have ~1800 source files
that get compiled during the build:
$ find /build/debug-mozilla-central/ -name backend.mk -o -name
ipdlsrcs.mk -o -name webidlsrcs.mk | xargs grep CPPSRCS | grep -vF
'CPPSRCS += $(UNIFIED_CPPSRCS)' | cut -f3- -d' ' | tr ' ' '\n' | wc -l
1809

That's the count of actual files that will be passed to the C++
compiler. The build system is very good at parallelizing the compile
tier nowadays, so you should be able to scale the compile tier up to
nearly that many cores. There's still some overhead in the non-compile
tiers, but if you are running `mach build binaries` it shouldn't matter.
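
For reference, the incremental target mentioned above, which skips the
non-compile tiers once everything has been built at least once:

  ./mach build binaries     # recompile changed C/C++ and relink, nothing else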

-Ted


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Paul Adenot
On Thu, Mar 9, 2017, at 07:42 AM, Wei-Cheng Pan wrote:
> We are using icecream in the Taipei office too, and it is a big enhance.
> Sadly when we tried to use it on Mac OS, we always got wrong stack
> information.
> I've read the article on MDN, seems it's related to a compiler flag
> (-fdebug-compilation-dir),
> do you have any detailed information (or thread) that talking about this?

I found this yesterday after setting up the cluster in Paris. I might
take a shot at fixing it.

Cheers,
Paul.


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Wei-Cheng Pan
On 08/03/2017 10:24 PM, Ehsan Akhgari wrote:
> On 2017-03-08 6:40 AM, James Graham wrote:
>> On 08/03/17 11:11, Frederik Braun wrote:
>>> On 08.03.2017 01:17, Ralph Giles wrote:
 I second Jeff's point about building with icecream[1]. If you work in
 an office with a build farm, or near a fast desktop machine you can
 pass jobs to, this makes laptop builds much more tolerable.

>>> What do you mean by build farm?
>>> Do some offices have dedicated build machines?
>>> Or is this "just" a lot of desktop machines wired up to work together?
>> Several offices are using icecream with whatever desktop machines happen
>> to be available to improve build times. The London office was told last
>> year that there's a plan to buy dedicated build machines to add to the
>> cluster, but I am unaware if there is any progress or whether the idea
>> was dropped.
> What we did in the Toronto office was walked to people who ran Linux on
> their desktop machines and installed the icecream server on their
> computer.  I suggest you do the same in London.  There is no need to
> wait for dedicated build machines.  ;-)
We are using icecream in the Taipei office too, and it is a big improvement.
Sadly, when we tried to use it on Mac OS, we always got wrong stack
information.
I've read the article on MDN; it seems to be related to a compiler flag
(-fdebug-compilation-dir).
Do you have any detailed information (or a thread) that talks about this?


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Steve Fink

On 03/08/2017 06:21 AM, Ehsan Akhgari wrote:

On 2017-03-07 2:49 PM, Eric Rahm wrote:

I often wonder if unified builds are making things slower for folks who use
ccache (I assume one file changing would mean a rebuild for the entire
unified chunk), I'm not sure if there's a solution to that but it would be
interesting to see if compiling w/o ccache is actually faster at this point.

Unified builds are the only way we build, so this is only of theoretical
interest.  But at any rate, if your use case is building the tree once
every couple of days, you should definitely disable ccache with or
without unified builds.  ccache is only helpful for folks who end up
compiling the same code over and over again (for example, if you use
interactive rebase a lot, or if you switch between branches, or use
other VCS commands that touch file modification times without actually
changing the contents).  Basically, if you don't switch between branches a
lot and don't write a lot of C++ code, ccache probably hurts you more
than it helps you.


We only build unified, but it's not a binary thing -- within the JS 
engine, I found that unified builds substantially speed up full 
compiles, but also substantially slow down incremental compiles. And 
that wasn't a great tradeoff when it is pretty common to iterate on a 
single C++ file, since you'd end up rebuilding the concatenation of 
dozens of files, which took a long time to get the compile errors out 
of. I found setting FILES_PER_UNIFIED_FILE to 6 to be a pretty good 
compromise [1], but it depends on the average size of your .cpp files 
and the common usage patterns of people using that portion of the tree. 
Oh, and the number of files in a directory -- if you don't have that 
many files and you have a lot of cores, then you want enough chunks to 
keep your cores busy.
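
For anyone who wants to experiment in their own directory, the knob is a plain
moz.build variable; a sketch (6 is the js/src compromise mentioned above, and the
file names are placeholders):

  # in your component's moz.build
  FILES_PER_UNIFIED_FILE = 6

  UNIFIED_SOURCES += [
      'ExampleA.cpp',
      'ExampleB.cpp',
  ]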


Also note that dropping the unified sizes did *not* speed up the time to 
get a new JS shell binary after a single-cpp change, oddly enough. The 
compile time was much faster, but it made up for it seemingly by moving 
the time into a slower link. I guess unified compilation is sort of 
doing some of the linking up-front? Still, when iterations are bound by 
the time it takes to see the latest round of compile errors, dropping 
the unified chunk sizes can be a pretty big win. You might want to 
consider it for your component.


1. http://searchfox.org/mozilla-central/source/js/src/moz.build#14



Re: Is there a way to improve partial compilation times?

2017-03-08 Thread zbraniecki
On Wednesday, March 8, 2017 at 8:57:57 AM UTC-8, James Graham wrote:
> On 08/03/17 14:21, Ehsan Akhgari wrote:

> At risk of stating the obvious, if you aren't touching C++ code (or 
> maybe jsm?), and aren't using any funky compile options, you should be 
> using an artifact build for best performance.

I work 90% of my time in C++; I just do most of my work within a single, 
quite small, module (`/intl`), so my recompilation times after changes are good 
(~30 sec).

But every couple of days, when I rebase on top of the new master, I hit the 
one-hour recompile.

zb.


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread James Graham

On 08/03/17 14:21, Ehsan Akhgari wrote:

On 2017-03-07 2:49 PM, Eric Rahm wrote:

I often wonder if unified builds are making things slower for folks who use
ccache (I assume one file changing would mean a rebuild for the entire
unified chunk), I'm not sure if there's a solution to that but it would be
interesting to see if compiling w/o ccache is actually faster at this point.


Unified builds are the only way we build, so this is only of theoretical
interest.  But at any rate, if your use case is building the tree once
every couple of days, you should definitely disable ccache with or
without unified builds.  ccache is only helpful for folks who end up
compiling the same code over and over again (for example, if you use
interactive rebase a lot, or if you switch between branches, or use
other VCS commands that touch file modification times without actually
changing the contents).  Basically, if you don't switch between branches a
lot and don't write a lot of C++ code, ccache probably hurts you more
than it helps you.


At risk of stating the obvious, if you aren't touching C++ code (or 
maybe jsm?), and aren't using any funky compile options, you should be 
using an artifact build for best performance.




Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Simon Sapin

On 08/03/17 15:24, Ehsan Akhgari wrote:

What we did in the Toronto office was walk to people who ran Linux on
their desktop machines and install the icecream server on their
computers.  I suggest you do the same in London.  There is no need to
wait for dedicated build machines.  ;-)


We’ve just started doing that in the Paris office.

Just a few machines seem to be enough to get to the point of diminishing 
returns. Does that sound right?


--
Simon Sapin


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Frederik Braun
Gotcha.
Problem for the Berlin office: there are only 3 people who have a
desktop and run Linux. Two of them are part of our "cluster" :)


Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Ehsan Akhgari
On 2017-03-08 6:40 AM, James Graham wrote:
> On 08/03/17 11:11, Frederik Braun wrote:
>> On 08.03.2017 01:17, Ralph Giles wrote:
>>> I second Jeff's point about building with icecream[1]. If you work in
>>> an office with a build farm, or near a fast desktop machine you can
>>> pass jobs to, this makes laptop builds much more tolerable.
>>>
>>
>> What do you mean by build farm?
>> Do some offices have dedicated build machines?
>> Or is this "just" a lot of desktop machines wired up to work together?
> 
> Several offices are using icecream with whatever desktop machines happen
> to be available to improve build times. The London office was told last
> year that there's a plan to buy dedicated build machines to add to the
> cluster, but I am unaware if there is any progress or whether the idea
> was dropped.

What we did in the Toronto office was walk to people who ran Linux on
their desktop machines and install the icecream server on their
computers.  I suggest you do the same in London.  There is no need to
wait for dedicated build machines.  ;-)



Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Ehsan Akhgari
On 2017-03-07 2:49 PM, Eric Rahm wrote:
> I often wonder if unified builds are making things slower for folks who use
> ccache (I assume one file changing would mean a rebuild for the entire
> unified chunk), I'm not sure if there's a solution to that but it would be
> interesting to see if compiling w/o ccache is actually faster at this point.

Unified builds are the only way we build, so this is only of theoretical
interest.  But at any rate, if your use case is building the tree once
every couple of days, you should definitely disable ccache with or
without unified builds.  ccache is only helpful for folks who end up
compiling the same code over and over again (for example, if you use
interactive rebase a lot, or if you switch between branches, or use
other VCS commands that touch file modification times without actually
changing the contents).  Basically, if you don't switch between branches a
lot and don't write a lot of C++ code, ccache probably hurts you more
than it helps you.
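
If you want to measure this for your own workflow without editing the mozconfig,
one way is to bypass ccache for a single build and compare wall-clock times
(CCACHE_DISABLE is standard ccache behavior; the numbers are what matter):

  CCACHE_DISABLE=1 ./mach build    # ccache execs the real compiler and caches nothing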



Re: Is there a way to improve partial compilation times?

2017-03-08 Thread James Graham

On 08/03/17 11:11, Frederik Braun wrote:

On 08.03.2017 01:17, Ralph Giles wrote:

I second Jeff's point about building with icecream[1]. If you work in
an office with a build farm, or near a fast desktop machine you can
pass jobs to, this makes laptop builds much more tolerable.



What do you mean by build farm?
Do some offices have dedicated build machines?
Or is this "just" a lot of desktop machines wired up to work together?


Several offices are using icecream with whatever desktop machines happen 
to be available to improve build times. The London office was told last 
year that there's a plan to buy dedicated build machines to add to the 
cluster, but I am unaware if there is any progress or whether the idea 
was dropped.




Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Gabriele Svelto
On 08/03/2017 01:11, Mike Hommey wrote:
> You probably want a desktop machine, not a new laptop.

I second that; modern laptops are usually thermally limited. I actually
drilled holes in the back of my Thinkpad to improve airflow (and it did
improve build times).

My main box is a not-so-recent Xeon (E3-1270v2) and a clobber build is
usually 20-25 minutes. A more modern machine would be measurably faster
than that.

Another trick you could use is to run builds before you start working to
warm ccache. I've got a cronjob that runs all the builds I care about on
clean trees in the early morning, so when I get to work builds with my
patches applied run mostly out of ccache.
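
Something along these lines is all the crontab entry needs to be; the schedule,
path, and logging are assumptions rather than my exact setup:

  # m  h  dom mon dow  command
  30   5  *   *   1-5  cd $HOME/mozilla-central && hg pull -u && ./mach build > /dev/null 2>&1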

 Gabriele





Re: Is there a way to improve partial compilation times?

2017-03-08 Thread Frederik Braun
On 08.03.2017 01:17, Ralph Giles wrote:
> I second Jeff's point about building with icecream[1]. If you work in
> an office with a build farm, or near a fast desktop machine you can
> pass jobs to, this makes laptop builds much more tolerable.
>

What do you mean by build farm?
Do some offices have dedicated build machines?
Or is this "just" a lot of desktop machines wired up to work together?


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Boris Zbarsky

On 3/7/17 4:25 PM, Chris Peterson wrote:

Can you just nice mach?


I seem to recall trying that and it not helping enough (on MacOS) with 
the default "use -j8 on a 4-core machine" behavior.  YMMV based on OS, 
ratio of RAM to cores, and whatnot.
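
For concreteness, the kind of invocation being discussed; ionice is Linux-only,
and the exact niceness values are illustrative:

  nice -n 19 ./mach build                  # lowest CPU priority
  nice -n 19 ionice -c 3 ./mach build      # Linux: also put the build in the idle I/O class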


-Boris


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Ralph Giles
I second Jeff's point about building with icecream[1]. If you work in
an office with a build farm, or near a fast desktop machine you can
pass jobs to, this makes laptop builds much more tolerable. Despite
the warnings on the mdn page, I do this over the wan as well. It's a
lot slower than when I'm in the office, but still a lot faster than a
local-only build.

There's also a proposal to pull from the continuous integration build
cache[2]. That doesn't work yet though.

 -r

[1] 
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
[2] https://github.com/mozilla/sccache/issues/30

On Tue, Mar 7, 2017 at 3:50 PM,   wrote:
> On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
>> On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
>> clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
>> performance have not improved in 3 years), on Linux. 70 minutes is way
>> too much.
>
> Arch Linux.
>
> Sometimes I'll get down to 40min, but often it's 60.
>
> I'm going to try to remove ccache for the next rebuild and see how it affects 
> things.
>
> I may also have to request a new laptop although I was really hoping now to 
> have to for at least another year...
>
> zb.


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Mike Hommey
On Tue, Mar 07, 2017 at 03:50:56PM -0800, zbranie...@mozilla.com wrote:
> On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
> > On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> > clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> > performance have not improved in 3 years), on Linux. 70 minutes is way
> > too much.
> 
> Arch Linux.
> 
> Sometimes I'll get down to 40min, but often it's 60.
> 
> I'm going to try to remove ccache for the next rebuild and see how it affects 
> things.
> 
> I may also have to request a new laptop although I was really hoping now to 
> have to for at least another year...

You probably want a desktop machine, not a new laptop.

Mike


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
> On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> performance have not improved in 3 years), on Linux. 70 minutes is way
> too much.

Arch Linux.

Sometimes I'll get down to 40min, but often it's 60.

I'm going to try to remove ccache for the next rebuild and see how it affects 
things.

I may also have to request a new laptop, although I was really hoping not to 
have to for at least another year...

zb.


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Dave Townsend
70 minutes is about what a clobber build takes on my Surface Book. And yes
I agree, it is way too much!

On Tue, Mar 7, 2017 at 3:24 PM, Mike Hommey  wrote:

> On Tue, Mar 07, 2017 at 11:29:00AM -0800, zbranie...@mozilla.com wrote:
> > So,
> >
> > I'm on Dell XPS 13 (9350), and I don't think that toying with
> MOZ_MAKE_FLAGS will help me here. "-j4" seems to be a bit high and a bit
> slowing down my work while the compilation is going on, but bearable.
> >
> > I was just wondering if really two days of patches landing in Gecko
> should result in what seems like basically full rebuild.
> >
> > A clean build takes 65-70, a rebuild after two days of patches takes
> 50-60min.
>
> On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> performance have not improved in 3 years), on Linux. 70 minutes is way
> too much.
>
> Mike


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Randell Jesup
>On 07/03/17 20:29, zbranie...@mozilla.com wrote:
>
>> I was just wondering if really two days of patches landing in Gecko should 
>> result
>> in what seems like basically full rebuild.
>> 
>> A clean build takes 65-70, a rebuild after two days of patches takes 
>> 50-60min.
>
>That seems pretty normal to me nowadays.  My very vague impression is that
>this has gotten worse in the past few months.  Nowadays I assume that every
>m-i to m-c sync (twice a day) will entail more or less a full rebuild.

someone hits Clobber, or someone touches an MFBT or xpcom header, and
poof, full rebuild.

>I would also say that, if your machine takes 65-70 mins for a clean build,
>and that's on Linux, you need a faster machine.  I reckon on about 23 mins
>for gcc6 "-g -Og", and clang can do it in just about 20.  Linking is also
>a bottleneck -- you really need 16GB for sane linking performance.

Right.  My 5.5 year old desktop Linux machine (admittedly 1 (old) XEON,
16GB, SSD) does builds in around 25ish min (up from about 12 a couple of
years ago!)  I suspect the issue is the laptop has a lower-HP/MHz CPU
(maybe), and/or the cooling solution is causing thermal throttling -
mobile CPUs often can't run all-cores-flat-out for long in typical
laptop cooling.

Windows builds slower.  Of course.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Chris Peterson

On 3/7/2017 11:19 AM, Steve Fink wrote:

I have at times spun off builds into their own cgroup. It seems to
isolate the load pretty well, when I want to bother with remembering how
to set it up again. Perhaps it'd be a good thing for mach to do
automatically.

Then again, if dropping the -j count buys you responsiveness for only a
3-5% loss, then perhaps cgroups are not worth bothering with.


Can you just nice mach?


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Jeff Muizelaar
On Tue, Mar 7, 2017 at 2:29 PM,   wrote:
> So,
>
> I'm on Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS 
> will help me here. "-j4" seems to be a bit high and a bit slowing down my 
> work while the compilation is going on, but bearable.
>
> I was just wondering if really two days of patches landing in Gecko should 
> result in what seems like basically full rebuild.

Two days of patches landing requiring basically a full rebuild is not
surprising to me. All it takes is some changes in some frequently
included headers and then basically everything needs to be rebuilt.

-Jeff


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
So,

I'm on a Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS 
will help me here. "-j4" seems to be a bit high and slows my work down a bit 
while the compilation is going on, but it's bearable.

I was just wondering if really two days of patches landing in Gecko should 
result in what seems like basically full rebuild.

A clean build takes 65-70 min; a rebuild after two days of patches takes 50-60 min.

It seems like something is wrong; I'd expect such partial rebuilds to be 
quite fast. Is it that either ccache or our build system can't narrow down 
the set of things that actually require recompilation?

zb.


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Steve Fink

On 03/07/2017 11:10 AM, Boris Zbarsky wrote:

On 3/7/17 2:05 PM, Mike Conley wrote:

FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
will just choose the optimal number based on examining your processor 
cores.


Except mach's definition of "optimal" is "maybe optimize for compile 
throughput", not "optimize for doing anything else at all with your 
computer while compiling".


I had to manually set a lower -j value than it was picking, because it 
was impossible for me to load webpages (read: do reviews) or read 
emails while a build was running, due to mach spawning too many jobs.  :(


Now my builds are 1-2 minutes slower (read: 3-5%), but at least I can 
get something else done while they're running. 


I have at times spun off builds into their own cgroup. It seems to 
isolate the load pretty well, when I want to bother with remembering how 
to set it up again. Perhaps it'd be a good thing for mach to do 
automatically.


Then again, if dropping the -j count buys you responsiveness for only a 
3-5% loss, then perhaps cgroups are not worth bothering with.
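
A hedged sketch of the cgroup approach on a systemd-based Linux box; the
property names and limits are assumptions (older setups may need
cgcreate/cgexec instead):

  systemd-run --user --scope -p CPUShares=256 -p MemoryLimit=12G ./mach build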




Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Boris Zbarsky

On 3/7/17 2:05 PM, Mike Conley wrote:

FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
will just choose the optimal number based on examining your processor cores.


Except mach's definition of "optimal" is "maybe optimize for compile 
throughput", not "optimize for doing anything else at all with your 
computer while compiling".


I had to manually set a lower -j value than it was picking, because it 
was impossible for me to load webpages (read: do reviews) or read emails 
while a build was running, due to mach spawning too many jobs.  :(


Now my builds are 1-2 minutes slower (read: 3-5%), but at least I can 
get something else done while they're running.


-Boris


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Marco Bonardo
On Tue, Mar 7, 2017 at 8:05 PM, Mike Conley  wrote:

> FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
> will just choose the optimal number based on examining your processor
> cores.


From my experience the chosen value is too conservative; I think it
defaults to cpu_count(). What I usually use is cpu_count() * 1.5, which gives
me faster results.
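
Concretely, if cpu_count() reports 8 logical cores, that works out to something
like:

  mk_add_options MOZ_MAKE_FLAGS="-j12"    # cpu_count() * 1.5 with cpu_count() == 8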


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Jeff Muizelaar
Perhaps you need a faster computer(s). Are you building on Windows?
With icecream on Linux I can do a full clobber build in ~5 minutes.

-Jeff

On Tue, Mar 7, 2017 at 1:59 PM,   wrote:
> I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my 
> bookmarks on top of central every couple days.
>
> And every couple days the recompilation takes 50-65 minutes.
>
> Here's my mozconfig:
> ▶ cat mozconfig
> mk_add_options MOZ_MAKE_FLAGS="-j4"
> mk_add_options AUTOCLOBBER=1
> ac_add_options --with-ccache=/usr/bin/ccache
> ac_add_options --enable-optimize="-g -Og"
> ac_add_options --enable-debug-symbols
> ac_add_options --enable-debug
>
> Here's my ccache:
> ▶ ccache -s
> cache directory                     /home/zbraniecki/.ccache
> primary config                      /home/zbraniecki/.ccache/ccache.conf
> secondary config (readonly)         /etc/ccache.conf
> cache hit (direct)                  23811
> cache hit (preprocessed)             3449
> cache miss                          25352
> cache hit rate                      51.81 %
> called for link                      2081
> called for preprocessing              495
> compile failed                        388
> preprocessor error                    546
> bad compiler arguments                  8
> autoconf compile/link                1242
> no input file                         169
> cleanups performed                     42
> files in cache                      36965
> cache size                           20.0 GB
> max cache size                       21.5 GB
>
> And all I do is pull -u central, and `./mach build`.
>
> Today I updated from Sunday, it's two days of changes, and my recompilation 
> is taking 60 minutes already.
>
> I'd like to hope that there's some bug in my configuration rather than the 
> nature of things.
>
> Would appreciate any leads,
> zb.


Is there a way to improve partial compilation times?

2017-03-07 Thread zbraniecki
I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my 
bookmarks on top of central every couple days.

And every couple days the recompilation takes 50-65 minutes.

Here's my mozconfig:
▶ cat mozconfig 
mk_add_options MOZ_MAKE_FLAGS="-j4"
mk_add_options AUTOCLOBBER=1
ac_add_options --with-ccache=/usr/bin/ccache
ac_add_options --enable-optimize="-g -Og"
ac_add_options --enable-debug-symbols
ac_add_options --enable-debug

Here's my ccache:
▶ ccache -s
cache directory                     /home/zbraniecki/.ccache
primary config                      /home/zbraniecki/.ccache/ccache.conf
secondary config (readonly)         /etc/ccache.conf
cache hit (direct)                  23811
cache hit (preprocessed)             3449
cache miss                          25352
cache hit rate                      51.81 %
called for link                      2081
called for preprocessing              495
compile failed                        388
preprocessor error                    546
bad compiler arguments                  8
autoconf compile/link                1242
no input file                         169
cleanups performed                     42
files in cache                      36965
cache size                           20.0 GB
max cache size                       21.5 GB
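
One way to narrow down whether ccache is actually helping these rebuilds is to
zero its statistics right before one and read them back afterwards (standard
ccache flags; the per-rebuild hit rate is the interesting number):

  ccache -z      # zero the statistics counters
  ./mach build
  ccache -s      # stats now cover just this rebuild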

And all I do is pull -u central, and `./mach build`.

Today I updated from Sunday, it's two days of changes, and my recompilation is 
taking 60 minutes already.

I'd like to hope that there's some bug in my configuration rather than the 
nature of things.

Would appreciate any leads,
zb.