On 11/26/14 10:58 AM, Thomas Zimmermann wrote:
Hi

On 26.11.2014 at 17:35, Michael Shal wrote:
Would it make sense to check in some of the libraries we build that we very
rarely change, and that don’t have a lot of configure dependencies people
twiddle with? (icu, pixman, cairo, vp8, vp9). This could speed up build
times in our infrastructure and for developers. This doesn’t have to be in
mozilla-central. mach could pick up a matching binary for the current
configuration from github or similar. Has anyone looked into this?

If the code for the library isn't changing, it's the build system's 
responsibility to ensure that nothing is done. One of the problems is that the 
build system we use (make) is so broken that we have to clobber frequently.

That's not true. We use CLOBBER because the build scripts are broken,
not make or the concepts behind make.

I once worked on a piece of software with similar requirements: tons of
auto-generated code and a make-based build system. The Makefiles were
badly written and didn't track dependencies correctly. Consequently we
ran into exactly the same problems as with Gecko: we sometimes had to
clean up dependency files manually and often rebuilt too many files.

Once we fixed the Makefiles, building got fast and we never again had to
fix any dependencies by hand.
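
The fix was essentially to let the compiler emit the dependency information
itself instead of maintaining it by hand. A minimal sketch of the pattern
(assuming gcc/clang and a hypothetical single-directory project; not our
actual Makefiles, and not Gecko's):

  SRCS := $(wildcard *.c)
  OBJS := $(SRCS:.c=.o)

  prog: $(OBJS)
  	$(CC) $(LDFLAGS) -o $@ $(OBJS)

  # -MMD writes a .d file listing every header this .c actually included;
  # -MP adds phony targets so a deleted header doesn't break the next run
  %.o: %.c
  	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

  # pull the generated dependency files into the DAG
  -include $(SRCS:.c=.d)

With that in place, touching a header rebuilds exactly the objects that
include it, nothing more and nothing less.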


For non-clobber builds, at least in our infra, caching can still help by 
sharing objects among machines (e.g. for a newly spun-up AWS instance with no 
previous objdir). However, caching still doesn't prevent make from doing lots 
of unnecessary work (reading Makefiles, building a DAG, and stat()ing files) for 
things that haven't changed. In other words, if icu hasn't changed, the ideal 
incremental build time for that component is zero, but with make it will always 
be more than that.
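
That fixed overhead is easy to observe on any objdir that is already up to
date (illustrative commands only; the objdir path is whatever yours is):

  $ make -C objdir          # bring everything up to date
  $ time make -C objdir     # "null" build: nothing to do, but make still
                            # re-reads every Makefile, rebuilds its DAGs,
                            # and stat()s every prerequisite before it can
                            # tell you so

Whatever `time` reports there is the floor for every incremental build, no
matter how small the change.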

This seems like it would speed up first-build and clobber build times, but
at least for me, it's incremental build performance I care about.

gps/glandium have some more fixes in the works, but unfortunately make wasn't 
designed to scale to projects of this size.

Make is fine and we're not the only project of this size that uses make.
The Linux kernel also does and achieves way better results here. The
problem is in our build scripts.

No, make is not fine. make is not capable of handling a single DAG the size of a large project like Firefox or Linux. That is a fact and not up for debate. Mike Shal can show you numbers.

Large projects hack around the scaling limitations of make by establishing multiple make contexts / DAGs. This is what Linux and Firefox do. Count how many invocations of `make` there are in both projects (hint: hundreds).

Any time you split the DAG, you lose the dependencies that cross the split, and you have to reconstruct them manually through a custom traversal order. Again, this is what Firefox and other large projects do.
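
In the abstract the pattern looks like this (a sketch of recursive make in
general; the directory names are made up and these are not our actual
rules):

  # each subdirectory is its own make context with its own private DAG
  SUBDIRS := libA libB app

  all:
  	set -e; for d in $(SUBDIRS); do $(MAKE) -C $$d; done

  # libB uses headers from libA and app links against both, but make never
  # sees those edges: they exist only in the hand-maintained ordering of
  # SUBDIRS above

If someone adds a dependency that crosses the split in a direction the
traversal order doesn't anticipate, no Makefile will catch it; you find out
through a broken or stale build.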

We have many inefficiencies in the way we do traversal. But as long as there are separate DAGs and we are using a build system that doesn't know how to clean up orphaned artifacts (make doesn't), we run into the possibility of clobbers being required.
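
Orphaned artifacts are the part people tend to underestimate. A contrived
sketch of how they bite (hypothetical rule, nothing from our tree):

  # if bar.c is deleted or renamed, its stale bar.o stays in the objdir;
  # make has no record of having produced it, so nothing ever removes it,
  # and this wildcard happily links the dead object right back in
  OBJS := $(wildcard *.o)

  prog: $(OBJS)
  	$(CC) -o $@ $(OBJS)

The only reliable way to recover with make is to blow away the objdir,
i.e. a clobber.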

Modern build tools like Tup do not have these limitations. They can handle an insanely large DAG just fine. And they can clean up orphaned artifacts and integrate artifact caching into their build process.
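
For comparison, a complete Tupfile for the same toy C program is two rules
(a sketch based on Tup's documented rule syntax, not a drop-in for our
build):

  # Tup records exactly which files each command reads and writes, keeps a
  # single global DAG, and deletes outputs whose rules disappear
  : foreach *.c |> gcc -c %f -o %o |> %B.o
  : *.o |> gcc -o %o %f |> prog

And with its file monitor running, a no-op build doesn't even need to
re-scan the tree.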

Make is not fine, and anyone who thinks otherwise is fortunate to have never had to maintain a large or complex project while supporting anything resembling a modern and productive workflow.
