On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
On Fri, Jun 17, 2016 at 05:41:30AM +0000, Jason White via
Digitalmars-d-announce wrote: [...]
Where Make gets slow is when checking for changes on a ton of
files. I haven't tested it, but I'm sure Button is faster than
Make in this case because it checks for changed files using
multiple threads. Using the file system watcher can also bring
this down to a near-zero time.
IMO using the file system watcher is the way to go. It's the
only way to beat the O(n) pause at the beginning of a build as
the build system scans for what has changed.
See, I used to think that, then I measured. tup uses FUSE for
this, and that's exactly why it's fast. I was considering a
similar approach for the reggae binary backend, so I went and
timed make, tup, ninja and reggae itself on a synthetic project.
Basically I wrote a program to write out source files to be
compiled, with a runtime parameter indicating how many source
files to write.
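The generator itself isn't shown in the post, but a minimal sketch of that kind of program (hypothetical file names and layout, emitting trivial C sources) might look like:

```python
import os
import sys

def generate_sources(n, out_dir="synthetic_src"):
    """Write n trivial C source files, one function each."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n):
        path = os.path.join(out_dir, f"file_{i}.c")
        with open(path, "w") as f:
            f.write(f"int func_{i}(void) {{ return {i}; }}\n")
    return out_dir

if __name__ == "__main__":
    # The number of files is a runtime parameter, as described above.
    generate_sources(int(sys.argv[1]) if len(sys.argv) > 1 else 10)
```

Pointing each build system's rules at the generated directory then makes no-op build times directly comparable for any project size.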
The most extensive test I did was on a synthetic project of 30k
source files. That's far larger than the vast majority of
developers are ever likely to work on; for comparison, version
2.6.11 of the Linux kernel had around 17k files.
A no-op build on my laptop was about (from memory):
ninja, binary: 1.3s
It turns out that just stat'ing everything is fast enough for
pretty much everybody, so I just kept the simple algorithm. Bear
in mind the Makefiles here were the simplest possible - doing
anything that usually goes on in Makefileland would have made it
far, far slower. I know: I converted a build system at work from
make to hand-written ninja, and its no-op builds went from nearly
2 minutes to 1s.
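The "simple algorithm" here is just: stat every file and compare its mtime against a snapshot saved by the previous build. A minimal sketch (the dict-based snapshot is my assumption, not reggae's actual storage format):

```python
import os

def changed_files(paths, saved_mtimes):
    """Return the files whose mtime differs from the stored snapshot.

    saved_mtimes maps path -> mtime_ns recorded after the last build.
    One stat() per file is cheap enough that even tens of thousands
    of files finish in about a second, which is the point above.
    """
    changed = []
    for path in paths:
        mtime = os.stat(path).st_mtime_ns
        if saved_mtimes.get(path) != mtime:
            changed.append(path)
    return changed
```

A watcher-based design avoids the scan entirely, but as the numbers above suggest, the scan is rarely the bottleneck in practice.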
If you happen to be unlucky enough to work on a project so large
you need to watch the file system, then use the tup backend I