Based on a post I saw here, I thought it would be fun to see how much I could speed up compilation just by using per-file makefiles.

I decided to use DCD (server only) as the test bench, as it's slightly big with all of its dependencies and takes a medium amount of time to build. Plus it already uses make, so it was a prime candidate.

I quickly hacked in another rule to compile file-by-file (just used LDC's -od & -op flags, took about a minute), and got some surprising results that I thought might interest someone here. BTW, I use a 6-core CPU on my home computer. The usage seemed to drop off a cliff at ~450% CPU though, so on a stronger Intel CPU LDC might have pulled past DMD.
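For anyone curious what such a rule looks like, here's a minimal sketch. The variable names, source layout, and flags are placeholders, not DCD's actual makefile; the key idea is that -c compiles without linking, -od redirects object files into a directory, and -op preserves the source paths under it, so each source file becomes an independent make job:

```make
# Hypothetical per-file compile rule; paths and flags are illustrative only.
LDC    := ldc2
DFLAGS := -O -release
SRCS   := $(shell find src -name '*.d')
OBJS   := $(SRCS:%.d=obj/%.o)

# -c: compile only, don't link
# -od=obj: put object files under obj/
# -op: keep the source directory structure, so src/foo.d -> obj/src/foo.o
obj/%.o: %.d
	$(LDC) $(DFLAGS) -c -od=obj -op $<

dcd-server: $(OBJS)
	$(LDC) $(DFLAGS) -of=$@ $(OBJS)
```

Running `make -j6` then compiles up to six files at once, and touching a single source file rebuilds only its object before relinking.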

LDC baseline w/ optimization:  15.402 seconds
LDC baseline no optimization:  8.018  seconds
DMD w/ optimizations:          14.174 seconds
DMD no optimizations:          3.179  seconds
LDC -j6 w/ optimization:       7.708  seconds
LDC -j6 no optimization:       5.210  seconds


Weirdly, the resulting binary also seemed to start up faster.

LDC baseline:
./bin/dcd-server -I /usr/include/dlang/dmd
...
[info ] 49562 symbols cached.
[info ] Startup completed in 776.922 milliseconds.


LDC -j6:
./bin/dcd-server -I /usr/include/dlang/dmd
...
[info ] 49562 symbols cached.
[info ] Startup completed in 744.215 milliseconds.


This wasn't a random occurrence: startup was consistently about 30-40 ms shorter with the parallel version vs the baseline, with the same flags (except for the output flags). P.S., I actually got worse performance out of DMD by trying this. I think I remember reading Walter say something about DMD being optimized for having every file thrown at it at once. Per-file compilation does enable incremental builds, though.
