On 10.03.2013 15:07, Vladimir Panteleev wrote:
On Sunday, 10 March 2013 at 13:35:34 UTC, Rainer Schuetze wrote:
Looks pretty OK, but considering the number of modules in DFeed (I
count about 24) and that they are not very large, that puts the
compilation time for each module at about 1 second. Compiling module
by module will only be faster than a full build if the number of
modules to recompile does not exceed roughly twice the number of
cores available.
~/DFeed$ cat all.txt | wc -l
62
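To make the core-count argument above concrete, here is a toy model (the numbers are hypothetical: 1 s per module, an assumed 8-core machine) comparing how long a parallel per-module rebuild takes for the full 62 modules versus a small incremental change:

```python
import math

def parallel_rebuild_time(modules, cores, per_module=1.0):
    """Per-module compiles run in parallel; wall-clock time is bounded
    by the number of 'waves' of jobs needed to feed all cores."""
    return math.ceil(modules / cores) * per_module

# Rebuilding all 62 modules at ~1 s each on an 8-core machine:
print(parallel_rebuild_time(62, 8))  # 8 waves -> 8.0 s
# Touching only 3 modules:
print(parallel_rebuild_time(3, 8))   # 1 wave -> 1.0 s
```

This ignores dependency ordering and linking, but it shows the point of the thread: once the number of dirty modules is small relative to the core count, per-module compilation wins; when most modules are dirty, the 1 s/module cost dominates.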
Ah, I didn't notice the ae link. I was already suspecting that 1
second/module was a bit long.
I think it does not scale well with increasing numbers of modules.
Why? Wouldn't it scale linearly? Or do you mean due to the increased
number of graph edges as the number of graph nodes grows?
I assume that as a project grows, module dependencies also increase.
So each single-module compile will get slower as the number of
modules grows.
A full build scales linearly with code size, but single-file
compilation time increases faster (as a function of code size, module
count and dependencies), since each compile must also process the
modules it imports.
Anyway, the programmer can take steps to lessen intermodule
dependencies and thereby reduce incremental build times. That's not
an option when compiling everything at once, unless you split the
code manually into libraries.
True.