On Monday, 30 March 2015 at 22:55:50 UTC, H. S. Teoh wrote:
> On Mon, Mar 30, 2015 at 10:39:50PM +0000, lobo via Digitalmars-d wrote:
>> On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
>>> It seems like every DMD release makes compilation slower. This time I
>>> see 10.8s vs 7.8s on my little project. I know this is generally the
>>> least of concerns, and D1's lightning-fast times are long gone, but
>>> since Walter often claims D's superior compilation speeds, maybe some
>>> profiling is in order?
>>
>> I'm finding memory usage the biggest problem for me. A 3s speed
>> increase is not nice, but an increase of 500MB RAM usage with DMD
>> 2.067 over 2.066 means I can no longer build one of my projects.
> [...]
>
> Yeah, dmd memory consumption is way off the charts, because under the
> pretext of compile speed it never frees allocated memory.
> Unfortunately, the assumption that not managing memory == faster
> quickly becomes untrue once dmd runs out of RAM and the OS starts
> thrashing. Compile times skyrocket as everything gets stuck on I/O.
>
> This is one of the big reasons I can't use D on my work PC: it's an
> older machine with limited RAM, and when DMD is running the whole box
> slows down to an unusable crawl.
>
> This is not the first time this issue has been brought up, but it
> seems nobody on the compiler team cares enough to do anything about
> it. :-(
>
> T
I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!

Seriously though: what about allocating memory until some maximum threshold is hit (possibly configurable), then pausing compilation and freeing unreferenced memory at that point? That is garbage collection. I wonder if someone enterprising enough would be willing to try it out with DDMD by swapping the malloc calls for calls to D's GC, or something along those lines.