Re: DMD compilation speed

2016-02-11 Thread Andrea Fontana via Digitalmars-d
On Thursday, 11 February 2016 at 05:38:54 UTC, Andrew Godfrey 
wrote:
I just upgraded from DMD 2.065.0 (so about 2 years old) to 
2.070.0, and noticed a difference in compilation speed. I'll 
detail what I see, in case it's interesting, but really I just 
want to ask: what should I expect? I know that DMD is now 
self-hosting, and I know there's a tradeoff between compilation 
speed and speed of generated code. Or maybe it's just a tweak 
to default compiler options.

So maybe this is completely known.


Check this:
http://digger.k3.1azy.net/trend/




Re: DMD compilation speed

2016-02-11 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 11 February 2016 at 08:37:29 UTC, Andrea Fontana 
wrote:

Check this:
http://digger.k3.1azy.net/trend/


Cool, why did the peak heap size during compilation drop from 
approx. 180MB to 30MB?


Re: DMD compilation speed

2016-02-11 Thread Andrea Fontana via Digitalmars-d
On Thursday, 11 February 2016 at 08:46:19 UTC, Ola Fosheim 
Grøstad wrote:
On Thursday, 11 February 2016 at 08:37:29 UTC, Andrea Fontana 
wrote:

Check this:
http://digger.k3.1azy.net/trend/


Cool, why did the peak heap size during compilation drop from 
approx. 180MB to 30MB?


Zooming in on the graph, you can see that the improvement was due to this:
https://github.com/D-Programming-Language/dmd/pull/4923



Re: DMD compilation speed

2016-02-11 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 11 February 2016 at 08:57:22 UTC, Andrea Fontana 
wrote:
Zooming in on the graph, you can see that the improvement was 
due to this:

https://github.com/D-Programming-Language/dmd/pull/4923


But why? Is it using the GC, or what?



Re: DMD compilation speed

2016-02-11 Thread Andrew Godfrey via Digitalmars-d
On Thursday, 11 February 2016 at 08:37:29 UTC, Andrea Fontana 
wrote:


Check this:
http://digger.k3.1azy.net/trend/


Very nice!


Re: DMD compilation speed

2016-02-11 Thread Chris Wright via Digitalmars-d
On Thu, 11 Feb 2016 09:04:22 +0000, Ola Fosheim Grøstad wrote:

> On Thursday, 11 February 2016 at 08:57:22 UTC, Andrea Fontana wrote:
>> Zooming on graph you can see that the improvement was due to this:
>> https://github.com/D-Programming-Language/dmd/pull/4923
> 
> But why? Is it using the GC, or what?

From dmd/src/mars.d:

version (GC)
{
}
else
{
    GC.disable();
}

And the compiler is still not compiled with -version=GC by default, as 
far as I can determine.

The talks I've heard suggested that DMD may hide pointers where the GC 
can't find them (a relic from the C++ days), producing crashes and memory 
corruption when the GC is enabled. So the GC was disabled for stability.
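For readers who haven't seen it, the same allocate-only mode is available to any D program through druntime; this is a minimal sketch of what the `else` branch above amounts to, in ordinary user code rather than dmd's:

```d
import core.memory : GC;

// Allocate-only mode, like dmd: the GC heap still serves allocations,
// but collections never run, so hidden pointers cannot cause premature
// frees -- at the cost of memory that is never reclaimed.
size_t allocateWithoutCollecting(size_t count)
{
    GC.disable();                  // suppress collections
    scope (exit) GC.enable();      // restore normal collection on exit
    size_t total;
    foreach (i; 0 .. count)
    {
        auto node = new int[](64); // never collected while disabled
        total += node.length;
    }
    return total;
}
```

With collections suppressed, a hidden pointer can't cause a premature free, which matches the stability rationale above.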

That trend of memory reduction holds for the other samples, so this is 
probably something to do with the compiler rather than, say, writefln 
changing implementation to require much less memory to compile. But I'm 
not sure what might have caused it.


Re: DMD compilation speed

2016-02-10 Thread Joakim via Digitalmars-d
On Thursday, 11 February 2016 at 05:38:54 UTC, Andrew Godfrey 
wrote:
I just upgraded from DMD 2.065.0 (so about 2 years old) to 
2.070.0, and noticed a difference in compilation speed. I'll 
detail what I see, in case it's interesting, but really I just 
want to ask: what should I expect? I know that DMD is now 
self-hosting, and I know there's a tradeoff between compilation 
speed and speed of generated code. Or maybe it's just a tweak 
to default compiler options.

So maybe this is completely known.

[...]


I don't think it's unexpected, though it is something to keep 
working on beating back.


Re: DMD compilation speed

2015-04-09 Thread John Colvin via Digitalmars-d

On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:

On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally the least of concerns, and D1's lightning-fast times 
are long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?


Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant 
vigilance.


I just did some profiling of building phobos. I noticed ~20% of 
the runtime and ~40% of the L2 cache misses were in slist_reset. 
Is this expected?


Re: DMD compilation speed

2015-04-09 Thread Gary Willoughby via Digitalmars-d

On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:

On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally the least of concerns, and D1's lightning-fast times 
are long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?


Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant 
vigilance.


Are there any plans to fix this up in a point release? The 
compile times have really taken a nose dive in v2.067. It's 
really taken the fun out of the language.


Re: DMD compilation speed

2015-04-09 Thread w0rp via Digitalmars-d

On Wednesday, 1 April 2015 at 02:25:44 UTC, Random D-user wrote:
I've used D's GC with DDMD.  It works*, but you're trading 
better memory usage for worse allocation speed.  It's quite 
possible we could add a switch to ddmd to enable the GC.


As a random d-user (who cares about perf/speed and just 
happened to read this), a switch sounds VERY good to me. I 
don't want to pay the price of GC because of some low-end 
machines. Memory is really cheap these days and pretty much 
every machine is 64-bit (even phones are transitioning fast to 
64-bit).


Also, I wanted to add that freeing (at least to the OS (does 
this apply to GC?)) isn't exactly free either. In fact it can 
be more costly than mallocing.
Here's an enlightening article: 
https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/


I think a switch would be good. My main reason for asking for 
such a thing isn't for performance (not directly), it's for being 
able to compile some D programs on computers with less memory. 
I've had machines with 1 or 2 GB of memory on them, wanted to 
compile a D program, DMD ran out of memory, and the compiler 
crashed. You can maybe start swapping on disk, but that won't be 
too great.


Re: DMD compilation speed

2015-04-01 Thread Daniel Murphy via Digitalmars-d
Jake The Baker  wrote in message 
news:bmwxxjmcoszhbexot...@forum.dlang.org...


As far as memory is concerned, how hard would it be to simply have DMD use 
a swap file? This would fix the out-of-memory issues and provide some 
safety (at least you can get your project to compile). Seems like it would 
be a relatively simple thing to add?


It seems unlikely that having dmd use its own swap file would perform better 
than the operating system's implementation. 



Re: DMD compilation speed

2015-04-01 Thread deadalnix via Digitalmars-d

On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:

On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:

On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

GC because of some low-end machines. Memory is really cheap 
these days and pretty much every machine is 64-bit (even phones 
are transitioning fast to 64-bit).


this is the essence of modern computing, btw. hey, we have this 
resource! hey, we have the only program the user will ever want 
to run, so assume that all that resource is ours! what? just 
buy a better box!


google/mozilla's developer mantra regarding web browsers.


They must have an agreement with a DRAM vendor, I see no other 
explanation...


Re: DMD compilation speed

2015-04-01 Thread Daniel Murphy via Digitalmars-d

lobo  wrote in message news:vydmnbzapttzjnnct...@forum.dlang.org...

I think the main culprit now is my attempts to (ab)use CTFE. After 
switching to DMD 2.066 I started adding `enum val=f()` where I could. 
After reading the discussions here I went about reverting most of these 
back to `auto val=blah` and I'm building again :-)


DMD 2.067 is now maxing out at ~3.8GB and stable.


Yeah, the big problem is that dmd's interpreter sort of evolved out of the 
constant folder, and wasn't designed for CTFE.  A new interpreter for dmd is 
one of the projects I hope to get to after DDMD is complete, unless somebody 
beats me to it. 
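To make the cost concrete: whether that interpreter runs at all is decided by where a value is demanded at compile time. A small illustration (the function here is hypothetical, not from any project in this thread):

```d
// A value demanded at compile time (an enum initializer) forces the
// CTFE interpreter to run, allocating nodes for every intermediate
// value -- and dmd never frees them during the compilation.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

enum atCompileTime = fib(25);       // evaluated by CTFE while compiling

int atRunTime() { return fib(25); } // same work at run time:
                                    // no compile-time memory cost
```

Reverting `enum` initializers to run-time variables, as lobo describes elsewhere in this thread, trades a little run-time work for a much smaller compile-time memory footprint.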



Re: DMD compilation speed

2015-04-01 Thread ketmar via Digitalmars-d
On Wed, 01 Apr 2015 06:21:58 +0000, deadalnix wrote:

 On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:
 On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:
 On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap these
 days and pretty much every machine is 64-bit (even phones are
 transitioning fast to 64-bit).

 this is the essence of modern computing, btw. hey, we have this
 resource! hey, we have the only program the user will ever want to
 run, so assume that all that resource is ours! what? just buy a
 better box!

 google/mozilla's developer mantra regarding web browsers.
 
 They must have an agreement with a DRAM vendor, I see no other
 explanation...

maybe vendors are just giving 'em free DRAM chips...



Re: DMD compilation speed

2015-03-31 Thread Andrei Alexandrescu via Digitalmars-d

On 3/30/15 7:41 PM, Vladimir Panteleev wrote:

On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu wrote:

Part of our acceptance tests should be peak memory, object file size,
executable file size, and run time for building a few test programs
(starting with hello, world). Any change in these must be
investigated, justified, and documented. -- Andrei


I have filed this issue today:

https://issues.dlang.org/show_bug.cgi?id=14381


The current situation is a shame. I appreciate the free service we're 
getting, but sometimes you just can't afford the free stuff. -- Andrei




Re: DMD compilation speed

2015-03-31 Thread Vladimir Panteleev via Digitalmars-d

On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:
i think that DDMD can start with GC turned off, and 
automatically turn it on when RAM consumption goes over 1GB, 
for example. this way small-sized (and even middle-sized) 
projects without heavy CTFE will still enjoy the "no-free is 
fast" strategy, and big projects will not eat the whole box's 
RAM.


Recording the information necessary to free memory costs 
performance (and more memory) itself. With a basic 
bump-the-pointer scheme, you don't need to worry about page sizes 
or free lists or heap fragmentation - all allocated data is 
contiguous, there is no metadata, and you can't back out of that.
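A sketch of the bump-the-pointer scheme described above (illustrative only; dmd's actual allocator differs in detail):

```d
// Bump-the-pointer region: allocation is a bounds check plus an
// increment. There is no per-allocation metadata, so individual
// allocations can never be freed -- only the whole region can go.
struct BumpRegion
{
    ubyte[] buffer; // one contiguous chunk
    size_t used;    // the "bump" pointer, kept as an offset

    void* alloc(size_t size)
    {
        size = (size + 15) & ~cast(size_t) 15; // keep 16-byte alignment
        if (used + size > buffer.length)
            return null;                       // region exhausted
        auto p = buffer.ptr + used;
        used += size;
        return p;
    }
}
```

Note what is absent: no free list, no per-block header, no fragmentation to manage, and therefore nothing to "back out of" once you commit to the scheme.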


Re: DMD compilation speed

2015-03-31 Thread ketmar via Digitalmars-d
On Tue, 31 Mar 2015 05:57:45 +0000, Vladimir Panteleev wrote:

 On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:
 i think that DDMD can start with GC turned off, and automatically turn
 it on when RAM consumption goes over 1GB, for example. this way
 small-sized (and even middle-sized) projects without heavy CTFE will
 still enjoy the "no-free is fast" strategy, and big projects will not
 eat the whole box's RAM.
 
 Recording the information necessary to free memory costs performance
 (and more memory) itself. With a basic bump-the-pointer scheme, you
 don't need to worry about page sizes or free lists or heap fragmentation
 - all allocated data is contiguous, there is no metadata, and you can't
 back out of that.

TANSTAAFL, alas. yet without `free()` there are no free lists to scan and 
so on, so it can be almost as fast as bump-the-pointer. the good thing is 
that the user doesn't have to do the work that the machine can do for him, 
i.e. thinking about how to invoke the compiler -- with GC or without GC.



Re: DMD compilation speed

2015-03-31 Thread Jacob Carlborg via Digitalmars-d

On 2015-03-31 01:28, w0rp wrote:


I sometimes think DMD's memory should be... garbage collected. I used
the forbidden phrase!


Doesn't DMD already have a GC that is disabled?

--
/Jacob Carlborg


Re: DMD compilation speed

2015-03-31 Thread ketmar via Digitalmars-d
On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Has anyone looked at how msvc, for example, compiles really big files?
 I've never seen it go over 200 MB. And it is written in C++, so no GC.
 And it compiles very quickly.

and it has no CTFE, so...

CTFE is a big black hole that eats memory like crazy.



Re: DMD compilation speed

2015-03-31 Thread Temtaime via Digitalmars-d

*use pools...


Re: DMD compilation speed

2015-03-31 Thread Daniel Murphy via Digitalmars-d
Vladimir Panteleev  wrote in message 
news:remgknxogqlfwfnsu...@forum.dlang.org...


Recording the information necessary to free memory costs performance (and 
more memory) itself. With a basic bump-the-pointer scheme, you don't need 
to worry about page sizes or free lists or heap fragmentation - all 
allocated data is contiguous, there is no metadata, and you can't back out 
of that.


It's possible that we could use a hybrid approach, where a GB or so is 
allocated from the GC in one chunk, then filled up using a bump-pointer 
allocator.  When that's exhausted, the GC can start being used as normal for 
the rest of the compilation.  The big chunk will obviously never be freed, 
but the GC still has a good chance to keep memory usage under control.  (on 
64-bit at least) 
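Sketched in code, that hybrid might look like this (all names hypothetical; nothing like it exists in dmd today):

```d
import core.memory : GC;

// Fast path: bump-allocate inside one big chunk that is never freed.
// Slow path: once the chunk is full, fall back to ordinary GC
// allocation so total memory use stays under the GC's control.
struct HybridHeap
{
    ubyte[] chunk; // reserved up front (a GB or so on 64-bit)
    size_t used;

    void* alloc(size_t size)
    {
        size = (size + 15) & ~cast(size_t) 15;
        if (used + size <= chunk.length)
        {
            auto p = chunk.ptr + used;
            used += size;           // pointer bump, no GC bookkeeping
            return p;
        }
        return GC.malloc(size);     // chunk exhausted: GC takes over
    }
}
```

Small compilations never leave the fast path; only the runs that would otherwise exhaust memory pay for collection.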



Re: DMD compilation speed

2015-03-31 Thread Temtaime via Digitalmars-d
Has anyone looked at how msvc, for example, compiles really big 
files?
I've never seen it go over 200 MB. And it is written in C++, so 
no GC. And it compiles very quickly.
I think DMD should be refactored to free memory, use pools, and 
other such techniques.


Re: DMD compilation speed

2015-03-31 Thread Daniel Murphy via Digitalmars-d
Jacob Carlborg  wrote in message news:mfe0dm$2i6l$1...@digitalmars.com... 


Doesn't DMD already have a GC that is disabled?


It did once, but it's been gone for a while now.


Re: DMD compilation speed

2015-03-31 Thread Random D-user via Digitalmars-d
I've used D's GC with DDMD.  It works*, but you're trading 
better memory usage for worse allocation speed.  It's quite 
possible we could add a switch to ddmd to enable the GC.


As a random d-user (who cares about perf/speed and just happened 
to read this), a switch sounds VERY good to me. I don't want to 
pay the price of GC because of some low-end machines. Memory is 
really cheap these days and pretty much every machine is 64-bit 
(even phones are transitioning fast to 64-bit).


Also, I wanted to add that freeing (at least to the OS (does this 
apply to GC?)) isn't exactly free either. In fact it can be more 
costly than mallocing.
Here's an enlightening article: 
https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/


Re: DMD compilation speed

2015-03-31 Thread ketmar via Digitalmars-d
On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap these days
 and pretty much every machine is 64-bit (even phones are transitioning
 fast to 64-bit).

this is the essence of modern computing, btw. hey, we have this 
resource! hey, we have the only program the user will ever want to run, 
so assume that all that resource is ours! what? just buy a better box!



Re: DMD compilation speed

2015-03-31 Thread lobo via Digitalmars-d

On Wednesday, 1 April 2015 at 02:54:48 UTC, Jake The Baker wrote:

On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker 
wrote:
As far as memory is concerned, how hard would it be to simply 
have DMD use a swap file?


That'd hit the same walls as the operating system trying to 
use a swap file at least - running out of address space, and 
being brutally slow even if it does keep running.


I doubt it. If most modules are sparsely used it would improve
memory usage in proportion to that.

Basically if D would monitor file/module usage and compile areas
that are relatively independent, it should minimize disk usage.
Basically page out stuff you know won't be needed. If it was
smart enough it could order the data through module usage and
compile the independent ones first, then only the ones that are
simple dependencies, etc.

The benefit of such a system is that larger projects get the
biggest boost (there are more independent modules floating
around). Hence at some point it becomes a non-issue.


I have no idea what you're talking about here, sorry.

I'm compiling modules separately already to object files. I think 
that helps reduce memory usage but I could be mistaken.


I think the main culprit now is my attempts to (ab)use CTFE. 
After switching to DMD 2.066 I started adding `enum val=f()` 
where I could. After reading the discussions here I went about 
reverting most of these back to `auto val=blah` and I'm 
building again :-)


DMD 2.067 is now maxing out at ~3.8GB and stable.

bye,
lobo





Re: DMD compilation speed

2015-03-31 Thread Jake The Baker via Digitalmars-d

On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:

On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
As far as memory is concerned, how hard would it be to simply 
have DMD use a swap file?


That'd hit the same walls as the operating system trying to use 
a swap file at least - running out of address space, and being 
brutally slow even if it does keep running.


I doubt it. If most modules are sparsely used it would improve
memory usage in proportion to that.

Basically if D would monitor file/module usage and compile areas
that are relatively independent, it should minimize disk usage.
Basically page out stuff you know won't be needed. If it was
smart enough it could order the data through module usage and
compile the independent ones first, then only the ones that are
simple dependencies, etc.

The benefit of such a system is that larger projects get the
biggest boost (there are more independent modules floating
around). Hence at some point it becomes a non-issue.


Re: DMD compilation speed

2015-03-31 Thread weaselcat via Digitalmars-d

On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:

On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

GC because of some low-end machines. Memory is really cheap 
these days and pretty much every machine is 64-bit (even phones 
are transitioning fast to 64-bit).


this is the essence of modern computing, btw. hey, we have this 
resource! hey, we have the only program the user will ever want 
to run, so assume that all that resource is ours! what? just 
buy a better box!


google/mozilla's developer mantra regarding web browsers.


Re: DMD compilation speed

2015-03-31 Thread Martin Nowak via Digitalmars-d
On 03/31/2015 05:51 AM, deadalnix wrote:
 Yes, I'd expect the compiler to perform significantly better with GC than
 with other memory management strategies. Ironically, I think that weighed
 a bit too much in favor of GC in language design in the general case.

Why? Compilers use a lot of long-lived data structures (AST, metadata)
which is particularly bad for a conservative GC.
Any evidence to the contrary?


Re: DMD compilation speed

2015-03-31 Thread Adam D. Ruppe via Digitalmars-d

On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
As far as memory is concerned, how hard would it be to simply 
have DMD use a swap file?


That'd hit the same walls as the operating system trying to use a 
swap file at least - running out of address space, and being 
brutally slow even if it does keep running.


Re: DMD compilation speed

2015-03-31 Thread Temtaime via Digitalmars-d
I don't use CTFE in my game engine and DMD uses about 600 MB 
memory per file for instance.


Re: DMD compilation speed

2015-03-31 Thread deadalnix via Digitalmars-d

On Tuesday, 31 March 2015 at 19:19:23 UTC, Martin Nowak wrote:

On 03/31/2015 05:51 AM, deadalnix wrote:
Yes, I'd expect the compiler to perform significantly better 
with GC than with other memory management strategies. 
Ironically, I think that weighed a bit too much in favor of GC 
in language design in the general case.


Why? Compilers use a lot of long-lived data structures (AST, 
metadata)

which is particularly bad for a conservative GC.
Any evidence to the contrary?


The graph is not acyclic, which makes it even worse for anything 
else.


Re: DMD compilation speed

2015-03-31 Thread Jake The Baker via Digitalmars-d

On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:

On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
wrote:
It seems like every DMD release makes compilation slower. 
This time I see 10.8s vs 7.8s on my little project. I know 
this is generally the least of concerns, and D1's 
lightning-fast times are long gone, but since Walter often 
claims D's superior compilation speeds, maybe some profiling 
is in order?


I'm finding memory usage the biggest problem for me. 3s speed 
increase is not nice but an increase of 500MB RAM usage with 
DMD 2.067 over 2.066 means I can no longer build one of my 
projects.


bye,
lobo


I should add that I am on a 32-bit machine with 4GB RAM. I just 
ran some tests measuring RAM usage:


DMD 2.067 ~4.2GB (fails here so not sure of the full amount 
required)

DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway but this trend of 
more RAM usage seems to also be occurring with each DMD release.


bye,
lobo


As far as memory is concerned, how hard would it be to simply 
have DMD use a swap file? This would fix the out-of-memory 
issues and provide some safety (at least you can get your 
project to compile). Seems like it would be a relatively simple 
thing to add?


Re: DMD compilation speed

2015-03-31 Thread deadalnix via Digitalmars-d

On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:

On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

Has anyone looked at how msvc, for example, compiles really 
big files?
I've never seen it go over 200 MB. And it is written in C++, 
so no GC.

And it compiles very quickly.


and it has no CTFE, so...

CTFE is a big black hole that eats memory like crazy.


I'm going to propose again the same thing as in the past:
 - before CTFE, switch pools.
 - run CTFE in the new pool.
 - deep copy the result from the CTFE pool to the main pool.
 - ditch the CTFE pool.
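A toy demonstration of that lifecycle (hypothetical names throughout; dmd has no such API): temporaries fill a scratch pool, survivors are deep-copied into the main pool, and the scratch pool is reset wholesale:

```d
// Minimal pool with wholesale reset -- enough to show the proposed
// CTFE-pool lifecycle, not a real compiler allocator.
struct Pool
{
    ubyte[] mem;
    size_t used;

    T[] alloc(T)(size_t n)
    {
        auto bytes = n * T.sizeof;
        assert(used + bytes <= mem.length, "pool exhausted");
        auto slice = cast(T[]) mem[used .. used + bytes];
        used += bytes;
        return slice;
    }

    void reset() { used = 0; } // "ditch" every temporary at once
}

// Stand-in for one CTFE evaluation: temporaries go to the scratch
// pool; only the final result is deep-copied into the main pool.
int[] evalInPool(ref Pool ctfe, ref Pool mainPool)
{
    auto scratch = ctfe.alloc!int(100);    // interpreter temporaries
    scratch[0 .. 3] = [1, 2, 3];
    auto result = mainPool.alloc!int(3);   // deep copy the survivors
    result[] = scratch[0 .. 3];
    ctfe.reset();                          // all temporaries gone
    return result;
}
```

One caveat: a single long-running CTFE call still fills the scratch pool before any reset can happen.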


Re: DMD compilation speed

2015-03-31 Thread ketmar via Digitalmars-d
On Tue, 31 Mar 2015 18:24:48 +0000, deadalnix wrote:

 On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
 On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Has anyone looked at how msvc, for example, compiles really big files?
 I've never seen it go over 200 MB. And it is written in C++, so no GC.
 And it compiles very quickly.

 and it has no CTFE, so...

 CTFE is a big black hole that eats memory like crazy.
 
 I'm going to propose again the same thing as in the past:
   - before CTFE, switch pools.
   - run CTFE in the new pool.
   - deep copy the result from the CTFE pool to the main pool.
   - ditch the CTFE pool.

this won't really help long CTFE calls (like building a parser based on a 
grammar, for example, as this is one very long call). it will slow down 
simple CTFE calls though.

it *may* help, but i'm looking at my Life sample, for example, and see 
that it eats all my RAM while parsing a big .lif file. it has to do that 
in one call, as there is no way to enumerate existing files in a directory 
and process them sequentially -- as there is no way to store state between 
CTFE calls, so i can't even create numbered arrays with data.



Re: DMD compilation speed

2015-03-31 Thread Martin Nowak via Digitalmars-d
On 03/31/2015 08:24 PM, deadalnix wrote:
 I'm going to propose again the same thing as in the past:
  - before CTFE, switch pools.
  - run CTFE in the new pool.
  - deep copy the result from the CTFE pool to the main pool.
  - ditch the CTFE pool.

No, it's trivial enough to implement a full AST interpreter.
The way it's done currently (using AST nodes as CTFE interpreter values)
makes it very hard to use a distinct allocator, because ownership can
move from CTFE to compiler and vice versa.


Re: DMD compilation speed

2015-03-31 Thread John Colvin via Digitalmars-d

On Tuesday, 31 March 2015 at 18:24:49 UTC, deadalnix wrote:

On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:

On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

Has anyone looked at how msvc, for example, compiles really 
big files?
I've never seen it go over 200 MB. And it is written in C++, 
so no GC.

And it compiles very quickly.


and it has no CTFE, so...

CTFE is a big black hole that eats memory like crazy.


I'm going to propose again the same thing as in the past:
 - before CTFE, switch pools.
 - run CTFE in the new pool.
 - deep copy the result from the CTFE pool to the main pool.
 - ditch the CTFE pool.


Wait, you mean DMD doesn't already do something like that? Yikes. 
I had always assumed (without looking) that ctfe used some 
separate heap that was chucked after each call.


Re: DMD compilation speed

2015-03-31 Thread deadalnix via Digitalmars-d

On Tuesday, 31 March 2015 at 21:53:29 UTC, Martin Nowak wrote:

On 03/31/2015 08:24 PM, deadalnix wrote:

I'm going to propose again the same thing as in the past:
 - before CTFE, switch pools.
 - run CTFE in the new pool.
 - deep copy the result from the CTFE pool to the main pool.
 - ditch the CTFE pool.


No, it's trivial enough to implement a full AST interpreter.
The way it's done currently (using AST nodes as CTFE 
interpreter values) makes it very hard to use a distinct 
allocator, because ownership can move from CTFE to compiler and 
vice versa.


This is why I introduced a deep copy step in there.


Re: DMD compilation speed

2015-03-31 Thread lobo via Digitalmars-d

On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:

On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:

On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
wrote:
It seems like every DMD release makes compilation slower. 
This time I see 10.8s vs 7.8s on my little project. I know 
this is generally the least of concerns, and D1's 
lightning-fast times are long gone, but since Walter often 
claims D's superior compilation speeds, maybe some profiling 
is in order?


I'm finding memory usage the biggest problem for me. 3s speed 
increase is not nice but an increase of 500MB RAM usage with 
DMD 2.067 over 2.066 means I can no longer build one of my 
projects.


bye,
lobo


I should add that I am on a 32-bit machine with 4GB RAM. I 
just ran some tests measuring RAM usage:


DMD 2.067 ~4.2GB (fails here so not sure of the full amount 
required)

DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway but this trend of 
more RAM usage seems to also be occurring with each DMD 
release.


bye,
lobo


As far as memory is concerned, how hard would it be to simply 
have DMD use a swap file? This would fix the out-of-memory 
issues and provide some safety (at least you can get your 
project to compile). Seems like it would be a relatively 
simple thing to add?


It's so incredibly slow and unproductive that it's not really an 
option. My primary reason for using D is that I can be as 
productive as I am in Python but retain the raw native power of 
C++.


Anyway, it sounds like the D devs have a few good ideas on how 
to resolve this.


bye,
lobo


Re: DMD compilation speed

2015-03-30 Thread Martin Nowak via Digitalmars-d
On 03/30/2015 01:14 AM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This time I
 see 10.8s vs 7.8s on my little project. I know this is generally the
 least of concerns, and D1's lightning-fast times are long gone, but
 since Walter often claims D's superior compilation speeds, maybe some
 profiling is in order?

A 25% slowdown is severe -- can you share the project and perhaps file a
bug report?


Re: DMD compilation speed

2015-03-30 Thread Mathias Lang via Digitalmars-d
Is it only DMD compile time, or DMD + ld? ld can be very slow sometimes.

2015-03-30 1:14 GMT+02:00 Martin Krejcirik via Digitalmars-d 
digitalmars-d@puremagic.com:

 It seems like every DMD release makes compilation slower. This time I see
 10.8s vs 7.8s on my little project. I know this is generally the least of
 concerns, and D1's lightning-fast times are long gone, but since Walter
 often claims D's superior compilation speeds, maybe some profiling is in
 order?



Re: DMD compilation speed

2015-03-30 Thread Jacob Carlborg via Digitalmars-d

On 2015-03-30 18:09, Martin Krejcirik wrote:

Here is one example:

Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661
https://github.com/jacob-carlborg/orange.git
make

2.065.0
real    0m9.028s
user    0m7.972s
sys     0m0.940s

2.066.1
real    0m10.796s
user    0m9.629s
sys     0m1.056s

2.067.0
real    0m13.543s
user    0m12.097s
sys     0m1.348s


These are the timings for compiling the unit tests without linking. It 
passes all the files to DMD in one command. The make file invokes DMD 
once per file.


1.076
real    0m0.212s
user    0m0.187s
sys     0m0.022s

2.065.0
real    0m0.426s
user    0m0.357s
sys     0m0.065s

2.066.1
real    0m0.470s
user    0m0.397s
sys     0m0.064s

2.067.0
real    0m0.510s
user    0m0.435s
sys     0m0.074s

It might not be fair to compare with D1 since it's not exactly the same 
code.


--
/Jacob Carlborg


Re: DMD compilation speed

2015-03-30 Thread lobo via Digitalmars-d

On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally the least of concerns, and D1's lightning-fast times 
are long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?


I'm finding memory usage the biggest problem for me. 3s speed 
increase is not nice but an increase of 500MB RAM usage with 
DMD 2.067 over 2.066 means I can no longer build one of my 
projects.


bye,
lobo


I should add that I am on a 32-bit machine with 4GB RAM. I just 
ran some tests measuring RAM usage:


DMD 2.067 ~4.2GB (fails here so not sure of the full amount 
required)

DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway but this trend of more 
RAM usage seems to also be occurring with each DMD release.


bye,
lobo


Re: DMD compilation speed

2015-03-30 Thread Mathias Lang via Digitalmars-d
2015-03-31 0:53 GMT+02:00 H. S. Teoh via Digitalmars-d 
digitalmars-d@puremagic.com:


 Yeah, dmd memory consumption is way off the charts, because under the
 pretext of compile speed it never frees allocated memory. Unfortunately,
 the assumption that not managing memory == faster quickly becomes untrue
 once dmd runs out of RAM and the OS starts thrashing. Compile times
 quickly skyrocket exponentially as everything gets stuck on I/O.

 This is one of the big reasons I can't use D on my work PC, because it's
 an older machine with limited RAM, and when DMD is running the whole box
 slows down to an unusable crawl.

 This is not the first time this issue was brought up, but it seems
 nobody in the compiler team cares enough to do anything about it. :-(


 T

 --
 Lottery: tax on the stupid. -- Slashdotter


I can relate. DMD compilation speed was nothing but a myth to me until I
migrated from 4GB to 8GB. And every time I compiled something, my
computer froze for a few seconds (or a few minutes, depending on the
project).


Re: DMD compilation speed

2015-03-30 Thread H. S. Teoh via Digitalmars-d
On Mon, Mar 30, 2015 at 10:39:50PM +, lobo via Digitalmars-d wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This time I
 see 10.8s vs 7.8s on my little project. I know this is generally
 least of concern, and D1's lightning-fast times are long gone, but
 since Walter often claims D's superior compilation speeds, maybe some
 profiling is in order?
 
 I'm finding memory usage the biggest problem for me. 3s speed increase
 is not nice but an increase of 500MB RAM usage with DMD 2.067 over
 2.066 means I can no longer build one of my projects.
[...]

Yeah, dmd memory consumption is way off the charts, because under the
pretext of compile speed it never frees allocated memory. Unfortunately,
the assumption that not managing memory == faster quickly becomes untrue
once dmd runs out of RAM and the OS starts thrashing. Compile times
quickly skyrocket exponentially as everything gets stuck on I/O.

This is one of the big reasons I can't use D on my work PC, because it's
an older machine with limited RAM, and when DMD is running the whole box
slows down to an unusable crawl.

This is not the first time this issue was brought up, but it seems
nobody in the compiler team cares enough to do anything about it. :-(


T

-- 
Lottery: tax on the stupid. -- Slashdotter


Re: DMD compilation speed

2015-03-30 Thread Andrei Alexandrescu via Digitalmars-d

On 3/30/15 3:47 PM, lobo wrote:

On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:

On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:

It seems like every DMD release makes compilation slower. This time I
see 10.8s vs 7.8s on my little project. I know this is generally
least of concern, and D1's lightning-fast times are long gone, but
since Walter often claims D's superior compilation speeds, maybe some
profiling is in order?


I'm finding memory usage the biggest problem for me. 3s speed increase
is not nice but an increase of 500MB RAM usage with DMD 2.067 over
2.066 means I can no longer build one of my projects.

bye,
lobo


I should add that I am on a 32-bit machine with 4GB RAM. I just ran some
tests measuring RAM usage:

DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway but this trend of more RAM
usage seems to also be occurring with each DMD release.

bye,
lobo


Part of our acceptance tests should be peak memory, object file size, 
executable file size, and run time for building a few test programs 
(starting with hello, world). Any change in these must be 
investigated, justified, and documented. -- Andrei
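The acceptance tests Andrei describes are straightforward to prototype. A minimal sketch (illustrative Python, not part of the actual DMD build or autotester; the compiler name and test file here are hypothetical) could record wall time, peak RSS of the child compiler process, and object file size per release:

```python
import os
import resource
import shutil
import subprocess
import time

def measure(cmd):
    """Run cmd and return (wall_seconds, peak_child_rss).

    ru_maxrss is in KiB on Linux (bytes on macOS), and it is the peak
    across *all* children measured so far in this process, so run each
    compiler invocation from a fresh process for isolated numbers.
    """
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    wall = time.monotonic() - start
    peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return wall, peak

if __name__ == "__main__":
    # Hypothetical usage: compile a hello-world and report the metrics
    # that a regression gate would compare against the previous release.
    if shutil.which("dmd") and os.path.exists("hello.d"):
        wall, peak = measure(["dmd", "-c", "hello.d"])
        size = os.path.getsize("hello.o")
        print(f"dmd: {wall:.2f}s, peak RSS {peak} KiB, object {size} bytes")
```

A real gate would store these numbers per commit and flag any change beyond a tolerance, which is essentially what the digger trend graphs linked earlier in the thread do.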




Re: DMD compilation speed

2015-03-30 Thread lobo via Digitalmars-d

On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally least of concern, and D1's lightning-fast times are 
long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?


I'm finding memory usage the biggest problem for me. 3s speed 
increase is not nice but an increase of 500MB RAM usage with DMD 
2.067 over 2.066 means I can no longer build one of my projects.


bye,
lobo





Re: DMD compilation speed

2015-03-30 Thread w0rp via Digitalmars-d

On Monday, 30 March 2015 at 22:55:50 UTC, H. S. Teoh wrote:
On Mon, Mar 30, 2015 at 10:39:50PM +, lobo via 
Digitalmars-d wrote:
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally least of concern, and D1's lightning-fast times are 
long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?

I'm finding memory usage the biggest problem for me. 3s speed 
increase is not nice but an increase of 500MB RAM usage with 
DMD 2.067 over 2.066 means I can no longer build one of my 
projects.

[...]

Yeah, dmd memory consumption is way off the charts, because 
under the pretext of compile speed it never frees allocated 
memory. Unfortunately, the assumption that not managing memory 
== faster quickly becomes untrue once dmd runs out of RAM and 
the OS starts thrashing. Compile times quickly skyrocket 
exponentially as everything gets stuck on I/O.

This is one of the big reasons I can't use D on my work PC, 
because it's an older machine with limited RAM, and when DMD is 
running the whole box slows down to an unusable crawl.

This is not the first time this issue was brought up, but it 
seems nobody in the compiler team cares enough to do anything 
about it. :-(



T


I sometimes think DMD's memory should be... garbage collected. I 
used the forbidden phrase!


Seriously though, allocating a bunch of memory until you hit some 
maximum threshold, possibly configured, and freeing unreferenced 
memory at that point, pausing compilation while that happens? 
This is GC. I wonder if someone enterprising enough would be 
willing to try it out with DDMD by swapping malloc calls with 
calls to D's GC or something.
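The policy described above, bump-allocate freely and only sweep once a configurable ceiling is crossed, can be sketched in a few lines. This is a conceptual model in Python, not DMD's actual allocator; the class, the threshold default, and the block-id scheme are all invented for illustration:

```python
class ThresholdHeap:
    """Toy allocator: no per-object frees; a sweep of unreferenced
    blocks runs only when allocated bytes would cross a threshold."""

    def __init__(self, threshold=100 * 1024 * 1024):
        self.threshold = threshold
        self.allocated = 0        # bytes currently held
        self.blocks = {}          # block id -> [size, live flag]
        self.next_id = 0

    def alloc(self, size):
        if self.allocated + size > self.threshold:
            self.collect()        # the pause mentioned above
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = [size, True]
        self.allocated += size
        return bid

    def release(self, bid):
        # Caller drops its reference, but memory is NOT returned yet --
        # this models the "never free" fast path dmd uses today.
        self.blocks[bid][1] = False

    def collect(self):
        # Sweep everything unreferenced and reclaim its bytes.
        dead = [b for b, (size, live) in self.blocks.items() if not live]
        for b in dead:
            self.allocated -= self.blocks[b][0]
            del self.blocks[b]
```

Until the threshold is hit this behaves exactly like dmd's no-free allocator; after it, memory use plateaus at roughly the threshold instead of growing without bound.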


Re: DMD compilation speed

2015-03-30 Thread weaselcat via Digitalmars-d

On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:
Seriously though, allocating a bunch of memory until you hit 
some maximum threshold, possibly configured, and freeing 
unreferenced memory at that point, pausing compilation while 
that happens? This is GC. I wonder if someone enterprising 
enough would be willing to try it out with DDMD by swapping 
malloc calls with calls to D's GC or something.


Has anyone tried using Boehm with DMD? I'm pretty sure it has a 
way of being LD_PRELOADed to override malloc, IIRC.


Re: DMD compilation speed

2015-03-30 Thread deadalnix via Digitalmars-d

On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:
I sometimes think DMD's memory should be... garbage collected. 
I used the forbidden phrase!




Yes, set an initial heap size of 100 MB or something and the GC 
won't kick in for scripts.


Also, free after CTFE!


Re: DMD compilation speed

2015-03-30 Thread Andrei Alexandrescu via Digitalmars-d

On 3/30/15 4:28 PM, w0rp wrote:

I sometimes think DMD's memory should be... garbage collected. I used
the forbidden phrase!


Compiler workloads are a good candidate for GC. -- Andrei



Re: DMD compilation speed

2015-03-30 Thread Vladimir Panteleev via Digitalmars-d
On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu 
wrote:
Part of our acceptance tests should be peak memory, object file 
size, executable file size, and run time for building a few 
test programs (starting with hello, world). Any change in 
these must be investigated, justified, and documented. -- Andrei


I have filed this issue today:

https://issues.dlang.org/show_bug.cgi?id=14381


Re: DMD compilation speed

2015-03-30 Thread Daniel Murphy via Digitalmars-d

deadalnix  wrote in message news:uwajsjgcjtzfeqtqo...@forum.dlang.org...


On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
 I've used D's GC with DDMD.  It works*, but you're trading better memory 
 usage for worse allocation speed.  It's quite possible we could add a 
 switch to ddmd to enable the GC.



That is not accurate. For small programs, yes. For anything non-trivial, 
the amount of memory in the working set becomes so big that I doubt 
there is any advantage to doing so.


I don't see how it's inaccurate.  Many projects fit into the range where 
they do not exhaust physical memory, and the slower allocation speed can 
really hurt.


It's worth noting that 'small' doesn't mean a low number of lines of code, but 
a low number of instantiated templates and CTFE calls. 



Re: DMD compilation speed

2015-03-30 Thread ketmar via Digitalmars-d
On Tue, 31 Mar 2015 05:21:13 +, deadalnix wrote:

 On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
 I've used D's GC with DDMD.  It works*, but you're trading better
 memory usage for worse allocation speed.  It's quite possible we could
 add a switch to ddmd to enable the GC.


 That is not accurate. For small programs, yes. For anything non-trivial,
 the amount of memory in the working set becomes so big that I doubt
 there is any advantage to doing so.

i think that DDMD can start with GC turned off, and automatically turn it 
on when RAM consumption goes over 1GB, for example. this way small-sized 
(and even middle-sized) projects without heavy CTFE will still enjoy the 
"no-free is fast" strategy, and big projects will not eat the whole box's 
RAM.



Re: DMD compilation speed

2015-03-30 Thread deadalnix via Digitalmars-d

On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
I've used D's GC with DDMD.  It works*, but you're trading 
better memory usage for worse allocation speed.  It's quite 
possible we could add a switch to ddmd to enable the GC.




That is not accurate. For small programs, yes. For anything 
non-trivial, the amount of memory in the working set becomes so big 
that I doubt there is any advantage to doing so.


Re: DMD compilation speed

2015-03-30 Thread deadalnix via Digitalmars-d
On Tuesday, 31 March 2015 at 00:54:08 UTC, Andrei Alexandrescu 
wrote:

On 3/30/15 4:28 PM, w0rp wrote:
I sometimes think DMD's memory should be... garbage collected. 
I used

the forbidden phrase!


Compiler workloads are a good candidate for GC. -- Andrei


Yes, compilers tend to perform significantly better with a GC than 
with other memory management strategies. Ironically, I think that 
has weighted language design a bit too much in favor of GC in the 
general case.


Re: DMD compilation speed

2015-03-30 Thread Daniel Murphy via Digitalmars-d

w0rp  wrote in message news:leajtjgremulowqox...@forum.dlang.org...

I sometimes think DMD's memory should be... garbage collected. I used the 
forbidden phrase!


Seriously though, allocating a bunch of memory until you hit some maximum 
threshold, possibly configured, and freeing unreferenced memory at that 
point, pausing compilation while that happens? This is GC. I wonder if 
someone enterprising enough would be willing to try it out with DDMD by 
swapping malloc calls with calls to D's GC or something.


I've used D's GC with DDMD.  It works*, but you're trading better memory 
usage for worse allocation speed.  It's quite possible we could add a switch 
to ddmd to enable the GC.


* Well actually it currently segfaults, but not because of anything 
fundamentally wrong with the approach.


After switching to DDMD we will have a HUGE number of options readily 
available for reducing memory usage, such as using allocation-free range 
code and enabling the GC. 



Re: DMD compilation speed

2015-03-30 Thread Martin Krejcirik via Digitalmars-d

Here is one example:

Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661
https://github.com/jacob-carlborg/orange.git
make

2.065.0
real    0m9.028s
user    0m7.972s
sys     0m0.940s

2.066.1
real    0m10.796s
user    0m9.629s
sys     0m1.056s

2.067.0
real    0m13.543s
user    0m12.097s
sys     0m1.348s


Re: DMD compilation speed

2015-03-29 Thread weaselcat via Digitalmars-d

On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:

On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally least of concern, and D1's lightning-fast times are 
long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?


Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant 
vigilance.


would having benchmarks help keep this under control/make 
regressions easier to find?


Re: DMD compilation speed

2015-03-29 Thread Walter Bright via Digitalmars-d

On 3/29/2015 4:14 PM, Martin Krejcirik wrote:

It seems like every DMD release makes compilation slower. This time I see 10.8s
vs 7.8s on my little project. I know this is generally least of concern, and
D1's lightning-fast times are long gone, but since Walter often claims D's
superior compilation speeds, maybe some profiling is in order?


Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant vigilance.


Re: DMD compilation speed

2015-03-29 Thread Walter Bright via Digitalmars-d

On 3/29/2015 5:14 PM, weaselcat wrote:

would having benchmarks help keep this under control/make regressions easier to
find?


benchmarks would help.