Re: Increasing D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/30/2013 11:40 PM, dennis luehring wrote:

currently the VC-built dmd is about 2 times faster at compiling


That's an old number now. Someone want to try it with the current HEAD?



Re: Increasing D Compiler Speed by Over 75%

2013-07-31 Thread dennis luehring

On 31.07.2013 09:00, Walter Bright wrote:

On 7/30/2013 11:40 PM, dennis luehring wrote:

currently the VC-built dmd is about 2 times faster at compiling


That's an old number now. Someone want to try it with the current HEAD?



tried to, but failed

downloaded dmd-master.zip (from github)
downloaded dmd.2.063.2.zip

built dmd-master with VS2010
copied the produced dmd_msc.exe to dmd.2.063.2\dmd2\windows\bin

dmd.2.063.2\dmd2\src\phobos> ..\..\windows\bin\dmd.exe std\algorithm 
-unittest -main


gives

Error: cannot read file ûmain.d (what is this û in front of main.d?)

dmd.2.063.2\dmd2\src\phobos> ..\..\windows\bin\dmd_msc.exe std\algorithm 
-unittest -main


gives

std\datetime.d(31979): Error: pure function 
'std.datetime.enforceValid!hours.enforceValid' cannot call impure 
function 'core.time.TimeException.this'
std\datetime.d(13556): Error: template instance 
std.datetime.enforceValid!hours error instantiating
std\datetime.d(31984): Error: pure function 
'std.datetime.enforceValid!minutes.enforceValid' cannot call impure 
function 'core.time.TimeException.this'
std\datetime.d(13557): Error: template instance 
std.datetime.enforceValid!minutes error instantiating
std\datetime.d(31989): Error: pure function 
'std.datetime.enforceValid!seconds.enforceValid' cannot call impure 
function 'core.time.TimeException.this'
std\datetime.d(13558): Error: template instance 
std.datetime.enforceValid!seconds error instantiating

std\datetime.d(33284):called from here: (TimeOfDay __ctmp1990;
 , __ctmp1990).this(0, 0, 0)
std\datetime.d(33293): Error: CTFE failed because of previous errors in this
std\datetime.d(31974): Error: pure function 
'std.datetime.enforceValid!months.enforceValid' cannot call impure 
function 'core.time.TimeException.this'
std\datetime.d(8994): Error: template instance 
std.datetime.enforceValid!months error instantiating
std\datetime.d(32012): Error: pure function 
'std.datetime.enforceValid!days.enforceValid' cannot call impure 
function 'core.time.TimeException.this'
std\datetime.d(8995): Error: template instance 
std.datetime.enforceValid!days error instantiating

std\datetime.d(33389):called from here: (Date __ctmp1999;
 , __ctmp1999).this(-3760, 9, 7)
std\datetime.d(33458): Error: CTFE failed because of previous errors in this
Error: undefined identifier '_xopCmp'

and a compiler crash


my former benchmarks were done the same way and worked without any 
problems - this master seems to have problems







Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Dmitry Olshansky

30-Jul-2013 22:22, Walter Bright writes:

On 7/30/2013 11:02 AM, Dmitry Olshansky wrote:




What bothers me is that while I've been hacking at this I couldn't
shake off the
feeling that AA code assumes NO FULL HASH COLLISIONS at all?


I don't know what you mean, as it has a collision resolution system. See
embedded code below.


Yes, but it does so using the full _hash_ alone.
Basically Key is a size_t; if we store strings in this AA and two of them 
hash to exactly the same size_t key, then you'll never find one of them.




Value _aaGetRvalue(AA* aa, Key key)
{
    //printf("_aaGetRvalue(key = %p)\n", key);
    if (aa)
    {
        size_t i;
        size_t len = aa->b_length;
        if (len == 4)
            i = (size_t)key & 3;
        else if (len == 31)
            i = (size_t)key % 31;
        else
            i = (size_t)key % len;
        aaA* e = aa->b[i];


***   ^^^ obviously key is only a hash value ***


        while (e)
        {
            if (key == e->key)
                return e->value;
            e = e->next;


***   ^^^ collision resolution code ^^^   ***


Here key is 32 bits. Surely 2 strings can hash to the exact same 32-bit 
value. This resolves only slot collision. It doesn't resolve full hash 
collision.


        }
    }
    return NULL;    // not found
}






--
Dmitry Olshansky


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/31/2013 1:49 AM, Dmitry Olshansky wrote:

Here key is 32 bits. Surely 2 strings can hash to the exact same 32 bit value.


No, they cannot. The hash value is a pointer to the string. The strings are 
already inserted into another hash table, so all strings that are the same are 
combined. Therefore, all unique strings hash to unique values.



This resolves only slot collision. It doesn't resolve full hash collision.


If it was broken the compiler wouldn't work at all :-)


Re: Emacs D Mode version 2.0.6 released

2013-07-31 Thread finalpatch
Russel Winder rus...@winder.org.uk writes:

 The title says it all really.

 Version 2.0.6 has been released. Hopefully Arch, MacPorts, Debian,
 Fedora, etc. will look to package this.

 Alternatively for Emacs 24+ folk use packaging, put MELPA in the path
 and get the latest version from GitHub automatically. That's
 2.0.7-SNAPSHOT now :-)

There are a few things not fontified correctly in my Emacs (24.3.50.1
git master of 2013-6-12).  I just checked with the latest github version
and they are still not fixed:

* The first member variable or function name under a protection level
  label (public/protected/private etc.) is not fontified.

* Types that have namespaces (.) or are template instances (!) are not
  fontified.

* auto/immutable variables are not fontified with the correct face. They
  should be in variable face but are displayed in type name face
  instead.

I have a hacked version in my site-lisp directory that fixes most of
these issues for me but because I'm not familiar with the CC-mode
codebase my solutions are very rough and hacky.

-- 
finalpatch


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Dmitry Olshansky

31-Jul-2013 13:17, Walter Bright writes:

On 7/31/2013 1:49 AM, Dmitry Olshansky wrote:

Here key is 32 bits. Surely 2 strings can hash to the exact same 32
bit value.


No, they cannot. The hash value is a pointer to the string. The
strings are already inserted into another hash table,


The StringTable? Then I have to look somewhere else entirely.


so all strings
that are the same are combined. Therefore, all unique strings hash to
unique values.


Now that sets things straight ... if they ain't hashes then it isn't a 
hash table in the general sense :)


At least that means that contrary to my naive guess calcHash has no 
effect whatsoever on the distribution of keys in this AA. The real hash 
function could be rather biased. I've got to dig a bit deeper into the 
code then.





This resolves only slot collision. It doesn't resolve full hash
collision.


If it was broken the compiler wouldn't work at all :-)


I had a feeling that it can't be that bad :)

--
Dmitry Olshansky


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Dmitry Olshansky

31-Jul-2013 19:04, Dmitry Olshansky writes:

31-Jul-2013 13:17, Walter Bright writes:

On 7/31/2013 1:49 AM, Dmitry Olshansky wrote:

[snip]

so all strings
that are the same are combined. Therefore, all unique strings hash to
unique values.


Now that sets things straight ... if they ain't hashes then it isn't a
hash table in the general sense :)

At least that means that contrary to my naive guess calcHash has no
effect whatsoever on the distribution of keys in this AA. The real hash
function could be rather biased.


Ouch... to boot it's always aligned by word size, so
key % sizeof(size_t) == 0
...
rendering the lower 2-3 bits useless; that would make the straight "slice 
the lower bits" approach rather weak :)


 I've got to dig a bit deeper into the

code then.




--
Dmitry Olshansky


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Andrei Alexandrescu

On 7/30/13 2:48 PM, Bill Baxter wrote:

On Tue, Jul 30, 2013 at 12:05 PM, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:

On 7/30/13 11:13 AM, Walter Bright wrote:

On 7/30/2013 2:59 AM, Leandro Lucarella wrote:

I just want to point out that so many people getting this wrong
(and even fighting to convince other people that the wrong
interpretation is right) might be an indication that the
message you wanted to give in that blog is not extremely clear :)


It never occurred to me that anyone would have any difficulty
understanding the notion of speed. After all, we deal with it
every
day when driving.


Yeh sure.  Like "I made the trip to grandmother's house in 0.25
trips/hour!"  That's 25% faster than last week when I only drove at 0.2
trips/hour.
I say that all the time.  ;-)

--bb


One does say "miles per hour" or "kilometers per hour", which is the same 
exact notion.


Andrei


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Bill Baxter
On Wed, Jul 31, 2013 at 10:12 AM, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:

 On 7/30/13 2:48 PM, Bill Baxter wrote:

 On Tue, Jul 30, 2013 at 12:05 PM, Andrei Alexandrescu
 seewebsiteforem...@erdani.org wrote:

 On 7/30/13 11:13 AM, Walter Bright wrote:

 On 7/30/2013 2:59 AM, Leandro Lucarella wrote:

 I just want to point out that so many people getting this wrong
 (and even fighting to convince other people that the wrong
 interpretation is right) might be an indication that the
 message you wanted to give in that blog is not extremely clear :)


 It never occurred to me that anyone would have any difficulty
 understanding the notion of speed. After all, we deal with it
 every
 day when driving.


 Yeh sure.  Like "I made the trip to grandmother's house in 0.25
 trips/hour!"  That's 25% faster than last week when I only drove at 0.2
 trips/hour.
 I say that all the time.  ;-)

 --bb


 One does say "miles per hour" or "kilometers per hour", which is the same
 exact notion.


That's more analogous to something like MIPS than inverse program run time.

--bb


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/31/2013 8:26 AM, Dmitry Olshansky wrote:

Ouch... to boot it's always aligned by word size, so
key % sizeof(size_t) == 0
...
rendering the lower 2-3 bits useless; that would make the straight "slice
the lower bits" approach rather weak :)


Yeah, I realized that, too. Gotta shift it right 3 or 4 bits.



Re: DScanner is ready for use

2013-07-31 Thread Rory McGuire
Any chance of you turning this into a daemon? Something like margo or
gocode?
On 29 Jul 2013 11:05, qznc q...@web.de wrote:

 On Saturday, 27 July 2013 at 22:27:35 UTC, Brian Schott wrote:

 DScanner is a tool for analyzing D source code. It has the following
 features:

 * Prints out a complete AST of a source file in XML format.
 * Syntax checks code and prints warning/error messages
 * Prints a listing of modules imported by a source file
 * Syntax highlights code in HTML format
 * Provides a more meaningful line-of-code count than wc
 * Counts tokens in a source file

 The lexer/parser/AST are located in the std/d directory in the
 repository. These files should prove useful to anyone else working on D
 tooling.

 https://github.com/Hackerpilot/Dscanner

 Aside: the D grammar that I reverse-engineered can be located here:
 https://rawgithub.com/Hackerpilot/DGrammar/master/grammar.html


 Dscanner looks like a good starting point for a code formatting tool (like
 gofmt). However, there seems to be a tradeoff with performance involved.
 For compilation you want a fast lexer and parser. For formatting you need
 to preserve comments, though.

 For example, convert this from source to AST to source without losing the
 comments:

 void /*hello*/ /*world*/ main () { }



Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/31/2013 11:13 AM, Bill Baxter wrote:

That's more analogous to something like MIPS than inverse program run time.


If you increase the speed 100%, then the elapsed time is cut by 50%.

This is a grammar-school concept. It does not require an Ivy League physics 
degree to understand. It is not obfuscated, confusing, or misleading. It doesn't 
rely on some rarely known formal definition of speed. I expect an audience of 
programmers to understand it without needing a sidebar.


We talk about speed of programs all the time, including compiler speed. I 
previously posted google searches you can try to verify it for yourself.


I.e. I'm being trolled here :-)



Re: DScanner is ready for use

2013-07-31 Thread Justin Whear
On Wed, 31 Jul 2013 20:30:17 +0200, Rory McGuire wrote:

 Any chance of you turning this into a daemon? Something like margo or
 gocode?

The author has another project here: https://github.com/Hackerpilot/DCD


Re: DScanner is ready for use

2013-07-31 Thread Brian Schott

On Wednesday, 31 July 2013 at 18:41:17 UTC, Justin Whear wrote:

On Wed, 31 Jul 2013 20:30:17 +0200, Rory McGuire wrote:

Any chance of you turning this into a daemon? Something like
margo or gocode?


The author has another project here: 
https://github.com/Hackerpilot/DCD


I wouldn't bother trying to use that yet. Maybe next week, but 
not now. When I get it working there will be a thread on 
D.announce.


Re: Increasing D Compiler Speed by Over 75%

2013-07-31 Thread Rainer Schuetze



On 31.07.2013 09:00, Walter Bright wrote:

On 7/30/2013 11:40 PM, dennis luehring wrote:

currently the VC-built dmd is about 2 times faster at compiling


That's an old number now. Someone want to try it with the current HEAD?



I have just tried yesterday's dmd to build Visual D (it builds some 
libraries and contains a few short non-compiling tasks in between):


Debug build dmd_dmc: 23 sec, "std new" 43 sec
Debug build dmd_msc: 19 sec, "std new" 20 sec

"std new" is the version without the block allocator.

Release build dmd_dmc: 3 min 30, "std new" 5 min 25
Release build dmd_msc: 1 min 32, "std new" 1 min 40

The release builds use "-release -O -inline" and need a bit more than 1 
GB of memory for two of the libraries (I still had to patch dmd_dmc to be 
large-address-aware).


This shows that removing most of the allocations was a good optimization 
for the dmc runtime, but it has a smaller, though still noticeable, impact 
with a faster heap implementation (the VS runtime usually maps directly to 
the Windows API for non-Debug builds). I suspect the backend and the 
optimizer do not use "new" a lot, but plain malloc calls, so they 
still suffer from the slow runtime.


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Bill Baxter
Are you serious that you can't fathom how it could be more confusing to someone
than talking about differences in run times?
If you say something is faster than something else you want the two numbers
to be something you can relate to.  Like MPH.  Everyone has a clear concept
of what MPH is.  We use it every day.  So to say 25 MPH is 25% faster than
20 MPH is perfectly clear.  But nobody talks about program execution speed
in terms of programs per second.  So I think it's pretty clear why that
would be harder for people to grok than changes in car speeds or run times.

Anyway, congrats on the speed improvements!  When I was using D a lot, the
compile times for heavily templated stuff were definitely starting to get
to me.

--bb


On Wed, Jul 31, 2013 at 11:36 AM, Walter Bright
newshou...@digitalmars.com wrote:

 On 7/31/2013 11:13 AM, Bill Baxter wrote:

 That's more analogous to something like MIPS than inverse program run
 time.


 If you increase the speed 100%, then the elapsed time is cut by 50%.

 This is a grammar school concept. It does not require an ivy league
 physics degree to understand. It is not obfuscated, confusing, or
 misleading. It doesn't rely on some rarely known formal definition of
 speed. I expect an audience of programmers to understand it without needing
 a sidebar.

 We talk about speed of programs all the time, including compiler speed. I
 previously posted google searches you can try to verify it for yourself.

 I.e. I'm being trolled here :-)




Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/31/2013 2:40 PM, Bill Baxter wrote:

Are you serious that you can't fathom how it could be more confusing to someone
than talking about differences in run times?


Yes.

And no, I'm not talking about confusing to someone who lives in an undiscovered 
stone age tribe in the Amazon. I'm talking about computer programmers.




If you say something is faster than something else you want the two numbers to
be something you can relate to.  Like MPH.  Everyone has a clear concept of what
MPH is.  We use it every day.  So to say 25 MPH is 25% faster than 20 MPH is
perfectly clear.  But nobody talks about program execution speed in terms of
programs per second.


Yes, they do, and certainly in lines per second. Google it and see for 
yourself. And as you well understand, from using the same program to compile, 
the number of lines cancels out when comparing speeds.


There is nothing mysterious or confusing about this. Seriously.



So I think it's pretty clear why that would be harder for
people to grok than changes in car speeds or run times.


To be blunt, Baloney!



Anyway, congrats on the speed improvements!  When I was using D a lot, the
compile times for heavily templated stuff were definitely starting to get to me.


Thanks!



Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread John Colvin

On Wednesday, 31 July 2013 at 21:40:45 UTC, Bill Baxter wrote:
Are you serious that you can't fathom how it could be more confusing
to someone than talking about differences in run times?
If you say something is faster than something else you want the 
two numbers
to be something you can relate to.  Like MPH.  Everyone has a 
clear concept
of what MPH is.  We use it every day.  So to say 25 MPH is 25% 
faster than
20 MPH is perfectly clear.  But nobody talks about program 
execution speed
in terms of programs per second.  So I think it's pretty clear 
why that
would be harder for people to grok than changes in car speeds 
or run times.


It's a quite impressively unbalanced education that provides 
understanding of memory allocation strategies, hashing, and the 
performance pitfalls of integer division, but not something as 
basic as speed.


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

On 7/31/2013 3:58 PM, John Colvin wrote:

It's a quite impressively unbalanced education that provides understanding of
memory allocation strategies, hashing, and the performance pitfalls of integer
division, but not something as basic as speed.


Have you ever seen those cards that some electrical engineers carry around, 
with the following equations on them:


V = I * R
R = V / I
I = V / R

?

I found it: 
https://docs.google.com/drawings/d/1StlhTYjiUEljnfVtFjP1BXLbixO30DIkbw-DpaYJoA0/edit?hl=en&pli=1


Unbelievable. The author of it writes:

"I'm going to explain to you how to use this cheat sheet in case you've never 
seen this before."


http://blog.ricardoarturocabral.com/2010/07/electronic-electrical-cheat-sheets.html

Makes you want to cry.


Re: Increasing D Compiler Speed by Over 75%

2013-07-31 Thread Walter Bright

Thanks for doing this, this is good information.

On 7/31/2013 2:24 PM, Rainer Schuetze wrote:

I have just tried yesterday's dmd to build Visual D (it builds some libraries and
contains a few short non-compiling tasks in between):

Debug build dmd_dmc: 23 sec, "std new" 43 sec
Debug build dmd_msc: 19 sec, "std new" 20 sec


That makes it clear that the dmc malloc() was the dominator, not code gen.


"std new" is the version without the block allocator.

Release build dmd_dmc: 3 min 30, "std new" 5 min 25
Release build dmd_msc: 1 min 32, "std new" 1 min 40

The release builds use "-release -O -inline" and need a bit more than 1 GB
of memory for two of the libraries (I still had to patch dmd_dmc to be
large-address-aware).

This shows that removing most of the allocations was a good optimization for the
dmc runtime, but it has a smaller, though still noticeable, impact with a faster
heap implementation (the VS runtime usually maps directly to the Windows API for
non-Debug builds). I suspect the backend and the optimizer do not use "new" a
lot, but plain malloc calls, so they still suffer from the slow runtime.


Actually, dmc still should give a better showing. All the optimizations I've put 
into dmd also went into dmc, and do result in significantly better code speed. 
For example, the hash modulus optimization has a significant impact, but I 
haven't released that dmc yet.


Optimized builds have an entirely different profile than debug builds, and I 
haven't investigated that.




Re: Increasing D Compiler Speed by Over 75%

2013-07-31 Thread dennis luehring

On 31.07.2013 23:24, Rainer Schuetze wrote:



On 31.07.2013 09:00, Walter Bright wrote:

On 7/30/2013 11:40 PM, dennis luehring wrote:

currently the VC-built dmd is about 2 times faster at compiling


That's an old number now. Someone want to try it with the current HEAD?



I have just tried yesterdays dmd to build Visual D (it builds some
libraries and contains a few short non-compiling tasks in between):


can you also give us timings for

(dmd_dmc|dmd_msc) std\algorithm -unittest -main