Re: DScanner is ready for use

2013-07-29 Thread qznc

On Saturday, 27 July 2013 at 22:27:35 UTC, Brian Schott wrote:
DScanner is a tool for analyzing D source code. It has the 
following features:


* Prints out a complete AST of a source file in XML format
* Syntax checks code and prints warning/error messages
* Prints a listing of modules imported by a source file
* Syntax highlights code in HTML format
* Provides a more meaningful line-of-code count than wc
* Counts tokens in a source file

The lexer/parser/AST are located in the std/d directory in 
the repository. These files should prove useful to anyone else 
working on D tooling.


https://github.com/Hackerpilot/Dscanner

Aside: the D grammar that I reverse-engineered can be located 
here: 
https://rawgithub.com/Hackerpilot/DGrammar/master/grammar.html


Dscanner looks like a good starting point for a code formatting 
tool (like gofmt). However, there seems to be a tradeoff with 
performance involved. For compilation you want a fast lexer and 
parser. For formatting you need to preserve comments, though.


For example, convert this from source to AST to source without 
losing the comments:


void /*hello*/ /*world*/ main () { }
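A formatter would need to carry those comments through to its output rather than drop them during lexing. A toy token-level sketch of the idea (Python; purely illustrative, not how DScanner's lexer actually works):

```python
import re

# Tokenize source, keeping block comments as first-class tokens so a
# formatter can re-emit them instead of silently discarding them.
TOKEN = re.compile(r"/\*.*?\*/|\S+", re.DOTALL)

def tokenize(src):
    return TOKEN.findall(src)

def render(tokens):
    # A real formatter would apply layout rules; here we just re-join.
    return " ".join(tokens)

src = "void /*hello*/ /*world*/ main () { }"
print(render(tokenize(src)))  # comments survive the round trip
```

The point is only that comments must exist somewhere in the token stream (or be attached to AST nodes) for source-to-AST-to-source to be lossless.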


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS

On Thursday, 25 July 2013 at 21:27:47 UTC, Walter Bright wrote:

On 7/25/2013 11:49 AM, Dmitry S wrote:
I am also confused by the numbers. What I see at the end of 
the article is
21.56 seconds, and the latest development version does it in 
12.19, which is

really a 43% improvement. (Which is really great too.)


21.56/12.19 is 1.77, i.e. a 75% improvement in speed.

A reduction in time would be the reciprocal of that.



Actually, it is a 43% speed improvement. 0.43*21.56 = 9.27s


So if it is 43% faster, it means it's reduced the time by 9.27s 
or, 21.56 - 9.27 = 12.28 seconds total.


Now, if we started at 12.28 seconds and it jumped to 21.56 then 
it would be 21.56/12.19 = 1.77 == 77% longer.


21.56/12.19 != 12.19/21.56.

The order matters.

To make it obvious. Suppose the running time is 20 seconds. You optimize it, it is 100% **faster** (= 1.0*20 = 20 seconds), then it takes 0 seconds (20 - 20).


Suppose the running time is 20 seconds, you screw it up, it takes 
40 seconds, now it is 100% slower(1.0*20 = 20, and 20 + 20 = 40).


In both cases there is a difference of 20 seconds BUT they mean 
very different things.


A 20% increase is not calculated the same as a 20% decrease.

That is,

(A - B)/A != (A - B)/B.

The LHS is relative to A and the RHS is relative to B.

So

(21.56 - 12.19)/21.56 = 9.37/21.56 = 43%

or

1 - 12.19/21.56 = 1 - 0.57 = 0.43
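Both percentages come from the same two timings; a quick Python check makes the asymmetry concrete:

```python
old, new = 21.56, 12.19

# Reduction in time, relative to the OLD time:
time_decrease = (old - new) / old      # 9.37 / 21.56
print(f"{time_decrease:.1%} less time")

# Increase in speed, relative to the OLD speed (speed ~ 1/time):
speed_increase = old / new - 1
print(f"{speed_increase:.1%} faster")
```

The same two measurements yield roughly 43.5% when expressed as a time reduction and roughly 76.9% when expressed as a speed increase.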

To make the numbers simple,

20 second original, 10 second new.

How much faster is the new version? It is 10 seconds faster, or in percent, 1 - 10/20 = 0.5 = 50% (BUT if we started with 10 seconds then it would be an increase of 100%)


The numbers are very close to the original, but not very close to 
75%.



Basically you are calculating the percentage as if you slowed 
down the program... but it is not the same.


Another example will suffice:

Suppose you have 1000$. You lose 10% of it, or 100$. You now have 
900$. You gain 10% of it, or 90$. You now have 990$! (Where did 
the 10$ go?)
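The dollar arithmetic can be checked directly (integer Python, to avoid float noise):

```python
money = 1000
money = money * 90 // 100    # lose 10% of 1000 -> 900
money = money * 110 // 100   # gain 10% of the NEW, smaller base -> 990
print(money)                 # 990, not 1000: the two 10% use different bases
```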


This is why the stock market and economy are much worse off than in 2007 even though the numbers look the same. Easier: suppose you have $1000, lose 99%, then gain 99%; you have only (1000*0.01)*1.99 = 10*1.99 = 19.9... nowhere near your original amount. (Even though the DJIA isn't a percentage, this issue does creep into the calculation due to inflation and other factors.)





Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS
On Monday, 29 July 2013 at 11:46:05 UTC, Leandro Motta Barros 
wrote:

This may be off topic, but here I go anyway...

Back in the school days, I joked that the Halting Problem is 
actually

easy to solve with a Turing Machine. Since a Turing Machine is a
theoretical device that exists only in our imagination, we can 
just
suppose that it is infinitely fast. So, we simply have to load 
our

program in the machine and run it. If the machine doesn't stop
immediately, it means that it will run forever.

And what does this have to do with DMD?

Well, I kinda have the same feeling when using it. For my 
~10kloc
project, I still haven't felt a real need to use a real build 
system.

I just dmd *.d. If any measurable time passes without any error
message appearing in the console, I know that my code compiled
successfully (and it is the linker that is running now).

BTW, 10kloc is not such a large codebase, but this is with DMD 2.063
anyhow, before those improvements ;-)

LMB





The halting problem isn't about something taking an infinite 
amount of time but about the decidability of it.


For example, we can write a program that runs forever, but if it is known to do so and will never halt, then there is nothing special about it.


For example, for(;;); is an infinite loop and will never halt (except when you turn the power off ;) but its halting state is completely known.


Halting problems are much more complex.

Even something like

for(;;)
{
   if (random() == 3) break;
}

is decidable(it will halt after some time).

I would write a program that is undecidable but the margin is too 
short! ;)
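The random-loop example above can be simulated. With any fixed nonzero hit probability per iteration, every run observed in practice is finite, yet there is no upper bound on how long a run may take. A Python sketch (the function name and probability are made up for illustration):

```python
import random

def iterations_until_halt(p=0.25, seed=None):
    """Simulate `for(;;) if (random() == hit) break;` where each
    iteration 'hits' independently with probability p."""
    rng = random.Random(seed)
    n = 1
    while rng.random() >= p:   # geometric distribution: halts w.p. 1
        n += 1
    return n

# Every observed run is finite, but the maximum over runs is unbounded.
print([iterations_until_halt(seed=s) for s in range(5)])
```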


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread John Colvin

On Monday, 29 July 2013 at 10:15:31 UTC, JS wrote:

On Thursday, 25 July 2013 at 21:27:47 UTC, Walter Bright wrote:

On 7/25/2013 11:49 AM, Dmitry S wrote:
I am also confused by the numbers. What I see at the end of 
the article is
21.56 seconds, and the latest development version does it in 
12.19, which is

really a 43% improvement. (Which is really great too.)


21.56/12.19 is 1.77, i.e. a 75% improvement in speed.

A reduction in time would be the reciprocal of that.



Actually, it is a 43% speed improvement. 0.43*21.56 = 9.27s


So if it is 43% faster, it means it's reduced the time by 9.27s 
or, 21.56 - 9.27 = 12.28 seconds total.


Now, if we started at 12.28 seconds and it jumped to 21.56 then 
it would be 21.56/12.19 = 1.77 == 77% longer.


21.56/12.19 != 12.19/21.56.

The order matters.

To make it obvious. Suppose the running time is 20 seconds. You 
optimize it, it is 100% **faster** (= 1.0*20 = 20 seconds), 
then it takes 0 seconds (20 - 20).


That is how you fail a physics class.

s = d/t  =>  t = d/s

100% increase in s = 2*s
let s_new = 2*s

t_new = d / s_new

let d = 1 program  (s is measured in programs / unit time)

therefore: t_new = 1 / s_new  =  1 / (2 * s)  =  0.5 * 1/s  =  0.5 * t


Seriously... Walter wouldn't have got his mechanical engineering 
degree if he didn't know how to calculate a speed properly.


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread John Colvin

On Monday, 29 July 2013 at 12:35:59 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:17:22 UTC, JS wrote:

Even something like

for(;;)
{
  if (random() == 3) break;
}

is decidable(it will halt after some time).


That program has a finite average runtime, but its maximum 
runtime is unbounded. You can't actually say it *will* halt. 
For any given input (in this case 0 inputs) one cannot tell 
whether the program will eventually halt, therefore it is 
undecidable.


I have formal background in CS so I might have got that totally 
wrong.


sorry, that should be I have NO formal background in CS


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS

On Monday, 29 July 2013 at 12:36:36 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:35:59 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:17:22 UTC, JS wrote:

Even something like

for(;;)
{
 if (random() == 3) break;
}

is decidable(it will halt after some time).


That program has a finite average runtime, but its maximum 
runtime is unbounded. You can't actually say it *will* halt. 
For any given input (in this case 0 inputs) one cannot tell 
whether the program will eventually halt, therefore it is 
undecidable.


I have formal background in CS so I might have got that 
totally wrong.


sorry, that should be I have NO formal background in CS


No, again, it isn't about infinite run time.

decidability != infinite run time.

to simplify, let's look at the program,

for(;;) if (random() == 0) break;

where random() returns a random number, not necessarily uniform, 
between 0 and 1.


Same problem just easier to see.

Since there must be a chance for 0 to occur, the program must 
halt, regardless of how long it takes, even if it takes an 
infinite amount of time.


That is, the run time of the program may approach infinity BUT it 
will halt at some point because by the definition of random, 0 
must occur... else it's not random.


So, by the fact that random must cover the entire range, even if it takes an infinitely long time (so to speak), we know that the program must halt. We don't care how long it will take, just that we can decide that it will.


The only way you could be right is if random wasn't random and 0 
was never returned... in that case the program would not halt... 
BUT then we could decide that it never would halt...


In both cases, we can decide the outcome... if random is known to 
produce 0 then it will halt, if it can't... then it won't.


But random must produce a 0 or not a 0 in an infinite amount of 
time. (either 0 is in the range of random or not).


That is, the halting state of the program is not random even 
though it looks like it. (again, it's not how long it takes but 
if we can decide the outcome... which, in this case, rests on the 
decidability of random)


Another way to see this, flipping a fair coin has 0 probability 
of producing an infinite series of tails.


Why?

After N flips, the probability of having flipped N tails in a row is (1/2)^N, which goes to 0.
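Numerically, that probability shrinks geometrically (quick Python check):

```python
# Probability that every one of n fair coin flips comes up tails.
for n in (10, 20, 50):
    print(n, 0.5 ** n)
```

Already at n = 50 the probability is on the order of 1e-16.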


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread John Colvin

On Monday, 29 July 2013 at 13:05:10 UTC, JS wrote:

On Monday, 29 July 2013 at 12:36:36 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:35:59 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:17:22 UTC, JS wrote:

Even something like

for(;;)
{
if (random() == 3) break;
}

is decidable(it will halt after some time).


That program has a finite average runtime, but its maximum 
runtime is unbounded. You can't actually say it *will* halt. 
For any given input (in this case 0 inputs) one cannot tell 
whether the program will eventually halt, therefore it is 
undecidable.


I have formal background in CS so I might have got that 
totally wrong.


sorry, that should be I have NO formal background in CS


No, again, it isn't about infinite run time.

decidability != infinite run time.

to simplify, let's look at the program,

for(;;) if (random() == 0) break;

where random() returns a random number, not necessarily 
uniform, between 0 and 1.


Same problem just easier to see.

Since there must be a chance for 0 to occur, the program must 
halt, regardless of how long it takes, even if it takes an 
infinite amount of time.


That is, the run time of the program may approach infinity BUT 
it will halt at some point because by the definition of random, 
0 must occur... else it's not random.


So, by the fact that random, must cover the entire range, even 
if it takes it an infinitely long time(so to speak), we know 
that the program must halt. We don't care how long it will take 
but just that we can decide that it will.


The only way you could be right is if random wasn't random and 
0 was never returned... in that case the program would not 
halt... BUT then we could decide that it never would halt...


In both cases, we can decide the outcome... if random is known 
to produce 0 then it will halt, if it can't... then it won't.


But random must produce a 0 or not a 0 in an infinite amount of 
time. (either 0 is in the range of random or not).


That is, the halting state of the program is not random even 
though it looks like it. (again, it's not how long it takes but 
if we can decide the outcome... which, in this case, rests on 
the decidability of random)


Another way to see this, flipping a fair coin has 0 probability 
of producing an infinite series of tails.


Why?

After N flips, the probability of having flipped N tails in a row 
is (1/2)^N, which goes to 0.


Ok, I think I get what you mean now. The 2 states of interest for 
the halting problem are, for a given input:


1) program *can* stop
2) program *will not* stop

is that correct?


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS

On Monday, 29 July 2013 at 14:39:02 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 13:05:10 UTC, JS wrote:

On Monday, 29 July 2013 at 12:36:36 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:35:59 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 12:17:22 UTC, JS wrote:

Even something like

for(;;)
{
if (random() == 3) break;
}

is decidable(it will halt after some time).


That program has a finite average runtime, but its maximum 
runtime is unbounded. You can't actually say it *will* halt. 
For any given input (in this case 0 inputs) one cannot tell 
whether the program will eventually halt, therefore it is 
undecidable.


I have formal background in CS so I might have got that 
totally wrong.


sorry, that should be I have NO formal background in CS


No, again, it isn't about infinite run time.

decidability != infinite run time.

to simplify, let's look at the program,

for(;;) if (random() == 0) break;

where random() returns a random number, not necessarily 
uniform, between 0 and 1.


Same problem just easier to see.

Since there must be a chance for 0 to occur, the program must 
halt, regardless of how long it takes, even if it takes an 
infinite amount of time.


That is, the run time of the program may approach infinity BUT 
it will halt at some point because by the definition of 
random, 0 must occur... else it's not random.


So, by the fact that random, must cover the entire range, even 
if it takes it an infinitely long time(so to speak), we know 
that the program must halt. We don't care how long it will 
take but just that we can decide that it will.


The only way you could be right is if random wasn't random and 
0 was never returned... in that case the program would not 
halt... BUT then we could decide that it never would halt...


In both cases, we can decide the outcome... if random is known 
to produce 0 then it will halt, if it can't... then it won't.


But random must produce a 0 or not a 0 in an infinite amount 
of time. (either 0 is in the range of random or not).


That is, the halting state of the program is not random even 
though it looks like it. (again, it's not how long it takes 
but if we can decide the outcome... which, in this case, rests 
on the decidability of random)


Another way to see this, flipping a fair coin has 0 
probability of producing an infinite series of tails.


Why?

After N flips, the probability of having flipped N tails in a row 
is (1/2)^N, which goes to 0.


Ok, I think I get what you mean now. The 2 states of interest 
for the halting problem are, for a given input:


1) program *can* stop
2) program *will not* stop

is that correct?


A program will either halt or not halt, the question is, can we 
decide. Rather:


A program will either halt or not halt, or be impossible to tell.

We'd like to think we know if a program will stop or not, but we 
can't always know that... there are just some strange programs 
that we can't figure out. The program is sort of a superposition 
of halting and not halting... sort of like Schrödinger's cat.


For example, it is impossible to know if Schrödinger's cat is 
alive or dead until we open the box (but suppose we never get to 
open the box).


Here is another example:

main() { readln(); }

Does the program halt or not?

Yes! Just because it depends on what the user does does not 
change the fact that the program either halts or it doesn't.


Basically the halting problem deals with the structural aspect of 
the program itself and not the inputs on it. (this does not mean 
that the input is not required)


Here is the only example that comes to mind.

main(S) { for(;;) if (S subset of S) halt; }


OK? Easy program, right? It just checks whether S is a subset of 
itself and halts if so, else doesn't.

But what happens when we call it with the set of all sets that do 
not contain themselves?

Such a program is indeterminate. Why? Because the set of all sets 
that do not contain themselves both contains itself and doesn't 
contain itself. (the if statement can't be computed)


i.e., Let S = Set of all sets that do not contain themselves.

If S doesn't contain itself, then by definition S is a subset of 
itself... which is contradictory.


If S contains itself, then again, by definition, S is a set that 
does not contain itself, which is contradictory.



Hence, the halting state of the program given above can't be 
known... or can't be determined, or is undecidable.


All you have to do is ask yourself: does it halt (for the given 
input)? You can't even reason about it, because if it does halt 
then it doesn't halt.











Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread Walter Bright

On 7/29/2013 5:28 AM, John Colvin wrote:

Seriously... Walter wouldn't have got his mechanical engineering degree if he
didn't know how to calculate a speed properly.


It's a grade school concept :-)

A college freshman physics problem would be calculating the delta V of a rocket 
fired in space given the fuel weight, rocket empty weight, thrust, etc.


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread Walter Bright

On 7/29/2013 4:45 AM, Leandro Motta Barros wrote:

Well, I kinda have the same feeling when using it. For my ~10kloc
project, I still haven't felt a real need to use a real build system.
I just dmd *.d. If any measurable time passes without any error
message appearing in the console, I know that my code compiled successfully
(and it is the linker that is running now).


That goes back to the interesting effect that every order of magnitude 
improvement in compile speed has a transformative effect on development procedure.


(For example, if it took overnight to compile, Brad Roberts' autotester would be 
nigh unusable in its existing form.)


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS

On Monday, 29 July 2013 at 12:28:16 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 10:15:31 UTC, JS wrote:

On Thursday, 25 July 2013 at 21:27:47 UTC, Walter Bright wrote:

On 7/25/2013 11:49 AM, Dmitry S wrote:
I am also confused by the numbers. What I see at the end of 
the article is
21.56 seconds, and the latest development version does it 
in 12.19, which is

really a 43% improvement. (Which is really great too.)


21.56/12.19 is 1.77, i.e. a 75% improvement in speed.

A reduction in time would be the reciprocal of that.



Actually, it is a 43% speed improvement. 0.43*21.56 = 9.27s


So if it is 43% faster, it means it's reduced the time by 
9.27s or, 21.56 - 9.27 = 12.28 seconds total.


Now, if we started at 12.28 seconds and it jumped to 21.56 
then it would be 21.56/12.19 = 1.77 == 77% longer.


21.56/12.19 != 12.19/21.56.

The order matters.

To make it obvious. Suppose the running time is 20 seconds. 
You optimize it, it is 100% **faster** (= 1.0*20 = 20 
seconds), then it takes 0 seconds (20 - 20).


That is how you fail a physics class.

s = d/t  =>  t = d/s

100% increase in s = 2*s
let s_new = 2*s

t_new = d / s_new

let d = 1 program  (s is measured in programs / unit time)

therefore: t_new = 1 / s_new  =  1 / (2 * s)  =  0.5 * 1/s
 = 0.5 * t


Seriously... Walter wouldn't have got his mechanical 
engineering degree if he didn't know how to calculate a speed 
properly.


I'm sorry but a percentage is not related to distance, speed, or 
time.


A percentage is a relative quantity that depends on a base for 
reference. Speed, time, and distance are not relative.



let d = 1 program  (s is measured in programs / unit time)


which is nonsense... programs / unit time?

Trying to use distance and speed as a measure of performance of a 
program is just ridiculous. The only thing that has any meaning 
is the execution time, and the way to compare them is taking the 
ratio of the old to new, which gives a percentage change. If the 
ratio > 1 then it is an increase, if < 1 then it is a decrease.


Btw, it should be

t_new = d_new/s_new

and the proper way to calculate a percentage change in time would 
be


t_new/t_old = d_new/s_new*s_old/d_old = d_new/d_old / 
(s_new/s_old)




If we assume the distance is constant, say it is the distance 
the program must travel from start to finish, then d_new = d_old 
and


t_new/t_old = s_old/s_new

or

p = t_new/t_old = s_old/s_new is the percentage change of the 
program.


Note that speed is the reciprocal of the time side; if you 
interpret it wrong for the program (it's not time) then you'll get 
the wrong answer.



21.56/12.19 = 1.77 == 77% (if you dump the 1 for some reason)
12.19/21.56 = 0.56 == 56%

but only one is right... Again, it should be obvious:

Starting at 21.56, let's round that to 20s. Ended at 12.19s, 
let's round that to 10s. 10 seconds is half of 20s, not 75%(or 
25%). Note how close 50% is to 56% with how close the rounding 
is. It's no coincidence...


It seems some people have to go back to kindergarten and study 
percentages!


(again, if we started with 12 seconds and went to 21 seconds, it 
would be a near 75% increase. But a 75% increase is not a 75% 
decrease)


Please study up on basic math before building any bridges. I know 
computers have made everyone dumb


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread John Colvin

On Monday, 29 July 2013 at 18:34:16 UTC, Walter Bright wrote:

On 7/29/2013 5:28 AM, John Colvin wrote:
Seriously... Walter wouldn't have got his mechanical 
engineering degree if he

didn't know how to calculate a speed properly.


It's a grade school concept :-)

A college freshman physics problem would be calculating the 
delta V of a rocket fired in space given the fuel weight, 
rocket empty weight, thrust, etc.


Physics graduate / soon to be PhD student here :) It's sad how 
few rockets were involved in my degree...


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread Walter Bright

On 7/29/2013 12:08 PM, JS wrote:

Trying to use distance and speed as a measure of performance of a program is
just ridiculous.


If you google "program execution speed" you'll find it's a commonly used term. 
"Lines per second" is a common measure of compiler execution speed - google 
"compiler lines per second" and see.




(again, if we started with 12 second and went to 21 seconds, it would be a near
75% increase. But a 75% increase is not a 75% decrease)


Speed is the reciprocal of time, meaning a decrease in time is an increase in 
speed.



Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread John Colvin

On Monday, 29 July 2013 at 19:08:28 UTC, JS wrote:

On Monday, 29 July 2013 at 12:28:16 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 10:15:31 UTC, JS wrote:
On Thursday, 25 July 2013 at 21:27:47 UTC, Walter Bright 
wrote:

On 7/25/2013 11:49 AM, Dmitry S wrote:
I am also confused by the numbers. What I see at the end of 
the article is
21.56 seconds, and the latest development version does it 
in 12.19, which is

really a 43% improvement. (Which is really great too.)


21.56/12.19 is 1.77, i.e. a 75% improvement in speed.

A reduction in time would be the reciprocal of that.



Actually, it is a 43% speed improvement. 0.43*21.56 = 9.27s


So if it is 43% faster, it means it's reduced the time by 
9.27s or, 21.56 - 9.27 = 12.28 seconds total.


Now, if we started at 12.28 seconds and it jumped to 21.56 
then it would be 21.56/12.19 = 1.77 == 77% longer.


21.56/12.19 != 12.19/21.56.

The order matters.

To make it obvious. Suppose the running time is 20 seconds. 
You optimize it, it is 100% **faster** (= 1.0*20 = 20 
seconds), then it takes 0 seconds (20 - 20).


That is how you fail a physics class.

s = d/t  =>  t = d/s

100% increase in s = 2*s
let s_new = 2*s

t_new = d / s_new

let d = 1 program  (s is measured in programs / unit time)

therefore: t_new = 1 / s_new  =  1 / (2 * s)  =  0.5 * 1/s
= 0.5 * t


Seriously... Walter wouldn't have got his mechanical 
engineering degree if he didn't know how to calculate a speed 
properly.


I'm sorry but a percentage is not related to distance, speed, 
or time.


A percentage is a relative quantity that depends on a base for 
reference. Speed, time, and distance are not relative.



let d = 1 program  (s is measured in programs / unit time)


which is nonsense... programs / unit time?

Trying to use distance and speed as a measure of performance of 
a program is just ridiculous. The only thing that has any 
meaning is the execution time, and the way to compare them is 
taking the ratio of the old to new, which gives a percentage 
change. If the ratio > 1 then it is an increase, if < 1 then 
it is a decrease.


Btw, it should be

t_new = d_new/s_new

and the proper way to calculate a percentage change in time 
would be


t_new/t_old = d_new/s_new*s_old/d_old = d_new/d_old / 
(s_new/s_old)




If we assume the distance is constant, say it is the distance 
the program must travel from start to finish, then d_new = 
d_old and


t_new/t_old = s_old/s_new

or

p = t_new/t_old = s_old/s_new is the percentage change of the 
program.


Note that speed is the reciprocal of the time side; if you 
interpret it wrong for the program (it's not time) then you'll 
get the wrong answer.



21.56/12.19 = 1.77 == 77% (if you dump the 1 for some reason)
12.19/21.56 = 0.56 == 56%

but only one is right... Again, it should be obvious:

Starting at 21.56, let's round that to 20s. Ended at 12.19s, 
let's round that to 10s. 10 seconds is half of 20s, not 75%(or 
25%). Note how close 50% is to 56% with how close the rounding 
is. It's no coincidence...


It seems some people have to go back to kindergarten and study 
percentages!


(again, if we started with 12 second and went to 21 seconds, it 
would be a near 75% increase. But a 75% increase is not a 75% 
decrease)


Please study up on basic math before building any bridges. I 
know computers have made everyone dumb


And again:

speed of original = f_old = 1 / 21.56 compilations per second
speed of new  = f_new = 1 / 12.19 compilations per second

It's a frequency really, so I'm using f

change in speed = delta_f = f_new - f_old = (1 / 12.19) - (1 / 
21.56)


proportional change in speed = delta_f / f_old = (f_new / f_old) 
- 1

 = ((1 / 12.19) / (1 / 21.56)) - 1
 = 0.769

percentage change in speed = 100 * 0.769 = 76.9%


If something does the same work in 25% of the time, it is 1/0.25 
= 4 times faster, i.e. a 300% increase in speed.
After Walter's optimisations, dmd did the same work in 56.5% of 
the time, which is 1/0.565 = 1.769 times faster, representing a 
76.9% increase in speed.



The definition it's all coming from:
percentage change = 100*(new - old)/old
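Plugging the thread's timings into that definition (a quick Python check):

```python
def pct_change(new, old):
    return 100 * (new - old) / old

t_old, t_new = 21.56, 12.19

# Speed is work per unit time, so for fixed work speed ~ 1/t.
print(round(pct_change(1 / t_new, 1 / t_old), 1))   # speed increase
print(round(pct_change(t_new, t_old), 1))           # time change (negative)
```

The same definition gives roughly +76.9 when applied to speeds and roughly -43.5 when applied to times.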


Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread Ali Çehreli

On 07/29/2013 12:08 PM, JS wrote:

 It seems some people have to go back to kindergarten and study 
percentages!


Do you seriously think people who follow this forum need to relearn what 
a percentage is? :)


 (again, if we started with 12 second and went to 21 seconds, it would be
 a near 75% increase. But a 75% increase is not a 75% decrease)

Everyone knows that.

 I know computers have made everyone dumb

Not me.

Ali



Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread JS

On Monday, 29 July 2013 at 19:38:51 UTC, Walter Bright wrote:

On 7/29/2013 12:08 PM, JS wrote:
Trying to use distance and speed as a measure of performance 
of a program is

just ridiculous.


If you google "program execution speed" you'll find it's a 
commonly used term. "Lines per second" is a common measure of 
compiler execution speed - google "compiler lines per second" 
and see.



(again, if we started with 12 second and went to 21 seconds, 
it would be a near

75% increase. But a 75% increase is not a 75% decrease)


Speed is the reciprocal of time, meaning a decrease in time is 
an increase in speed.


You are right, sorry. There is no difference.

I think the issue is interpretation. When I read X% increase in 
speed I think X% faster [in time].


Since you are using speed in a technical way, then it works. I 
think it is deceptive, in some sense... although not necessarily 
intentional.


The reason is very few people measure performance of a program in 
any other way than the time it takes to execute the program. That 
is all that matters in most cases... and in most cases lines per 
second mean nothing... but I guess in compilers it is more useful.



What I'm now wondering is why you chose to use % increase in 
speed rather than % decrease in time? Is it because it is a 
larger number and looks more impressive?


I think 99.% of people using D only care about the absolute 
time it takes to compile their code, and giving a number that 
they can actually use directly (instead of having to calculate 
first) seems more useful.


By knowing you *sped* up the compiler so it is 43% faster lets me 
know that I should expect compilation time of my code to be 
approximately cut in half.


When you say 75% increase in speed I have to actually do some 
calculation and hopefully also interpret speed properly.


Nowhere in the article do you refer to the lines per second or 
any technical definition of speed.


It's a somewhat informal article, but you are using a rather 
formal definition of speed, and it does not give the user as 
obvious a metric as just giving them the percentage change in 
time would.






Re: Article: Increasing the D Compiler Speed by Over 75%

2013-07-29 Thread monarch_dodra

On Monday, 29 July 2013 at 20:19:34 UTC, John Colvin wrote:

On Monday, 29 July 2013 at 19:08:28 UTC, JS wrote:
Please study up on basic math before building any bridges. I 
know computers have made everyone dumb


And again:



Honestly, I don't know why you are still trying... At this point, 
it's not the math that's a problem anymore, it's basic 
communication.


Back to the main subject: Congrats Walter! Those are some 
incredible numbers ;)


Re: GoingNative 2013

2013-07-29 Thread Brad Anderson

On Monday, 29 July 2013 at 22:12:40 UTC, Walter Bright wrote:

http://channel9.msdn.com/Events/GoingNative/2013

The last one was a lot of fun, so I signed up for this one, 
too. Note that Andrei is a speaker! Recommended. See y'all 
there!


(P.S. It's entirely possible that I may get my mythical '72 
Dodge running in time for this, and I can drive it to the 
conference. It blarts out enough CO2 to melt at least 3 
Priuses.)


They said they may not live stream it this year like they did 
last year.  That's a shame because it was a lot of fun heckling 
Andrei in #d :P.


Re: GoingNative 2013

2013-07-29 Thread Manu
On 30 July 2013 08:12, Walter Bright newshou...@digitalmars.com wrote:

 http://channel9.msdn.com/Events/GoingNative/2013

 The last one was a lot of fun, so I signed up for this one, too. Note that
 Andrei is a speaker! Recommended. See y'all there!

 (P.S. It's entirely possible that I may get my mythical '72 Dodge running
 in time for this, and I can drive it to the conference. It blarts out
 enough CO2 to melt at least 3 Priuses.)


On a possibly related note, the north pole is actually a lake this
summer... (mouse-swipe the picture left)
http://www.huffingtonpost.com/2013/07/25/north-pole-melting-leaves_n_3652373.html


Re: GoingNative 2013

2013-07-29 Thread Walter Bright

On 7/29/2013 7:25 PM, Manu wrote:

On 30 July 2013 08:12, Walter Bright newshou...@digitalmars.com wrote:

http://channel9.msdn.com/Events/GoingNative/2013

The last one was a lot of fun, so I signed up for this one, too. Note that
Andrei is a speaker! Recommended. See y'all there!

(P.S. It's entirely possible that I may get my mythical '72 Dodge running in
time for this, and I can drive it to the conference. It blarts out enough
CO2 to melt at least 3 Priuses.)


On a possibly related note, the north pole is actually a lake this summer...
(mouse-swipe the picture left)
http://www.huffingtonpost.com/2013/07/25/north-pole-melting-leaves_n_3652373.html


Not my fault, I've put maybe 6 miles on the car in 25 years!