Re: gdc or ldc for faster programs?

2022-01-29 Thread Siarhei Siamashka via Digitalmars-d-learn

On Saturday, 29 January 2022 at 18:28:06 UTC, Ali Çehreli wrote:
> (And now we know gdc can go about 7% faster with additional
> command line switches.)


No, we don't know this yet ;-) That's just what I said, and I may 
be bullshitting. Or my computer's configuration is significantly 
different from yours, and the exact speedup/slowdown number may be 
different. So please verify it yourself. You can edit your 
`dub.json` file to add the following line to it:


"dflags-gdc": ["-fno-weak-templates"],

Then rebuild your spellout test program with gdc (just like you 
did before), run the benchmarks, and report the results. The 
`-fno-weak-templates` option should show up in the gdc invocation 
command line.
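
For reference, a minimal sketch of what the edited `dub.json` could 
look like (the package name is a placeholder; only the `dflags-gdc` 
line comes from this thread):

```json
{
    "name": "spellout",
    "dflags-gdc": ["-fno-weak-templates"]
}
```

Building with `dub build --compiler=gdc` and adding `-v` should echo 
the full gdc command line, where the flag can be checked.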


Re: gdc or ldc for faster programs?

2022-01-29 Thread max haughton via Digitalmars-d-learn

On Saturday, 29 January 2022 at 18:28:06 UTC, Ali Çehreli wrote:

> On 1/29/22 10:04, Salih Dincer wrote:
>
>> Could you also try the following code with the same
>> configurations?
>
> The program you posted with 2 million random values:
>
> ldc 1.9 seconds
> gdc 2.3 seconds
> dmd 2.8 seconds
>
> I understand such short tests are not definitive, but for a rough
> comparison between the two programs: the last version of my program,
> which used sprintf, takes less time with 2 million numbers:
>
> ldc 0.4 seconds
> gdc 0.5 seconds
> dmd 0.5 seconds
>
> (And now we know gdc can go about 7% faster with additional
> command line switches.)
>
> Ali


You need to be compiling with PGO to test the compilers' optimizers 
to the maximum. Without PGO they have to assume a fairly 
conservative flow through the code, which means things like 
inlining and register allocation are effectively flying blind.
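
For anyone who wants to try it, here is a rough sketch of PGO builds 
with both compilers, assuming a standalone `spellout.d` (the file 
name and workload are placeholders; the flags are the stock GCC/LLVM 
PGO options that gdc and ldc2 accept):

```sh
# gdc: GCC-style PGO. Build instrumented, run a representative
# workload, then rebuild using the collected profile.
gdc -O2 -fprofile-generate spellout.d -o spellout
./spellout                  # writes .gcda profile data
gdc -O2 -fprofile-use spellout.d -o spellout

# ldc: LLVM instrumentation-based PGO.
ldc2 -O2 -fprofile-instr-generate=spellout.profraw spellout.d -of=spellout
./spellout                  # writes spellout.profraw
ldc-profdata merge -output=spellout.profdata spellout.profraw
ldc2 -O2 -fprofile-instr-use=spellout.profdata spellout.d -of=spellout
```

The profiling run should resemble the real benchmark input; a profile 
collected on an unrepresentative workload can steer inlining and 
register allocation the wrong way.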




Re: gdc or ldc for faster programs?

2022-01-29 Thread Ali Çehreli via Digitalmars-d-learn

On 1/29/22 10:04, Salih Dincer wrote:

> Could you also try the following code with the same configurations?

The program you posted with 2 million random values:

ldc 1.9 seconds
gdc 2.3 seconds
dmd 2.8 seconds

I understand such short tests are not definitive, but for a rough 
comparison between the two programs: the last version of my program, 
which used sprintf, takes less time with 2 million numbers:


ldc 0.4 seconds
gdc 0.5 seconds
dmd 0.5 seconds

(And now we know gdc can go about 7% faster with additional command line 
switches.)


Ali



Re: gdc or ldc for faster programs?

2022-01-29 Thread Salih Dincer via Digitalmars-d-learn

On Wednesday, 26 January 2022 at 18:00:41 UTC, Ali Çehreli wrote:

> For completeness (and noise :/) here is the final version of
> the program:


Could you also try the following code with the same 
configurations?


```d
import std.conv, std.stdio;

struct LongScale {
  // A tiny stack of 3-digit groups (0 .. 999).
  struct ShortStack {
    short[] stack;
    size_t index;

    // Bottom of the stack: the lowest 3-digit group.
    @property back() {
      return this.stack[0];
    }

    @property push(short data) {
      this.stack ~= data;
      this.index++;
    }

    @property pop() {
      return this.stack[--this.index];
    }
  }

  ShortStack stack;

  this(long i) {
    // Push the 3-digit groups from least to most significant.
    long s, t = i;
    for (long e = 3; e <= 18; e += 3) {
      s = 10 ^^ e;
      stack.push = cast(short)((t % s) / (s / 1000L));
      t -= t % s;
    }
    stack.push = cast(short)(t / s); // quintillions group
  }

  string toString() {
    string[] scale = [" zero", "thousand", "million",
      "billion", "trillion", "quadrillion", "quintillion"];
    string r;
    // Pop the groups back from most to least significant.
    for (long e = 6; e > 0; e--) {
      auto t = stack.pop;
      r ~= t > 1 ? " " ~ t.to!string : t ? " one" : "";
      r ~= t ? " " ~ scale[e] : "";
    }
    r ~= stack.back ? " " ~ stack.back.to!string : "";
    return r.length ? r : scale[0];
  }
}

void main() {
  long[] inputs = [741, 1_500, 2_001,
    5_005, 1_250_000, 3_000_042, 10_000_000,
    1_000_000, 2_000_000, 100_000, 200_000,
    10_000, 20_000, 1_000, 2_000, 74, 7, 0,
    1_999_999_999_999];

  foreach (long i; inputs) {
    auto spelled = LongScale(i);
    writefln!"%s"(spelled.toString[1 .. $]); // drop the leading space
  }
}
```
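
For what it's worth, hand-tracing the code above suggests output that 
mixes digits with scale words (these lines are my own trace, not 
verified program output):

741                ->  741
1_500              ->  one thousand 500
2_001              ->  2 thousand 1
1_999_999_999_999  ->  one trillion 999 billion 999 million 999 thousand 999
0                  ->  zero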