I apologize for digressing a little further - just to share some
insights with other learners.
I had been wondering why my binary was so big (> 4M), and discovered the
`gdc -Wall -O2 -frelease -shared-libphobos` options (now >200K).
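For reference, the full build command would then look something like this (the source file name is just assumed here):

```
gdc -Wall -O2 -frelease -shared-libphobos leibniz.d -o leibniz
```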
Then I tried to avoid the GC and just learnt this: the GC in the
Leibniz code is there only for the writeln. With a change to (again,
standard C) printf the
`@nogc` modifier can be applied, and the binary then gets down to
~17K, a size comparable to the C counterpart.
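Just to illustrate why writeln has to go - a minimal sketch (the exact error message depends on the compiler):

```d
import std.stdio : writeln;

@nogc void main() {
    // rejected at compile time: writeln may allocate,
    // so it cannot be called from a @nogc function
    writeln("hello");
}
```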
Another observation, regarding precision:
The iteration proceeds in the wrong order. Adding the small
contributions first and the bigger ones last leads to less loss,
because otherwise the small parts get dropped once they fall below
the LSB limit of the already large real/double partial sum.
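A small sketch, separate from the timed program below, that contrasts the two summation orders (the term count of 100_000_000 is an arbitrary choice, and which trailing digits differ depends on it):

```d
import core.stdc.stdio : printf;

double leibnizForward(int it) {
    double n = 0.0;
    for (int i = 0; i < it; i++)          // biggest contributions first
        n += ((i % 2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
    return n * 4.0;
}

double leibnizBackward(int it) {
    double n = 0.0;
    for (int i = it - 1; i >= 0; i--)     // smallest contributions first
        n += ((i % 2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
    return n * 4.0;
}

void main() {
    enum N = 100_000_000;
    printf("forward : %.16f\n", leibnizForward(N));
    printf("backward: %.16f\n", leibnizBackward(N));
}
```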
So I'm now at this code (dropping the average of 20 iterations as
unnecessary):
```d
// import std.stdio; // writeln would cause the garbage collector to be included
import core.stdc.stdio: printf;
import std.datetime.stopwatch;
const int ITERATIONS = 1_000_000_000;
@nogc pure double leibniz(int it) { // sum up the small values first
    // half-weighted term at index it - equivalent to averaging two consecutive partial sums
    double n = 0.5 * ((it % 2) ? -1.0 : 1.0) / (it * 2.0 + 1.0);
    // remaining terms, added from smallest to largest
    for (int i = it - 1; i >= 0; i--)
        n += ((i % 2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
    return n * 4.0;
}
@nogc void main() {
    double result;
    double total_time = 0;

    auto sw = StopWatch(AutoStart.yes);
    result = leibniz(ITERATIONS);
    sw.stop();
    total_time = sw.peek.total!"nsecs";

    printf("%.16f\n", result);
    printf("Execution time: %f\n", total_time / 1e9);
}
```
result:
```
3.1415926535897931
Execution time: 1.068111
```