On Thu, Jan 24, 2013 at 03:18:01PM -0800, Walter Bright wrote:
> On 1/24/2013 1:13 PM, H. S. Teoh wrote:
> >On Thu, Jan 24, 2013 at 12:15:07PM -0800, Walter Bright wrote:
> >>On 1/24/2013 8:36 AM, H. S. Teoh wrote:
> >>>Nevertheless, I also have made the same observation that code
> >>>produced by gdc consistently outperforms code produced by dmd.
> >>>Usually by about 20-30%, sometimes as much as 50-60%, IME. That's a
> >>>pretty big discrepancy for me, esp. when I'm doing compute-intensive
> >>>geometric computations.
> >>
> >>Do you mean floating point code? 32 or 64 bit?
> >
> >Floating-point, 64-bit, tested on dmd -O vs. gdc -O3.
>
> Next, are you using floats, doubles, or reals?

Both reals and floats. Well, let's get some real measurements. Here's a
quick run-through of various test programs I have lying around:

Test program #1 (iterating a 2-variable function over a grid; rough
sketch of the kind of loop I mean below), uses reals:

- Test case with n=400:
   Using DMD: ~8 seconds (consistently)
   Using GDC: ~6 seconds (consistently)
   * So the DMD version is 33% slower than the GDC version.
     (That is, 8/6*100 = 133%, so 33% slower.)

- Test case with n=600:
   Using DMD: ~27 seconds (consistently)
   Using GDC: ~19 seconds (consistently)
   * So the DMD version is 42% slower than the GDC version.

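(None of the snippets below are the actual test sources; they're just
minimal sketches so you can see what kind of code I'm talking about. For
test #1, it boils down to evaluating a 2-variable function at every
point of a grid, with a made-up f() standing in for the real function:)

import std.math : cos, sin;
import std.stdio : writefln;

// Made-up stand-in for the actual 2-variable function.
real f(real x, real y)
{
    return sin(x)*cos(y) + x*y;
}

void main()
{
    enum n = 400;       // grid resolution (the n in the timings above)
    real sum = 0.0;

    // Evaluate the function at every point of an n-by-n grid.
    foreach (i; 0 .. n)
        foreach (j; 0 .. n)
            sum += f(cast(real)i / n, cast(real)j / n);

    writefln("sum = %s", sum);
}
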
Test program #2 (terrain generation simulator), uses floats:

(The running time of this one depends on the RNG, so I fixed the seed
value in order to make a fair comparison; see the note below the
numbers.)

- Test case with seed=380170304, n=20 with water & wind simulation:
   Using DMD: ~10 seconds (consistently)
   Using GDC: ~7 seconds (consistently)
   * So the DMD version is 42% slower than the GDC version.

- Test case with seed=380170304, n=25 with water & wind simulation:
   Using DMD: ~14 seconds (consistently)
   Using GDC: ~9 seconds (consistently)
   * So the DMD version is 55% slower than the GDC version.

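(Re: fixing the seed: the point is just that the DMD and GDC builds are
fed the same seed, so both simulate exactly the same terrain and the
timings are comparable. A sketch of the idea, assuming the usual
std.random setup; the simulator's actual RNG handling is more involved:)

import std.random : Random, uniform;
import std.stdio : writeln;

void main()
{
    // Same fixed seed for both the DMD and GDC builds, so that both
    // runs make exactly the same sequence of random decisions.
    auto rng = Random(380170304);

    // Made-up stand-in for the simulation's random decisions (floats).
    float height = 0.0f;
    foreach (step; 0 .. 1000)
        height += uniform(-1.0f, 1.0f, rng);

    writeln("final height = ", height);
}
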
Test program #3 (enumeration of coordinates of n-dimensional polytopes;
sketch of the general idea below), uses reals:

- All permutations and changes of sign of <1,2,3,4,5,6,7>:
   Using DMD: ~4 seconds (consistently)
   Using GDC: ~3 seconds (consistently)
   * So the DMD version is 33% slower than the GDC version.

- All permutations and changes of sign of <1,2,3,4,5,6,7,7>:
   Using DMD: ~41 seconds (consistently)
   Using GDC: ~27 seconds (consistently)
   * So the DMD version is 51% slower than the GDC version.

- Even permutations and all changes of sign of <1,2,3,4,5,6,7,8>:
   Using DMD: ~40 seconds (consistently)
   Using GDC: ~27 seconds (consistently)
   * So the DMD version is 48% slower than the GDC version.

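(For test #3, "all permutations and changes of sign" means conceptually
something along these lines; the actual program is more involved and
also handles the even-permutations-only cases, so nextPermutation here
is purely for illustration:)

import std.algorithm : nextPermutation;
import std.stdio : writeln;

void main()
{
    real[7] coords = [1, 2, 3, 4, 5, 6, 7];
    ulong count;

    // Enumerate all permutations of the coordinates (starting from the
    // sorted order)...
    do
    {
        // ...and, for each permutation, all 2^7 changes of sign.
        foreach (signs; 0 .. (1 << coords.length))
        {
            real[7] vertex;
            foreach (i, c; coords)
                vertex[i] = (signs & (1 << i)) ? -c : c;
            count++;    // (the real program does actual work with vertex)
        }
    } while (nextPermutation(coords[]));

    writeln(count, " vertices enumerated");  // 7! * 2^7 = 645120
}
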
All test programs were compiled with dmd -O for the DMD version, and gdc
-O3 for the GDC version. The source code is unchanged between the two
compilers, and there are no version()'s that depend on a particular
compiler. The measurements stated above are averages of about 3-4 runs.

As you can see, the performance difference between the two is pretty
clear. I'm pretty sure this isn't only because of floating point
operations, because the above test programs all use a lot of inner
loops, and GDC does some pretty sophisticated loop unrolling and other
such optimizations.

T