On Friday, 8 November 2013 at 04:39:11 UTC, Andrej Mitrovic wrote:
On 11/8/13, Andrej Mitrovic <andrej.mitrov...@gmail.com> wrote:
Anyway in -release -inline -O -noboundscheck mode the sample now works
perfectly smooth!

Well, as long as you use float and not double via -version=CHIP_USE_DOUBLES. Chipmunk actually uses doubles by default, although I'm not sure whether it uses reals for computations (more specifically, whether VC/C++ uses reals). So there's a difference there.

I've done some experiments regarding dmd/ldc comparison.

Machine: Ubuntu 12.04 (x86_64), Intel® Core™ i5-3470 CPU @ 3.20GHz × 4
Compilers: DMD64 D Compiler v2.064; LDC - the LLVM D compiler (0.12.0):
  based on DMD v2.063.2 and LLVM 3.3.1
  Default target: x86_64-unknown-linux-gnu
  Host CPU: core-avx-i

I've made 2 builds:
$ dub --build=release
$ dub --build=release --compiler=ldc2

And 2 runs of:
new_demo -bench -trial
(Note: I've modified the source to make both keys usable simultaneously.) It runs 1000 iterations for every demo in the 'bench' set and prints each demo's time in ms.

DMD output:
5105.89
2451.94
477.079
12709.9
4259.14
775.686
8842.77
4233.86
784.804
939.7
1643.85
1589.28
5368.47
11042.3
380.893
740.671
9.53658

LDC output:
4645.74
2236.77
434.833
10483.6
3577.5
693.307
7339.49
3445.02
627.396
856.486
1291.23
1333.11
4831.46
9002.18
361.624
605.19
9.64545

So the LDC/DMD time ratio is something like 0.81-0.83, in favor of LDC.
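To make the comparison explicit, here is a small Python sketch (not part of the original post) that pairs the two outputs above and computes the per-demo LDC/DMD ratios plus the overall total-time ratio. The numbers are copied verbatim from the DMD and LDC outputs listed earlier.

```python
# Per-demo times in ms, copied from the DMD and LDC outputs above.
dmd = [5105.89, 2451.94, 477.079, 12709.9, 4259.14, 775.686, 8842.77,
       4233.86, 784.804, 939.7, 1643.85, 1589.28, 5368.47, 11042.3,
       380.893, 740.671, 9.53658]
ldc = [4645.74, 2236.77, 434.833, 10483.6, 3577.5, 693.307, 7339.49,
       3445.02, 627.396, 856.486, 1291.23, 1333.11, 4831.46, 9002.18,
       361.624, 605.19, 9.64545]

# Ratio < 1.0 means LDC was faster on that demo.
ratios = [l / d for d, l in zip(dmd, ldc)]
for i, r in enumerate(ratios):
    print(f"demo {i:2d}: {r:.3f}")

# Total wall time across all demos; the long-running demos dominate,
# which pulls the overall ratio toward the low end.
total = sum(ldc) / sum(dmd)
print(f"total: {total:.3f}")
```

For what it's worth, the individual ratios span roughly 0.79-1.01 (the last, shortest demo is the one case where DMD edges ahead), while the total-time ratio lands around 0.84, dominated by the long-running demos.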
