On 12/22/2010 11:04 PM, Andreas Mayer wrote:
To see what performance advantage D would give me over using a scripting 
language, I made a small benchmark. It consists of this code:

    auto L = iota(0.0, 10000000.0);
    auto L2 = map!"a / 2"(L);
    auto L3 = map!"a + 2"(L2);
    auto V = reduce!"a + b"(L3);

It runs in 281 ms on my computer.
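For completeness, the snippet above compiles as a standalone program roughly like this (a sketch with the Phobos imports filled in; the timing code from the full pastebin version is omitted):

```d
import std.algorithm : map, reduce;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto L  = iota(0.0, 10_000_000.0); // 0.0, 1.0, ..., 9_999_999.0
    auto L2 = map!"a / 2"(L);
    auto L3 = map!"a + 2"(L2);
    auto V  = reduce!"a + b"(L3);     // sum of the mapped values
    writeln(V);
}
```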

The same code in Lua (using LuaJIT) runs in 23 ms.

That's about 10 times faster. I would have expected D to be faster. Did I do 
something wrong?

The first Lua version uses a simplified design. I thought that might be unfair 
to ranges, which are more complicated. You could argue that ranges have more 
features and do more work. To make the comparison fair, I wrote a second Lua 
version of the benchmark that emulates ranges. It still runs in only 29 ms.

The full D version is here: http://pastebin.com/R5AGHyPx
The Lua version: http://pastebin.com/Sa7rp6uz
Lua version that emulates ranges: http://pastebin.com/eAKMSWyr

Could someone help me solve this mystery?

Or is D, contrary to what I thought, not suitable for high-performance computing? 
What should I do?


I changed the code to this:

    auto L = iota(0, 10000000);
    auto L2 = map!"a / 2.0"(L);
    auto L3 = map!"a + 2"(L2);
    auto V = reduce!"a + b"(L3);

and ripped the caching out of std.algorithm.map. :-)

This made it go from about 1.4 seconds to about 0.4 seconds on my machine. Note that I did no rigorous or scientific testing.

Also, if you really need the performance, you can always rewrite it all as lower-level code.
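For instance, the whole pipeline collapses into a single loop with no range machinery at all (a sketch; the `sumLowLevel` name is mine):

```d
import std.stdio : writeln;

// Hand-written equivalent of the iota/map/reduce pipeline above:
// divide each integer by 2.0, add 2, and accumulate the sum.
double sumLowLevel()
{
    double total = 0.0;
    foreach (i; 0 .. 10_000_000)
        total += i / 2.0 + 2;
    return total;
}

void main()
{
    writeln(sumLowLevel());
}
```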
