Hi all,

A trivial little benchmark comparing the performance of a function that uses
if(!booleanValue) as a conditional versus one that uses if(isNaN(x)).

Compiled with dmd -O -release -inline -noboundscheck, on my system I find the two
are virtually equivalent: about 30ms for a million if(!boolean) checks versus
about 31ms for a million if(isNaN(floatingpoint)) checks.

With ldmd2 and the same optimizations, I get about 25ms compared to 27-28ms for
the two cases, again with a million realizations of each.

Finally, with gdmd I get 30ms for a million if(!boolean) versus about 150ms for
a million if(isNaN(floatingpoint)).

Code attached, feel free to critique.  Generally I think these results show that
the difference between checking a boolean and checking isNaN is negligible, but
I'm surprised by the relatively bad performance with gdmd and wonder whether
there's some particular reason for it.  (A variant that times each loop as a
whole is sketched after the attached code.)

The relevance is to a fix I'm preparing for this bug:
http://d.puremagic.com/issues/show_bug.cgi?id=10322

See also: http://forum.dlang.org/post/lcrwybqszclitzrav...@forum.dlang.org

Thanks in advance for any feedback :-)

Best wishes,

    -- Joe
import std.datetime, std.range, std.stdio;
import std.math: isNaN;

void main()
{
    // In D, bools default-initialise to false and doubles to NaN, so each
    // branch below is taken for every one of the million elements.
    auto boolArr = new bool[1_000_000];
    auto doubArr = new double[1_000_000];
    StopWatch watch;
    size_t i = 0;

    foreach(b; boolArr)
    {
        watch.start();
        if(!b)
        {
            ++i;
        }
        watch.stop();
    }

    writeln("Time for ", i, " if(false)'s: ", watch.peek.usecs, " microseconds.");

    watch.reset;
    i = 0;

    foreach(d; doubArr)
    {
        watch.start();
        if(isNaN(d))
        {
            ++i;
        }
        watch.stop();
    }

    writeln("Time for ", i, " isNaN's: ", watch.peek.usecs, " microseconds.");
}
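
For comparison, here is a minimal whole-loop variant of the same benchmark (a
sketch only, using the same StopWatch API as above): the watch runs once around
each loop instead of being started and stopped per element, so any per-call
start()/stop() overhead is excluded from the measured time.

import std.datetime, std.stdio;
import std.math: isNaN;

void main()
{
    // Same setup as the attached code: bools default to false, doubles to NaN.
    auto boolArr = new bool[1_000_000];
    auto doubArr = new double[1_000_000];
    StopWatch watch;
    size_t i = 0;

    // Time the entire boolean-check loop in one go.
    watch.start();
    foreach(b; boolArr)
    {
        if(!b)
        {
            ++i;
        }
    }
    watch.stop();
    writeln("Whole-loop time for ", i, " if(!bool)'s: ", watch.peek.usecs, " microseconds.");

    watch.reset();
    i = 0;

    // Time the entire isNaN-check loop in one go.
    watch.start();
    foreach(d; doubArr)
    {
        if(isNaN(d))
        {
            ++i;
        }
    }
    watch.stop();
    writeln("Whole-loop time for ", i, " isNaN's: ", watch.peek.usecs, " microseconds.");
}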
