On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote:
Hi, all.
Can anybody explain to me why

void main()
{
        import std.numeric;
        assert(gcd(0.5,32) == 0.5);
        assert(gcd(0.2,32) == 0.2);
}

fails on the second assert?

I'm aware that calculating gcd on doubles is not as obvious as on integers. But if the library accepts doubles, and the result is occasionally correct, why isn't it always the case?

If the type isn't a builtin integral type and can't be bit-shifted, `gcd` falls back to the Euclidean algorithm in order to support custom number types, so the call above reduces to:

---
double gcd(double a, double b)
{
    while (b != 0)
    {
        auto t = b;
        b = a % b;
        a = t;
    }
    return a;
}
---

The issue boils down to the fact that `32 % 0.2` yields roughly `0.2` instead of exactly `0.0`, so the best answer I can give is "because floating-point calculations are approximations". I'm actually not sure whether this is a bug in fmod or expected behaviour, but I tend towards the latter.

Is there a workaround, maybe?

If you know how many digits of precision you need after the decimal point, you can multiply beforehand, compute the gcd in the integer realm, and divide afterwards (be warned: the below is only an example implementation for readability, it does not do the required overflow checks for the double -> ulong conversion!):

---
import std.traits : isFloatingPoint;
T gcd(ubyte precision, T)(T a, T b) if (isFloatingPoint!T)
{
    import std.numeric : _gcd = gcd;
    immutable T coeff = 10 ^^ precision;
    return (cast(T) _gcd(cast(ulong) (a * coeff),
                         cast(ulong) (b * coeff))) / coeff;
}
---
