On Jan 14, 2008 7:57 PM, Andrew Lentvorski <[EMAIL PROTECTED]> wrote:
> > How the floating point is implemented isn't really important, both a
> > hardware floating point processor and an emulation library are going to
> > give the same result (or at least should).
>
> Wrong.  Not even wrong.  Worse than wrong.
>
> Almost all of the idiocies with floating point arithmetic are with
> "limited precision *binary* floating point".
>
> If we were using "limited precision *decimal* floating point", 95% of
> the stupid problems would go away.  It might be higher than 95%, but the
> important bit is the fact that almost all of our intuition about
> arithmetic would be correct if we were using decimal floating point.
> Sure, 1/3 + 2/3 might still have some issues, but most people who know
> about repeating decimals would get why.  However, things like having
> .1+.1+.1+.1+.1+.1+.1+.1+.1+.1 != 1.0 would not crop up.  (For those not
> in the know, the issue is that 1/10 is a repeating fraction in binary
> and, as such, is not exactly representable in binary.  Consequently 1/10
> in binary is slightly larger or slightly smaller than a true decimal 0.1.)
>
> That's important.  There is no reason for binary floating point to be
> the default in a programming language in this day and age.  In fact, the
> law prevents its use in the financial sector.
>
> I'm not saying we should throw away binary floating point or the ability
> to access it, but it should no longer be the default.
>
> Correct before fast.

Right on. I've been thinking the same thing for years, and consequently
in Cobra decimal is the default type for "0.1", not float. As you say,
correct before fast.

One of my test cases even reads:

    assert .1+.1+.1+.1+.1+.1+.1+.1+.1+.1 == 1.0

That assertion fails in most languages, including Python.
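For anyone who wants to see it firsthand, here's a minimal sketch of the same check in plain Python, using the stdlib decimal module to show the contrast (variable names are mine, just for illustration):

```python
from decimal import Decimal

# Binary floating point: 1/10 is a repeating fraction in base 2,
# so each 0.1 is stored as a nearby approximation and the sum drifts.
float_sum = sum([0.1] * 10)
print(float_sum == 1.0)   # False
print(float_sum)          # 0.9999999999999999

# Decimal floating point: 0.1 is exactly representable,
# so ten of them sum to exactly 1.0.
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum == Decimal("1.0"))  # True
```

Run interactively, the float comparison fails for exactly the repeating-fraction reason described above, while the Decimal version matches the arithmetic everyone expects.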

-Chuck

-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg