On 02/21/2011 10:17 PM, so wrote:
If one doesn't know what floating point is and insists on using it, it is
his own responsibility to face the consequences.

I don't buy this argument.

Why not? Is it a logical flaw on my part, or is the statement somewhat harsh?
Since I don't think it is the former, I will give an example for the latter.
I am a self-taught programmer and I too made big mistakes when using FP;
probably I still do, since it is a strange beast to deal with even if you know
all about it.
For this reason it is somewhat understandable for people like me to fall into
this kind of trap, but can we say the same thing for others?
Do they have this excuse? Not knowing the fundamentals of FP and using it anyway?

I understand your position. But I don't share it.
The problem is that binary floating point representations of numbers introduce a whole set of traps, because they are implicitly supposed to be *the* type representing "fractional" or real numbers, yet they cannot do that in a way that matches our (largely unconscious) intuitions in the domain, themselves (I guess) a by-product of years of manipulation at school. The simple notation "1.1" comes with a huge baggage of pre-existing knowledge for everyone, programmers included.

We should not blindly ignore that fact and put the burden on the users' shoulders, without even an honest attempt at solving the issue.
When a programmer writes:
        x = 1.1;
or
        x = 1/3.0;
the meaning is just that, and refers to all this knowledge. In most cases there is no intention (not even an implicit one) to use binary floating point numbers; the intent is rather to have a representation of the plain arithmetic values as written down. Hope I'm clear. Very few people (the ones who have repeatedly been caught by the conceptual traps) will have an alarm bell ring when writing this and consequently take the precautions required to avoid the traps; the rest subsequently fall into them and... have buggy code.
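To make the mismatch concrete, here is a tiny D sketch (the exact digits printed depend on the platform's double format, IEEE 754 binary64 on common targets):

        import std.stdio;

        void main()
        {
            double x = 1.1;
            double y = 1 / 3.0;
            // Neither 1.1 nor 1/3 has an exact binary representation,
            // so the stored values only approximate what was written.
            writefln("%.20f", x);   // e.g. 1.10000000000000008882
            writefln("%.20f", y);   // e.g. 0.33333333333333331483
        }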

This may be a bit different if the notation did not recall all of that 
knowledge:
        x = 1.1f; // alarm bell!?

I would advocate that a general-purpose language use a different representation for ordinary needs of fractional/real numbers, one that does not introduce such traps and/or warns about them (via warnings/errors, as in Bearophile's proposal). One possibility may be fixed point with a /decimal/ scale factor (the mantissa being a binary or decimal integer). Other possibilities: true rationals, or decimal 'reals'. The issue is that any of those solutions has a cost (even more so on modern machines with native float arithmetic). But numerically intensive apps would just use floats, /explicitly/ (and hopefully correctly).
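As a rough illustration of the decimal fixed-point idea (the name Dec2 is invented here, positive values only; a real type would need overflow checks, negatives, division, etc.):

        import std.stdio;

        // A value stored as an exact integer count of hundredths (scale 10^-2),
        // so decimal literals like 1.10 lose nothing when stored.
        struct Dec2
        {
            long hundredths;

            Dec2 opBinary(string op : "+")(Dec2 rhs) const
            {
                return Dec2(hundredths + rhs.hundredths);
            }

            void print() const
            {
                writefln("%d.%02d", hundredths / 100, hundredths % 100);
            }
        }

        void main()
        {
            auto a = Dec2(110);   // 1.10
            auto b = Dec2(220);   // 2.20
            (a + b).print();      // prints 3.30, exactly; no binary rounding
        }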

A similar issue is that of signed <--> unsigned ints. I would simply make the default range of unsigned ints a subset of the signed range, and leave the full range available for people who really need it, to be used explicitly.
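For the record, the kind of surprise this is about (a small sketch; as far as I know current compilers accept the mixed comparison silently, following the C conversion rules):

        import std.stdio;

        void main()
        {
            int  i = -1;
            uint u =  1;
            // Mixed comparison: i is converted to uint, so -1 becomes a huge
            // positive value and the test goes the "wrong" way.
            writeln(i < u);        // prints false

            int[] a;
            // length is unsigned (size_t); subtracting from zero wraps around.
            writeln(a.length - 1); // prints 18446744073709551615 on 64-bit
        }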

Anyway, such solutions may not be good for D, being (also) a systems programming language in the C line. Still, I think it's worth exploring the topic instead of letting bug-prone features silently sit at the deep core of the language.

Denis
--
_________________
vita es estrany
spir.wikidot.com
