On 6/22/05, Michael McLay <[EMAIL PROTECTED]> wrote:
> This idea is dead on arrival. The change would break many applications and
> modules. A successful proposal cannot break backwards compatibility. Adding a
> dpython interpreter to the current code base is one possibility.

Is there actually much code around that relies on the particular
precision of 32- or 64-bit binary floats for arithmetic, and ceases
working when higher precision is available? Note that functions like
struct.pack would be unaffected. If compatibility is a problem, this
could still be a possibility for Python 3.0.
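
For example, struct already serializes to the fixed IEEE 754 wire
formats, so it does not care how much precision the in-memory float
happens to carry:

    import struct

    # struct's 'd' format always produces an 8-byte IEEE 754 binary64
    # value, regardless of how the float in memory was computed.
    data = struct.pack('<d', 0.1)
    print(len(data))                  # 8
    value, = struct.unpack('<d', data)
    print(value)                      # 0.1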

In either case, compatibility can be ensured by allowing both n-digit
decimal and hardware binary precision for floats, settable via a float
context. Then the backwards-compatible binary mode can be the default,
and "decimal mode" can be set with one line of code. d-suffixed
literals would create floats with decimal precision.
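
As a rough sketch of what I mean (using the existing decimal module's
context as the model; the float context and the d-suffix are only
proposed, so decimal.Decimal stands in for the built-in float here):

    from decimal import Decimal, getcontext, localcontext

    # One line switches the working precision -- the analogue of
    # turning on "decimal mode" for the built-in float type.
    getcontext().prec = 50
    print(Decimal(1) / Decimal(3))        # 50 significant digits

    # Precision can also be changed for a limited scope:
    with localcontext() as ctx:
        ctx.prec = 6
        print(Decimal(1) / Decimal(7))    # 0.142857

    # Under the proposal, a literal like 1.1d would construct such a
    # value directly instead of the Decimal("1.1") spelling.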

There is the alternative of providing decimal literals by using
separate decimal and binary float base types, but in my eyes this
would be redundant. The primary uses of binary floats are performance
and compatibility, and both can be achieved with my proposal without
sacrificing the simplicity and elegance of having a single type to
represent non-integral numbers. It makes more sense to extend the
float type with the power and versatility of the decimal module than
to have a special type side by side with a default type that is less
capable.
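
To make the end-user difference concrete (again with today's decimal
module standing in for what the extended float type would do):

    from decimal import Decimal

    # Binary floats cannot represent 0.1 or 0.2 exactly, which is the
    # usual source of surprise:
    print(0.1 + 0.2)                       # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                # False

    # The same arithmetic with decimal semantics behaves as expected:
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True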

Fredrik