Raymond Hettinger wrote:
> The question of where to stack decimals in the hierarchy was
> erroneously being steered by the concept that both decimal and
> binary floats are intrinsically inexact.  But that would be
> incorrect, inexactness is a taint, the numbers themselves are
> always exact.

I don't think that's correct. "Numbers are always exact" is a
simplification that comes from choosing not to attach an inexactness
flag to every value. Without such a flag, we don't really know
whether any given value is exact or not; we can only guess.
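For instance, with plain floats (the variable names here are just
for illustration):

    x = 0.5 + 0.25   # happens to be exact: both operands and the sum
                     # are representable binary fractions
    y = 0.1 + 0.2    # inexact: 0.1 and 0.2 are rounded on input, and
                     # the sum is rounded once more
    # x and y are ordinary floats; nothing in the values records which
    # computation lost information, so "is this exact?" can only be
    # guessed after the fact.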

The reason for regarding certain types as "implicitly inexact" is
something like this: if you start with exact ints and do only int
operations on them, you must end up with exact ints. The same is not
true of float or Decimal: even if you start with exact values, you
can end up with inexact ones.
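You can see this with the decimal module's existing machinery -- the
context does keep an Inexact flag, but it belongs to the context,
not to any particular value:

    from decimal import Decimal, getcontext, Inexact

    ctx = getcontext()
    ctx.clear_flags()

    a = Decimal(1) + Decimal(2)        # exact; no rounding needed
    print(bool(ctx.flags[Inexact]))    # False

    b = Decimal(1) / Decimal(3)        # rounded to the context precision
    print(bool(ctx.flags[Inexact]))    # True -- but the flag lives on
                                       # the context, not on b, so b
                                       # itself doesn't know it is inexact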

I really like Guido's idea of a context flag to control whether
mixing of decimal and binary floats will issue a warning.
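Something along these lines, I imagine -- a rough sketch only, where
the flag and the helper are hypothetical stand-ins for whatever
per-context switch decimal might actually grow:

    import warnings
    from decimal import Decimal

    warn_on_float_mixing = True   # hypothetical "context" flag

    def check_mixing(a, b):
        """Warn if a Decimal and a binary float meet in one operation."""
        mixed = (isinstance(a, Decimal) != isinstance(b, Decimal)
                 and (isinstance(a, float) or isinstance(b, float)))
        if warn_on_float_mixing and mixed:
            warnings.warn("mixing Decimal and binary float", stacklevel=2)

    check_mixing(Decimal("1.1"), 2.2)             # warns
    check_mixing(Decimal("1.1"), Decimal("2.2"))  # silent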

Personally I feel that far too much stuff concerning decimals
is controlled by implicit context parameters. It gives me the
uneasy feeling that I don't know what the heck any given
decimal operation is going to do. It's probably justified in
this case, though.
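For example, the very same expression gives different answers
depending on whatever precision happens to be sitting in the ambient
context:

    from decimal import Decimal, localcontext

    with localcontext() as ctx:
        ctx.prec = 4
        print(Decimal(1) / Decimal(7))   # 0.1429

    with localcontext() as ctx:
        ctx.prec = 28
        print(Decimal(1) / Decimal(7))   # 0.1428571428571428571428571429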

--
Greg
