Adam Olsen wrote:
> I guess my confusion revolves around float to Decimal. Is lossless
> conversion a good thing in python, or is prohibiting float to Decimal
> conversion just a fudge to prevent people from initializing a Decimal
> from a float when they really want a str?
The general rule is that a lossy conversion is fine, so long as the programmer
explicitly requests it.
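A couple of everyday illustrations of that rule (nothing Decimal-specific, just the general pattern):

    >>> int(2.75)    # truncation, but explicitly requested: fine
    2
    >>> u"caf\xe9".encode("ascii", "replace")    # lossy, but again explicit
    'caf?'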
float to Decimal is a special case: the restriction has more to do with the
nature of Decimal and the guarantees it provides than with lossless conversion
in general.
Specifically, what does Decimal(1.1) mean? Did you want Decimal("1.1") or
Decimal("1.100000000000000088...")? Allowing direct conversion from float would
simply infect the Decimal type with all of the problems of binary
floating-point representation, without providing any countervailing benefit.
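To make that concrete (an illustrative session; Decimal.from_float is an
anachronism here, it only arrived much later in 2.7/3.1, but it does show the
exact binary value):

    >>> from decimal import Decimal
    >>> Decimal(1.1)      # rejected: raises TypeError, use a string instead
    >>> Decimal("1.1")    # unambiguous: exactly eleven tenths
    >>> # The closest binary double to 1.1 is actually
    >>> # 1.100000000000000088817841970012523233890533447265625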
The idea of providing a special notation or a separate method for converting
floats (with control over the precision) was toyed with, but eventually
rejected in favour of the existing string formatting notation and a
straight-up TypeError. Facundo included the gory details in the final version
of his PEP [1].
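In practice, going through a string means the programmer states exactly which
decimal value they meant (a sketch of the workaround the PEP recommends):

    >>> from decimal import Decimal
    >>> Decimal(str(1.1))        # accept str()'s rounding: Decimal("1.1")
    >>> Decimal("%.4f" % 1.1)    # or pick the precision yourself: Decimal("1.1000")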
Cheers,
Nick.
[1] http://www.python.org/peps/pep-0327.html#from-float
--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
---------------------------------------------------------------
http://www.boredomandlaziness.org