On 27 February 2014 23:00, Mark H. Harris <harrismh...@gmail.com> wrote:
> On Thursday, February 27, 2014 10:24:23 AM UTC-6, Oscar Benjamin wrote:
>
>> >>> from decimal import Decimal as D
>> >>> D(0.1)
>> Decimal('0.1000000000000000055511151231257827021181583404541015625')
>
> hi Oscar, well, that's not what I'm doing with my D()... I'm not just
> making D() mimic Decimal... look inside it... there's a str() call....
> consider the following experiment and you'll see what I'm talking about...
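[For readers following along: the quoted D() wrapper is not shown in this thread. Based on the description above ("there's a str() call"), a minimal sketch of what it presumably does, contrasted with Decimal's own float constructor, might look like this. The name D and its exact body are assumptions.]

```python
from decimal import Decimal

def D(x):
    # Hypothetical reconstruction of the wrapper being discussed:
    # route the float through str() before handing it to Decimal.
    return Decimal(str(x))

# str() gives the shortest decimal string that round-trips to the float:
print(D(0.1))        # Decimal('0.1')

# Decimal(float) converts the stored binary value exactly:
print(Decimal(0.1))  # Decimal('0.1000000000000000055511151231257827021181583404541015625')
```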
I understood what your code is doing, but I'm not sure that you do. Calling str() on a float performs an inexact binary-to-decimal conversion; calling Decimal() on a float performs an exact binary-to-decimal conversion.

Your reasoning essentially assumes that every float should be interpreted as an approximate representation of a nearby decimal value. This is probably true if the user wrote "a = 0.1", but it is generally not true in the kind of numeric code that is likely to be using the transcendental functions defined in your dmath module. Calling Decimal(str(float)) introduces entirely avoidable inaccuracy into your code, when the primary purpose of your code is accuracy!

Oscar
--
https://mail.python.org/mailman/listinfo/python-list
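[A concrete illustration of the "avoidable inaccuracy" point: for a computed float, str() keeps only the ~17 significant digits needed to round-trip, so Decimal(str(x)) silently discards the low-order bits that Decimal(x) would preserve. The sqrt(2) example here is my own, not from the thread.]

```python
from decimal import Decimal

x = 2 ** 0.5  # the closest float to sqrt(2)

exact = Decimal(x)        # the float's value, converted exactly
rounded = Decimal(str(x)) # only the shortest round-tripping digits survive

# The str() route still round-trips back to the same float...
print(float(rounded) == x)   # True

# ...but as input to higher-precision Decimal arithmetic it is a
# different, less accurate value than the float actually held:
print(exact - rounded)       # small but nonzero difference
```

So a dmath-style function that starts from Decimal(str(x)) is throwing away part of its input before it even begins computing.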