On 2009-06-25 18:43, Scott David Daniels wrote:
> Robert Kern wrote:
>> ... I wish people would stop representing decimal floating point
>> arithmetic as "more accurate" than binary floating point arithmetic.
>> It isn't. Decimal floating point arithmetic does have an extremely
>> useful niche: ...
> Well, we don't actually have an arbitrary-precision, huge exponent
> version of binary floating point. In that sense the Decimal floating
> point beats it.

And while that's true, to a point, that isn't what Michael or the many others are referring to when they claim that decimal is more accurate (without any qualifiers). They are misunderstanding the causes and limitations of the example "3.2 * 3 == 9.6". You can see a great example of this in the comparison between the new Cobra language and Python:

  http://cobra-language.com/docs/python/

In that case, they have a fixed-precision decimal float from the underlying .NET runtime, but they are still making the claim that it is more accurate arithmetic. While you may make (completely correct) claims that decimal.Decimal can be more accurate because of its arbitrary-precision capabilities, that is not the claim others are making, nor the one I am arguing against.
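
To make the cause concrete, here is a quick session at a Python prompt (just an illustration using the stdlib decimal module, not anything specific to Cobra's .NET decimal type). The point is about which values each base can represent exactly, not about one arithmetic being more accurate overall:

  >>> 3.2 * 3 == 9.6                        # 3.2 has no exact base-2 representation
  False
  >>> from decimal import Decimal
  >>> Decimal('3.2') * 3 == Decimal('9.6')  # but it does have an exact base-10 one
  True
  >>> Decimal(1) / 3 * 3 == Decimal(1)      # 1/3 has no exact base-10 representation,
  False                                     # so Decimal rounds it, just like binary does
  >>> 1.0 / 3 * 3 == 1.0                    # here the binary rounding errors happen to cancel
  True

Both formats round; they just round different sets of values. "3.2 * 3 == 9.6" only demonstrates that decimal literals survive the round trip into a decimal format, not that the arithmetic itself is more accurate.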

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list
