On Wed, Apr 6, 2022 at 11:03 PM Greg Ewing <greg.ew...@canterbury.ac.nz>
wrote:

> Maybe the mistake was in thinking that we need variable
> precision at all.
>

I actually have almost the opposite position -- variable precision is the
most useful part of the Decimal type ;-)

Warning: rant ahead -- skip to the end for the relevant bit.

I'm still completely confused as to why folks think floating point decimal
is any better than floating point binary: the only explanation is that
"computers must provide an arithmetic that works in the same way as the
arithmetic that people learn at school". That is to say, people "expect" to
be able to exactly represent 1/10, but don't expect to be able to exactly
represent 1/3, or, indeed, any irrational number.

I'm pretty sure (I haven't thought through every edge case) that a type
using binary internally, but that always rounded a little bit on display
and on comparison, would behave as people "expect" just as well.
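
Something along these lines (just a sketch using the existing float type;
the 12 significant digits is an arbitrary choice):

import math

x = 1.1 + 2.2

# "round a little bit on display": 12 significant digits is plenty to
# hide the decimal->binary conversion error but keep essentially all
# of the real precision
print(format(x, ".12g"))     # 3.3

# "round a little bit on comparison": a small relative tolerance
print(math.isclose(x, 3.3))  # True
print(x == 3.3)              # False -- exact comparison still trips up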

Note the docs again: "End users typically would not expect 1.1 + 2.2 to
display as 3.3000000000000003 as it does with binary floating point." -- so
it's really about the display, not the precision or accuracy of the result.

NOTE: I'm not advocating a change, but while I understand why:

In [55]: repr(1.1 + 2.2)
Out[55]: '3.3000000000000003'

I don't get why:

In [54]: str(1.1 + 2.2)
Out[54]: '3.3000000000000003'

Isn't the point of __str__ to provide a more human-readable, but perhaps
not reproducible, representation?
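
i.e. something like this hypothetical helper (not proposing this exact
spelling, just illustrating the distinction):

def friendly_str(x, sigfigs=12):
    # hypothetical: what a more human-readable str() could do --
    # drop the last couple of digits before printing
    return format(x, f".{sigfigs}g")

print(repr(1.1 + 2.2))          # '3.3000000000000003' -- reproducible
print(friendly_str(1.1 + 2.2))  # 3.3                  -- readable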

Anyway...

And it's really a mistake to think that Decimal is inherently any better
suited to money.

From the docs:
"""
The exactness carries over into arithmetic. In decimal floating point, 0.1
+ 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the
result is 5.5511151231257827e-017. While near to zero, the differences
prevent reliable equality testing and differences can accumulate. For this
reason, decimal is preferred in accounting applications which have strict
equality invariants.
"""

I'm no accountant, but this strikes me as quite dangerous -- sure, decimal
fractions are exact, but who says you are only doing decimal arithmetic?
Calculating interest, inflation, or who knows what else could easily
introduce numbers that aren't exactly representable in decimal either. And
do accounting systems really use floating point decimal dollars, rather
than, say, fixed point or integer cents? I also notice that in the
financial world there's a lot of use of binary fractions: interest rates
tend to be in eighths of a percent, not tenths of a percent, for example.
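
For instance (a toy example, with the default 28-digit context):

from decimal import Decimal

# division already isn't exact in decimal either
print(Decimal(1) / Decimal(3))
# 0.3333333333333333333333333333 -- rounded to 28 digits

# nor is something as mundane as monthly compound interest
rate = Decimal("0.07")
balance = Decimal("100.00") * (1 + rate / 12) ** 12
print(balance)   # roughly 107.229..., rounded along the way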

So what does Decimal provide? Two things that you can't do with the
built-in (hardware) float:

- Variable precision
- Control of rounding

Which does make it more suitable for accounting and other applications, but
not because the internal implementation is decimal rather than binary.
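
E.g. (a sketch of both knobs):

from decimal import Decimal, localcontext, ROUND_HALF_EVEN, ROUND_HALF_UP

# variable precision
with localcontext() as ctx:
    ctx.prec = 50
    print(Decimal(1) / Decimal(7))   # 50 significant digits

# control of rounding
cents = Decimal("0.01")
print(Decimal("2.665").quantize(cents, rounding=ROUND_HALF_EVEN))  # 2.66
print(Decimal("2.665").quantize(cents, rounding=ROUND_HALF_UP))    # 2.67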

BTW: it seems a "round the least significant digit on comparison" mode
would be handy.
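
Maybe something like this hypothetical helper (name and tolerance made up):

from decimal import Decimal

def eq_rounded(a, b, exp=Decimal("0.01")):
    # hypothetical: quantize both operands to a common exponent,
    # then compare exactly
    return a.quantize(exp) == b.quantize(exp)

print(eq_rounded(Decimal("3.004999"), Decimal("3.00")))  # True
print(Decimal("3.004999") == Decimal("3.00"))            # False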

End rant -- not really that relevant anyway.

The relevant bit -- it seems that someone could write an accounting module
that utilized Decimal to precisely follow a particular set of accounting
rules (it's probably been done). But in that case, you'd want to be darn
sure that the specific context was used in that package -- not any global
setting that a user of the package, or some other package, might mess with.
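
i.e. the package would need to defend itself with something like
decimal.localcontext() everywhere it does arithmetic (a sketch; the
function name and settings here are made up):

from decimal import Decimal, getcontext, localcontext, ROUND_HALF_EVEN

def to_cents(amount):
    # do the package's arithmetic under its own rules, not whatever
    # the global context happens to be at call time
    with localcontext() as ctx:
        ctx.prec = 28
        ctx.rounding = ROUND_HALF_EVEN
        return Decimal(amount).quantize(Decimal("0.01"))

getcontext().prec = 3            # some unrelated code changes the global context
print(to_cents("12345.6789"))    # 12345.68 -- unaffected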

So what's the point of a global context? Isn't it an accident waiting to
happen?

-CHB

-- 
Christopher Barker, PhD (Chris)

Python Language Consulting
  - Teaching
  - Scientific Software Development
  - Desktop GUI and Web Development
  - wxPython, numpy, scipy, Cython