On Thu, Jun 26, 2014 at 12:56 PM, Steven D'Aprano <st...@pearwood.info> wrote:
> That's a myth. decimal.Decimal *is* a floating point value, and is
> subject to *exactly* the same surprises as binary floats, except for one:
> with Decimal, you can guarantee that any decimal string you enter will
> appear exactly the same (up to the limit of the current precision).
The important difference is that the issues with decimal floats come
up where humans are comfortable seeing them. If you divide 1 by 3, you
get 0.333333333 and can understand that adding three of those together
won't quite make 1.0, because you can see that the value was
shortened. But if you divide 11 by 10, it's not obvious that the
result is a repeating fraction in binary, which gets shortened in
exactly the same way.
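A quick sketch of both cases, using only the standard decimal module
(output values assume the default 28-digit decimal context):

```python
from decimal import Decimal

# Decimal: the rounding error is visible to a human eye.
third = Decimal(1) / Decimal(3)
print(third)      # 0.3333333333333333333333333333 -- obviously shortened
print(third * 3)  # 0.9999999999999999999999999999 -- visibly short of 1

# Binary float: 11/10 looks exact, but the value actually stored is a
# shortened repeating binary fraction. Converting the float to Decimal
# reveals it exactly.
print(Decimal(11 / 10))
# 1.100000000000000088817841970012523233890533447265625
```

Same shortening in both cases; the decimal one just happens where a
person reading "0.3333..." already expects it.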