[Tim Peters]
...
>|> Well, just about any technical statement can be misleading if not
>|> qualified to such an extent that the only people who can still
>|> understand it knew it to begin with <0.8 wink>. The most dubious
>|> statement here to my eyes is the intro's "exactness carries over
>|> into arithmetic". It takes a world of additional words to explain
>|> exactly what it is about the example given (0.1 + 0.1 + 0.1 - 0.3 =
>|> 0 exactly in decimal fp, but not in binary fp) that does, and does
>|> not, generalize. Roughly, it does generalize to one important
>|> real-life use-case: adding and subtracting any number of decimal
>|> quantities delivers the exact decimal result, /provided/ that
>|> precision is set high enough that no rounding occurs.
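For instance, a quick interpreter sketch of that use-case, assuming the
decimal module's default 28-digit context (no rounding occurs in this
short sum, so the decimal result is exact while the binary one is not):

>>> 0.1 + 0.1 + 0.1 - 0.3 == 0      # binary fp: a tiny residue survives
False
>>> from decimal import Decimal
>>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3") == 0
True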
[Nick Maclaren]
> Precisely. There is one other such statement, too: "Decimal numbers
> can be represented exactly." What it MEANS is that numbers with a
> short representation in decimal can be represented exactly in decimal,
> which is tautologous, but many people READ it to say that numbers that
> they are interested in can be represented exactly in decimal. Such as
> pi, sqrt(2), 1/3 and so on ....

Huh. I don't read it that way. If it said "numbers can be ..." I
might, but reading it that way seems to require effort to overlook the
"decimal" in "decimal numbers can be ...".

[attribution lost]
>|>> and how is decimal no better than binary?

>|> Basically, they both lose info when rounding does occur. For
>|> example,

> Yes, but there are two ways in which binary is superior. Let's skip
> the superior 'smoothness', as being too arcane an issue for this
> group,

With 28 decimal digits used by default, few apps would care about this
anyway.

> and deal with the other. In binary, calculating the mid-point
> of two numbers (a very common operation) is guaranteed to be within
> the range defined by those numbers, or to over/under-flow.
>
> Neither (x+y)/2.0 nor (x/2.0+y/2.0) is necessarily within the range
> (x,y) in decimal, even for the most respectable values of x and y.
> This was a MAJOR "gotcha" in the days before binary became standard,
> and will clearly return with decimal.

I view this as being an instance of "lose info when rounding does
occur". For example,

>>> import decimal as d
>>> s = d.Decimal("." + "9" * d.getcontext().prec)
>>> s
Decimal("0.9999999999999999999999999999")
>>> (s+s)/2
Decimal("1.000000000000000000000000000")
>>> s/2 + s/2
Decimal("1.000000000000000000000000000")

"The problems" there are due to rounding error:

>>> s/2  # "the problem" in s/2+s/2 is that s/2 rounds up to exactly 1/2
Decimal("0.5000000000000000000000000000")
>>> s+s  # "the problem" in (s+s)/2 is that s+s rounds up to exactly 2
Decimal("2.000000000000000000000000000")

It's always something ;-)
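For contrast, a minimal sketch of the binary side of the mid-point claim
quoted above, using the binary analogue of that all-nines s (assumed
IEEE-754 doubles, so this is the float just below 1.0). Halving and
doubling are exact in binary, so the mid-point lands back on s instead
of rounding up past it:

>>> s = 1.0 - 2.0**-53      # the binary float just below 1.0
>>> (s + s) / 2.0 == s      # mid-point stays in range, unlike the decimal case
True
>>> s / 2.0 + s / 2.0 == s
True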