On 04/16/2013 01:48 PM, Jim Mooney wrote:
> I accidentally sent as HTML so this is a resend in case that choked the
> mailing prog ;') I was doing a simple training prog to figure monetary
> change, and wanted to avoid computer inaccuracy by using only two-decimal
> input and not using division or mod where it would cause error. Yet, on a
> simple subtraction I got a decimal error instead of a two-decimal result,
> as per below. What gives?
>
>     cost = float(input('How much did the item cost?: '))
>     paid = float(input('How much did the customer give you?: '))
>     change = paid - cost  # using 22.89 as cost and 248.76 as paid
>     twenties = int(change / 20)
>     if twenties != 0:
>         twentiesAmount = 20 * twenties
>         change = change - twentiesAmount
>     # change is 5.8700000000000045, not 5.87 - how did I get this decimal
>     # error when simply subtracting an integer from what should be a
>     # two-decimal amount?
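To see what's happening without the input() calls, here is the same arithmetic with the example values from the question hard-coded. The final value is the 5.8700000000000045 reported above:

```python
# The same arithmetic as the quoted code, with the example values
# hard-coded (cost 22.89, paid 248.76).
cost = 22.89
paid = 248.76

change = paid - cost          # neither input is exact in binary,
                              # so the difference may not be either
twenties = int(change / 20)   # 11 twenty-dollar bills
change = change - 20 * twenties

print(repr(change))           # 5.8700000000000045, not 5.87
print(change == 5.87)         # False
print(round(change, 2))       # 5.87
```

The error was already latent in the stored values of 22.89 and 248.76; the subtraction merely made it visible.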
The other responses have been good, but let me point a few things out. Binary floating point has been around as long as I've been in computers, which is since 1967; I saw a detailed rant that very year about avoiding errors when manipulating floats.
The fact is that rounding errors can occur in lots of surprising places. For example, if you're using pencil and paper and you divide 7 by 3, you get 2.333333, and at some point you stop writing the 3's. If you then multiply by 3, you get 6.999999, not 7. Doing it by hand, you just adjust the result without thinking much about it. But a computer is a very literal thing, and fudging numbers isn't necessarily a good idea.
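Python's decimal module, which does base-10 arithmetic with a fixed number of significant digits, reproduces the pencil-and-paper effect exactly. A quick sketch of the 7/3 example, with the precision set to 7 digits to mimic stopping writing the 3's:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7        # keep 7 significant digits, like
                             # stopping writing the 3's on paper
x = Decimal(7) / Decimal(3)
print(x)                     # 2.333333 -- the tail is cut off
print(x * 3)                 # 6.999999, not 7, just as on paper
```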
In the code above you divided a number by 20, then multiplied by 20 again. That's an operation that happens to come out even in decimal, because 20 is a factor of 100, the base squared. But just like 1/3 is a problem in a decimal system, so 1/20 is a problem in a binary one.
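One way to see that 1/20 repeats forever in binary: pass the float literal 0.05 directly to decimal.Decimal, which converts the stored binary value exactly, without rounding. The nearest double to 0.05 turns out to be a long decimal fraction:

```python
from decimal import Decimal

# Decimal(float) shows the exact value the float stores.
# Decimal('0.05'), by contrast, is exactly five hundredths.
stored = Decimal(0.05)
print(stored)                        # a long tail of digits, not 0.05
print(stored == Decimal('0.05'))     # False
```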
The chief advantage of decimal is NOT that it's more accurate, but that it gets the numbers wrong when YOU expect it to, not by some other, more subtle rule. The second advantage is that there are fewer places where the numbers need to be converted, and each conversion is a chance to introduce an error.
It happens that binary floats, as used on Intel hardware, are exact for integers up to a fairly large value (2**53 for a 64-bit double). So some people will represent money as a whole number of pennies, converting only when it's time to print. Others will simply use integers (which have no practical size limit in Python 3.x) for the same purpose. That's frequently necessary when doing accounting or banking, since many transactions are rounded or truncated to the nearest penny at each step, according to standardized rules, rather than keeping the numbers more "accurate."
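A minimal sketch of the integer-cents approach (the helper names are my own invention, not from the original code): do every arithmetic step in whole pennies, and convert to dollars only when formatting output. The change-making example then comes out exact:

```python
def to_cents(dollars_str):
    """Parse a 'dollars.cents' string into an exact integer count of cents."""
    dollars, _, cents = dollars_str.partition('.')
    return int(dollars) * 100 + int(cents.ljust(2, '0')[:2])

def fmt(cents):
    """Format an integer count of cents as a dollars string."""
    return f"{cents // 100}.{cents % 100:02d}"

change = to_cents('248.76') - to_cents('22.89')   # 22587 cents, exactly
twenties, change = divmod(change, 2000)           # 2000 cents per $20 bill
print(twenties, fmt(change))                      # 11 5.87
```

Because every value is an int, there is nothing to round and nothing to drift; the 5.87 at the end is exact.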
Bottom line, you need to understand enough about what's going on to avoid getting burned.
And in case it wasn't obvious, it's not unique to Python. It's potentially a problem in any environment that doesn't have infinite precision.
--
DaveA

_______________________________________________
Tutor maillist - [email protected]
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor
