On 2/26/2012 7:24 PM, John Ladasky wrote:
> I always found it helpful to ask someone who is confused by this issue
> to imagine what the binary representation of the number 1/3 would be.
>
> 0.011 to three binary digits of precision:
> 0.0101 to four:
> 0.01011 to five:
> 0.010101 to six:
> 0.0101011 to seven:
> 0.01010101 to eight:
>
> And so on, forever. So, what if you want to do some calculator-style
> math with the number 1/3, that will not require an INFINITE amount of
> time? You have to round. Rounding introduces errors. The more
> binary digits you use for your numbers, the smaller those errors will
> be. But those errors can NEVER reach zero in finite computational
> time.
Ditto for 1/3 in decimal.
...
0.33333333 to eight
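Something like this interpreter session makes the rounding visible (Python 3;
Fraction exposes the exact rational value a binary float actually stores):

>>> from fractions import Fraction
>>> Fraction(1/3)          # the nearest 53-bit binary float to 1/3, not 1/3 itself
Fraction(6004799503160661, 18014398509481984)
>>> '%.20f' % (1/3)        # its decimal expansion drifts after 16-17 digits
'0.33333333333333331483'
>>> 0.1 + 0.1 + 0.1 == 0.3 # the rounding errors usually do not cancel
False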
> If ALL the numbers you are using in your computations are rational
> numbers, you can use Python's rational and/or decimal modules to get
> error-free results.
Decimal floats are about as error prone as binary floats. One can exactly
represent only the subset of rationals of the form n / (2**j * 5**k). For
a fixed number of bits of storage, they are 'lumpier'. For any fixed
precision, the arithmetic issues are the same.
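A short session shows both points (Python 3; note that the exact rational
arithmetic lives in the fractions module):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Decimal(1) / Decimal(3)                # rounded to context precision, 28 digits by default
Decimal('0.3333333333333333333333333333')
>>> Decimal(1) / Decimal(3) * 3 == 1       # the rounding error survives
False
>>> Decimal('0.1') * 3 == Decimal('0.3')   # n / (2**j * 5**k) values are exact
True
>>> Fraction(1, 3) * 3 == 1                # Fraction is exact for any rational
True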
The decimal module's decimals have three advantages (sometimes) over
floats; points 1 and 3 are illustrated in the session below the list.
1. Variable precision - but multiple-precision floats are also available
outside the stdlib.
2. They better imitate calculators - but that is irrelevant, or even a
minus, for scientific calculation.
3. They better follow accounting rules for financial calculation,
including a multiplicity of rounding rules. Some of those rules are
legally mandated and *must* be followed to avoid nasty consequences. This
is the main reason decimal is in the stdlib.
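For those who have not tried it, something like this session shows points 1
and 3 (the rounding constants are named as in the stdlib decimal module):

>>> from decimal import Decimal, getcontext, ROUND_HALF_UP
>>> getcontext().prec = 6                  # point 1: precision is adjustable
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> # point 3: pick the rounding rule the regulations require
>>> Decimal('2.675').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('2.68')
>>> round(2.675, 2)    # the nearest binary float to 2.675 is slightly below it
2.67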
> Learning to use them is a bit of a specialty.
Definitely true.
--
Terry Jan Reedy