On Wed, Feb 22, 2012 at 10:13 AM, Alec Taylor <alec.tayl...@gmail.com> wrote:
> Simple mathematical problem, + and - only:
>
>>>> 1800.00-1041.00-555.74+530.74-794.95
> -60.950000000000045
>
> That's wrong.

Welcome to the world of finite-precision binary floating-point
arithmetic then! Reality bites.

> Proof
> http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95
> -60.95 aka (-(1219/20))
>
> Is there a reason Python math is only approximated?

Because vanilla floating-point numbers have a finite bit length (and
thus finite precision), but they try to represent a portion of the real
number line, which has infinitely many points. Some approximation
therefore has to occur. It's not a problem specific to Python; it's
inherent to your CPU's floating-point numeric types.
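
If you want to see exactly what the machine is storing, the decimal
module makes a handy inspection tool (a quick sketch; Decimal will
take a float directly on Python 2.7/3.2 and later, and the digits
below assume the usual IEEE 754 doubles):

>>> from decimal import Decimal
>>> Decimal(0.1)  # the binary double nearest to 0.1, written out in full
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> 0.1 + 0.2     # the tiny errors surface once they accumulate
0.30000000000000004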

Read http://docs.python.org/tutorial/floatingpoint.html
and http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Wolfram Alpha is either rounding its answer off to fewer decimal
places (thus merely hiding the imprecision), or using different,
more computationally expensive arithmetic types in its calculations,
which is why it shows the exact answer.
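
You can do the same kind of cosmetic rounding in Python when all you
need is tidy output; it hides the noise rather than removing it (a
small sketch, not a claim about what Wolfram actually does):

>>> total = 1800.00 - 1041.00 - 555.74 + 530.74 - 794.95
>>> print("%.2f" % total)   # round for display only
-60.95
>>> round(total, 2)         # still a binary float underneath
-60.95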

Alternatives to floats in Python include:
* Fractions: http://docs.python.org/library/fractions.html
* Arbitrary-precision decimal floating point:
http://docs.python.org/library/decimal.html
These aren't the default for both historical and performance reasons.
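
For the original sum, both give the exact answer, provided you build
them from strings rather than from float literals (feeding them floats
would just bake the binary error back in); a rough sketch:

>>> from fractions import Fraction
>>> from decimal import Decimal
>>> Fraction('1800.00') - Fraction('1041.00') - Fraction('555.74') + Fraction('530.74') - Fraction('794.95')
Fraction(-1219, 20)
>>> Decimal('1800.00') - Decimal('1041.00') - Decimal('555.74') + Decimal('530.74') - Decimal('794.95')
Decimal('-60.95')

which matches the -1219/20 that Wolfram Alpha reports.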

Cheers,
Chris