[Steven D'Aprano <st...@pearwood.info>]

> Thanks Tim!
>

You're welcome ;-)


> Reading your digressions on the minutiae of floating point maths is
> certainly an education. It makes algebra and real-valued mathematics
> seem easy in comparison.
>

Hard to say, really.  The problem with floating point is that it's so
God-awful lumpy - special cases all over the place.  Signaling and quiet
NaNs; signed infinities; signed zeroes; normal finites all with the same
number of bits, but where the gap between numbers changes abruptly at
power-of-2 boundaries; subnormals where the gap remains the same across
power-of-2 boundaries, but the number of _bits_ changes abruptly; all "the
rules" break down when you get too close to overflow or underflow; four
rounding modes to worry about; and a whole pile of technically defined
exceptional conditions and related traps & flags.
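
A few of those lumps are easy to poke at from Python (math.ulp needs
Python 3.9 or later):

>>> import math
>>> nan = float("nan")
>>> nan == nan                    # NaNs compare unequal, even to themselves
False
>>> 0.0 == -0.0                   # signed zeroes compare equal ...
True
>>> math.copysign(1.0, -0.0)      # ... but the sign is really there
-1.0
>>> math.ulp(1.0), math.ulp(2.0)  # the gap between neighbors doubles at 2
(2.220446049250313e-16, 4.440892098500626e-16)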

Ignoring all that, though, it's pretty easy ;-)  754 was dead serious about
requiring that results act as if a single rounding were done to the
infinitely precise result, and that actually allows great simplification in
reasoning.
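
For example, a quick check of that single-rounding guarantee, using
fractions.Fraction to recover the exact rational sum of two doubles:

>>> from fractions import Fraction
>>> exact = Fraction(0.1) + Fraction(0.2)   # the infinitely precise sum
>>> float(exact) == 0.1 + 0.2               # fp sum == that sum, correctly rounded
True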

The trend these days appears to be using automated theorem-proving systems
to keep track of the mountain of interacting special cases.  Those have
advanced enough that we may even be on the edge of getting
provably-correctly-rounded transcendental functions with reasonable speed.
Although it's not clear people will be able to understand the proofs ;-)

> I still haven't got over Mark Dickinson's demonstration a few years
> back that under Decimal floating point, but not binary, it is possible
> for the ordinary arithmetic average (x+y)/2 to be outside of the
> range [x, y]:
>
> py> from decimal import getcontext, Decimal
> py> getcontext().prec = 3
> py> x = Decimal('0.516')
> py> y = Decimal('0.518')
> py> (x + y) / 2
> Decimal('0.515')
>

Ya, decimal fp doesn't really solve anything except the shallow surprise
that decimal fractions generally aren't exactly representable as binary
fractions.  That's worth a whole lot for casual users, but doesn't address
any of the deep problems (to the contrary, it makes some of them a bit
worse).  In the example, x + y is 1.034 exactly, which rounds to 1.03 under
3-digit precision, and 1.03/2 is 0.515 exactly - less than both inputs.

I like to illustrate the above with 1-digit decimal fp, because it makes
apparent at once that - unlike in binary fp - multiplication and division
by 2 may _not_ be exact in decimal fp.  We can't even average a number
"with itself" reliably:

>>> import decimal
>>> decimal.getcontext().prec = 1
>>> x = y = decimal.Decimal(8); (x+y)/2 # 10 is much bigger than 8
Decimal('1E+1')
>>> x = y = decimal.Decimal(7); (x+y)/2 # 5 is much smaller than 7
Decimal('5')
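
In binary fp, by contrast, multiplying and dividing by 2 merely adjust the
exponent, so - short of overflow or underflow - they're exact:

>>> x = 0.1
>>> (x * 2) / 2 == x
True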

But related things _can_ happen in binary fp too!  You have to be near the
edge of representable non-zero finites though:

>>> x = y = 1e308
>>> x
1e+308
>>> (x+y)/2
inf

Oops.  So rewrite it:

>>> x/2 + y/2
1e+308

Better!  But then:

>>> x = y = float.fromhex("3p-1074")
>>> x
1.5e-323
>>> x/2 + y/2
2e-323

Oops.  (Each x/2 here is 1.5 units in the last place, and nearest/even
rounding bumps that up to 2 ulps, so the halves sum to 2e-323.)  A math
library has to deal with everything "correctly".  Believe it or not, this
paper

    "How do you compute the midpoint of an interval?"
    https://hal.archives-ouvertes.fr/hal-00576641v1/document

is solely concerned with computing the average of two IEEE doubles, yet
runs to 29(!) pages.  Almost everything you try fails for _some_ goofy
cases.
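
If all you want is to dodge the two failure modes shown above, a minimal
sketch (mine, not the paper's algorithm - it still punts on infinite and
NaN inputs and non-default rounding modes, which is where most of those 29
pages go) can look like:

import math

def midpoint(x, y):
    # The obvious way first:  if x+y doesn't overflow, this is fine
    # (dividing by 2 is exact in binary fp outside the subnormal range).
    m = (x + y) / 2.0
    if math.isinf(m) and math.isfinite(x) and math.isfinite(y):
        # x+y overflowed; halve first instead.  For x and y that
        # large, x/2 and y/2 are exact.
        m = x / 2.0 + y / 2.0
    return m

It handles both examples:

>>> midpoint(1e308, 1e308)
1e+308
>>> midpoint(float.fromhex("3p-1074"), float.fromhex("3p-1074"))
1.5e-323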

I personally write it as (x+y)/2 anyway ;-)