Tim Peters <t...@python.org> added the comment:

"Denormal" and "subnormal" mean the same thing. The former is probably still in 
more common use, but all the relevant standards moved to "subnormal" some years 
ago.

Long chains of floating mults can lose precision too, but hardly anyone bothers 
to do anything about that.  Unlike sums, they're just not common or in critical 
cores.  For example, nobody ever computes the determinant of a 
million-by-million triangular matrix by producting ;-) the diagonal.

I only recall one paper devoted to this, using "doubled precision" tricks like 
fsum employs (but much more expensive unless hardware fused mul-add is 
available):

"Accurate Floating Point Product and Exponentiation"
Stef Graillat

However, that does nothing to prevent spurious overflow or underflow.
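For flavor, here's a minimal sketch of that kind of compensated product. It assumes
math.fma() (added in CPython 3.13) supplies a correctly rounded fused multiply-add, and
comp_prod is just an illustrative name, not Graillat's code. Like the paper's algorithm,
it does nothing about out-of-range intermediates:

import math

def comp_prod(xs):
    # p is the running rounded product; e estimates the accumulated rounding
    # error.  math.fma(p, x, -new_p) recovers the exact error of p * x in one
    # correctly rounded step.
    p, e = 1.0, 0.0
    for x in xs:
        new_p = p * x
        e = e * x + math.fma(p, x, -new_p)
        p = new_p
    return p + e

Without a hardware fma, that error term needs Dekker-style splitting instead, which is
where the "much more expensive" comes from.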

Much simpler:  presumably pairwise products can enjoy lower accumulated error 
for essentially the same reasons pairwise summation "works well". Yet, as far as I 
know, nobody bothers.

Here's a cute program:

if 1:
    import math
    from decimal import Decimal
    from random import random, shuffle, seed

    def pairwise(xs, lo, hi):
        n = hi - lo
        if n == 1:
            return xs[lo]
        elif n == 2:
            return xs[lo] * xs[lo + 1]
        else:
            mid = (lo + hi) // 2
            return pairwise(xs, lo, mid) * pairwise(xs, mid, hi)

    N = 4000
    print(f"using {N=:,}")
    for s in (153,
              53,
              314,
              271828,
              789):
        print("\nwith aeed", s)
        seed(s)
        xs = [random() * 10.0 for i in range(N)]
        xs.extend(1.0 / x for x in xs[:])
        shuffle(xs)
        print("prod   ", math.prod(xs))
        print("product", product(xs)) # the code attached to this report
        print("frexp  ", prod2(xs))   # straightforward frexp
        print("Decimal", float(math.prod(map(Decimal, xs))))
        print("pair   ", pairwise(xs, 0, len(xs)))

By construction, the product of xs should be close to 1.0.  With N=4000 as 
given, out-of-range doesn't happen, and all results are pretty much the same:

using N=4,000

with seed 153
prod    0.9999999999999991
product 0.9999999999999991
frexp   0.9999999999999991
Decimal 1.0000000000000042
pair    1.0000000000000016

with seed 53
prod    1.0000000000000056
product 1.0000000000000056
frexp   1.0000000000000056
Decimal 1.0000000000000002
pair    0.9999999999999997

with seed 314
prod    1.0000000000000067
product 1.0000000000000067
frexp   1.0000000000000067
Decimal 1.0000000000000082
pair    1.0000000000000002

with seed 271828
prod    0.9999999999999984
product 0.9999999999999984
frexp   0.9999999999999984
Decimal 1.0000000000000004
pair    1.0000000000000064

with seed 789
prod    0.9999999999999994
product 0.9999999999999994
frexp   0.9999999999999994
Decimal 1.0
pair    1.0000000000000069

But boost N so that out-of-range is common, and only frexp and Decimal remain 
reliable. Looks almost certain that `product()` has serious bugs:

using N=400,000

with seed 153
prod    0.0
product 1980.1146715391837
frexp   0.999999999999969
Decimal 1.000000000000027
pair    nan

with seed 53
prod    0.0
product 6.595056534948324e+24
frexp   1.0000000000000484
Decimal 1.0000000000000513
pair    nan

with seed 314
prod    0.0
product 6.44538471095855e+60
frexp   0.9999999999999573
Decimal 0.999999999999934
pair    nan

with seed 271828
prod    inf
product 2556126.798990014
frexp   0.9999999999999818
Decimal 0.9999999999999885
pair    nan

with seed 789
prod    0.0
product 118772.89118349401
frexp   0.9999999999999304
Decimal 1.0000000000000053
pair    nan
