Tim Peters <[email protected]> added the comment:
Or, like I did, they succumbed to an untested "seemingly plausible" illusion ;-)
I generated 1,000 random vectors (in [0.0, 10.0)) of length 100, and for each
generated 10,000 permutations. So that's 10 million 100-element products
overall. The convert-to-decimal method was 100% insensitive to permutations,
generating the same product (default decimal prec result rounded to float) for
each of the 10,000 permutations all 1,000 times.
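The convert-to-decimal method described above can be sketched roughly as below. This is not Tim's actual driver code; it's a minimal reconstruction under the stated setup (floats converted exactly to Decimal, multiplied at the default 28-digit context precision, result rounded back to float). The seed is hypothetical, just for reproducibility.

```python
import random
from decimal import Decimal

def decimal_prod(xs):
    # Decimal(x) converts a float exactly; each multiply then rounds
    # to the default context precision (28 significant digits), far
    # finer than double precision, so the float-rounded result is
    # empirically insensitive to the order of the factors.
    p = Decimal(1)
    for x in xs:
        p *= Decimal(x)
    return float(p)

random.seed(12345)  # hypothetical seed, for reproducibility
vec = [random.uniform(0.0, 10.0) for _ in range(100)]
shuffled = vec[:]
random.shuffle(shuffled)

# The experiment found this held for every one of the 10 million trials.
assert decimal_prod(vec) == decimal_prod(shuffled)
```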
The distributions of errors for the left-to-right and pairing products were
truly indistinguishable. They ranged from -20 to +20 ulp (taking the decimal
result as being correct). When I plotted them on the same graph, I thought I
had made an error, because I couldn't see _any_ difference on a 32-inch
monitor! I only saw a single curve. At each ulp the counts almost always
rounded to the same pixel, so the second curve, plotted on top, almost
utterly overwrote the first.
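For concreteness, the two product orderings being compared, and one way to count signed ulp distances against the decimal reference, might look like the sketch below. Again this is a reconstruction, not the original driver; `ulps_apart` walks with `math.nextafter` (Python 3.9+), which is fine for the small distances (within +-20 ulp) seen here.

```python
import math

def ltr_prod(xs):
    # Plain left-to-right product, as math.prod() computes it.
    p = 1.0
    for x in xs:
        p *= x
    return p

def pairing_prod(xs):
    # Balanced reduction: multiply adjacent pairs repeatedly until
    # one value remains (an odd element passes through unchanged).
    xs = list(xs)
    while len(xs) > 1:
        nxt = [xs[i] * xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

def ulps_apart(a, b):
    # Signed number of representable doubles from a to b; assumes
    # finite inputs that are close together (fine for +-20 ulp).
    steps = 0
    x = a
    while x < b:
        x = math.nextafter(x, math.inf)
        steps += 1
    while x > b:
        x = math.nextafter(x, -math.inf)
        steps -= 1
    return steps
```

Collecting `ulps_apart(decimal_reference, ltr_prod(perm))` and the same for `pairing_prod` over all permutations gives the two (here, indistinguishable) error histograms.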
As a sanity check, on the same vectors using the same driver code I compared
sum() to a pairing sum. The pairing sum was dramatically better, with a much
tighter error distribution and a much higher peak at the center ("no error").
That's what I erroneously expected to see for products too - although, in
hindsight, I can't imagine why ;-)
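A pairing sum is the same balanced reduction applied to addition; unlike the product case, it genuinely helps, because it keeps partial sums at comparable magnitudes. A minimal sketch, using `math.fsum` (correctly rounded) as the reference the errors are measured against:

```python
import math
import random

def pairing_sum(xs):
    # Add adjacent pairs repeatedly until one value remains.
    # Worst-case rounding error grows like O(log n) levels of the
    # tree, versus O(n) additions for a left-to-right sum().
    xs = list(xs)
    if not xs:
        return 0.0
    while len(xs) > 1:
        nxt = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

# Hypothetical comparison driver: measure both against fsum.
random.seed(6)
vec = [random.uniform(0.0, 10.0) for _ in range(100)]
ref = math.fsum(vec)
err_builtin = abs(sum(vec) - ref)
err_pairing = abs(pairing_sum(vec) - ref)
```

On vectors this short both errors are tiny, but over many random vectors the pairing-sum error distribution is the visibly tighter one, which is the asymmetry the comment found surprising in hindsight: pairing changes the error profile of sums, but not of products.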
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue41458>
_______________________________________