Tim Peters <t...@python.org> added the comment:

My apologies if nobody cares about this ;-) I just think it's useful if we all
understand what really helps here.

> Pretty much the whole trick relies on computing
> "sumsq - result**2" to greater than basic machine
> precision.

But why is that necessary at all?  Because sqrt never saw the true sum of
squares to begin with - its input was a _rounded_-to-double approximation to
that sum. The correction step tries to account for the sum-of-squares bits
sqrt() never got to see.
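
To make that concrete, here's a tiny illustration (plain Fraction arithmetic,
nothing from the patch): the exact sum of squares of two doubles generally
needs well over 53 bits, so rounding it to a double before sqrt necessarily
throws information away.

    from fractions import Fraction

    x, y = 3.6, 7.2   # arbitrary doubles
    exact = Fraction(x)**2 + Fraction(y)**2   # exact rational sum of squares
    print(float(exact) == exact)   # typically False: the low-order bits are gone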

So, e.g., instead of doing

    result = sqrt(to_float(parts))    # parts holds the extended-precision sum of squares
    a, b = split(result)              # hi/lo halves, so a*a, b*b, 2*a*b are exact doubles
    parts = add_on(-a*a, parts)       # subtract result**2 from the extended sum,
    parts = add_on(-b*b, parts)       # term by term, without rounding error
    parts = add_on(-2.0*a*b, parts)
    x = to_float(parts)               # x ~= sumsq - result**2
    result += x / (2.0 * result)      # first-order correction: sqrt(s) ~= r + (s - r*r)/(2*r)

it works just as well to do just

    result = float((D(parts[0] - 1.0) + D(parts[1])).sqrt())
    
where D is the default-precision decimal.Decimal constructor. Then sqrt sees 
all the sum-of-squares bits we have (although the "+" rounds away those 
following the 28th decimal digit).
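
For anyone reading without the patch handy, here's a self-contained sketch of
the contrast, with names of my own invention (fsum is only a loose stand-in
for the parts/add_on machinery):

    from decimal import Decimal as D
    from math import fsum, sqrt

    def hypot_rounded(xs):
        # sqrt only ever sees a double-rounded approximation to the sum of squares
        return sqrt(fsum(x * x for x in xs))

    def hypot_decimal(xs):
        # Decimal's sqrt (28 significant digits by default) sees the sum of
        # squares to well beyond double precision
        return float(sum(D(x) * D(x) for x in xs).sqrt())

    xs = [0.1] * 10
    print(hypot_rounded(xs), hypot_decimal(xs))

Any disagreement between the two is a last-bit-or-so affair - which is exactly
the territory the correction step and the Decimal trick are both fighting over.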

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41513>
_______________________________________