Mark Dickinson added the comment: Here's the error analysis, for the record. It's crude, but effective. Assume our decimal context precision is p (so p = 20 for the above code).
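The code under analysis isn't quoted in this message. The following is a hedged reconstruction of the computation the steps below describe, based only on the quantities named in the analysis (d, r, n, LOG2, the final ldexp); the function name and exact structure are assumptions, not the code from the tracker:

```python
import math
from decimal import Decimal, Context

c = Context(prec=20)       # p = 20, matching the analysis below
LOG2 = c.ln(Decimal(2))    # correctly rounded: absolute error <= 0.5 * 10**-p

def nth_root(x, n):
    """Hypothetical sketch: compute x**(1/n) for x > 0 via Decimal."""
    m, e = math.frexp(x)       # x = m * 2**e, with m in [0.5, 1.0)
    q, r = divmod(e - 1, n)    # x = d * 2**(n*q + r), with d as below
    d = Decimal(2.0 * m)       # exact conversion; d in [1.0, 2.0)
    # x**(1/n) = exp((ln(d) + r*LOG2) / n) * 2**q; the exp result is in [1.0, 2.0]
    y = c.exp(c.divide(c.add(c.ln(d), c.multiply(Decimal(r), LOG2)), Decimal(n)))
    return math.ldexp(float(y), q)
```

For example, nth_root(8.0, 3) returns 2.0 exactly, and nth_root(2.0, 2) agrees with math.sqrt(2) to within the sub-ulp bound derived below.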

1. d is represented exactly, with value in [1.0, 2.0). (Conversions from float to Decimal are exact.)

2. c.ln(d) < 1.0, and Decimal's ln operation is correctly rounded, so the *absolute* error in c.ln(d) is at most 0.5 * 10**-p.

3. Similarly, LOG2 has an absolute error of at most 0.5 * 10**-p.

4. Let q be the unique integer satisfying 10**q <= n < 10**(q+1).

5. r * LOG2 < n < 10**(q+1), so ulp(r * LOG2) <= 10**(q + 1 - p), and the rounding error in computing r * LOG2 is at most 0.5 * 10**(q + 1 - p). We also inherit the error in LOG2, scaled up by r (which is less than 10**(q+1)), so this inherited error is also bounded by 0.5 * 10**(q + 1 - p). Our total absolute error is now bounded by 10**(q + 1 - p).

6. The result of adding c.ln(d) to r * LOG2 is also bounded by n < 10**(q+1), so the addition introduces a new rounding error of up to 0.5 * 10**(q + 1 - p) on top of the errors inherited from steps 2 and 5. Our total error is now bounded by 1.5 * 10**(q + 1 - p).

7. Division by n produces a result <= 1.0. It divides our accumulated absolute error by n (>= 10**q), giving a propagated error of at most 15 * 10**-p, and introduces a new rounding error of at most 0.5 * 10**-p. Bound so far: 15.5 * 10**-p.

8. The exponential operation gives a result in the range [1.0, 2.0], and converts our absolute error of 15.5 * 10**-p into a relative error of at most 15.5 * 10**-p (call it 16 * 10**-p to be safe), which, since the result is bounded by 2.0, amounts to an absolute error of at most 32 * 10**-p. The rounding of the result adds at most half an ulp, which at this magnitude is 0.5 * 10**(1-p) = 5 * 10**-p (like ln, the decimal module's "exp" operation is also correctly rounded), making our total error bounded by 37 * 10**-p.

Finally, we want to express the error bound above in terms of ulps of the IEEE 754 binary64 format that float (almost certainly) uses. Since our result (before the ldexp) is in the range [1.0, 2.0], one ulp for the binary format is 2**-52.
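As a quick numeric sanity check of the arithmetic, the step-8 bound can be converted into binary64 ulps directly (the extra 0.5 ulp accounts for the final rounding from Decimal to float):

```python
# Check the ulps conversion of the step-8 error bound, for p = 20.
p = 20
step8_bound = 37 * 10**-p            # total absolute error after step 8
ulp64 = 2**-52                       # one binary64 ulp for results in [1.0, 2.0)
total = 0.5 + step8_bound / ulp64    # + 0.5 ulp for the Decimal -> float rounding
print(round(total, 3))               # 0.502
```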
So the error bound in step 8, expressed in binary64 ulps, is (37 * 10**-p) / 2**-52 < 2 * 10**(17 - p) ulps. Rounding from the final Decimal value to float incurs a further error of at most 0.5 ulps, so we end up with an error bound of

    final_error <= 0.5 + 2 * 10**(17 - p) ulps.

For p = 20, this is 0.502 ulps.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27761>
_______________________________________