On May 14, 2009, at 8:10 PM, Lisandro Dalcin wrote:

> On Thu, May 14, 2009 at 10:12 PM, Carl Witty <[email protected]>  
> wrote:
>> On Thu, May 14, 2009 at 4:48 PM, Lisandro Dalcin  
>> <[email protected]> wrote:
>>> However, interestingly enough, my C++ <complex> header (from GCC
>>> 4.3.2) seems to implement the naive way (despite the warning in the
>>> comment)... Am I missing something here??
>>
>> GSL (the GNU Scientific Library) also uses the naive algorithm.
>> Perhaps everybody else has decided that speed is more important than
>> accuracy, or perhaps they haven't thought of the issues at all?
>>
>
> I bet that the second one is the case ;-)
>
>>
>> How much difference in accuracy are we talking about?
>>
>
> Basically, we have two issues: overflow and cancellation.
>
> * Overflow will occur if the real or imaginary part of the denominator
> has an exponent greater than half the maximum exponent; for example,
> when the real or imaginary part exceeds roughly 1e20 for single
> precision or 1e155 for double precision.

Yep.
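The double-precision version of that threshold is easy to check directly (Python floats are C doubles, so `sys.float_info.max` is DBL_MAX):

```python
import math
import sys

# The naive formula divides by c*c + d*d, so it overflows as soon as
# |c| or |d| exceeds roughly sqrt(DBL_MAX) ~ 1.34e154 -- long before
# the quotient itself is anywhere near the overflow threshold.
threshold = math.sqrt(sys.float_info.max)
print(threshold)          # about 1.34e154

c = 1e155
print(math.isinf(c * c))  # True: the naive denominator is already inf
```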

> * Cancellation can occur on subtraction when computing the  
> imaginary component.

Or, of course, on addition when computing the real part.

The smart algorithm has exactly the same problem. I am at a loss to  
explain why the smart algorithm does better 60% of the time, and the  
naive one only 25% of the time. (I tried various other distributions;  
the results are all the same.)
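For concreteness, here is a sketch of the two methods in Python (so all arithmetic is double precision). "Smart" is assumed here to mean Smith's scaling algorithm; the names `naive_div` and `smith_div` are mine, not from any patch under discussion:

```python
def naive_div(a, b, c, d):
    # (a+bi)/(c+di) via the textbook formula. The denominator
    # c*c + d*d overflows once |c| or |d| nears sqrt(DBL_MAX).
    den = c * c + d * d
    return ((a * c + b * d) / den, (b * c - a * d) / den)

def smith_div(a, b, c, d):
    # Smith's scaling: divide through by the larger of |c|, |d| so
    # the intermediate quantities stay close to 1 in magnitude.
    if abs(c) >= abs(d):
        r = d / c
        den = c + d * r
        return ((a + b * r) / den, (b - a * r) / den)
    else:
        r = c / d
        den = c * r + d
        return ((a * r + b) / den, (b * r - a) / den)

# (1+2j)/(3+4j): both agree on the well-scaled case.
print(naive_div(1.0, 2.0, 3.0, 4.0))    # (0.44, 0.08)
print(smith_div(1.0, 2.0, 3.0, 4.0))    # (0.44, 0.08)

# The double-precision analogue of the 1e20j / 1e20 example:
print(naive_div(0.0, 1e200, 1e200, 0.0))  # (0.0, nan) -- overflow
print(smith_div(0.0, 1e200, 1e200, 0.0))  # (0.0, 1.0)
```

The cancellation in `b - a*r` (and in `a + b*r`) is still there in the smart version, which is consistent with the observation above that both algorithms lose to the exact answer some of the time.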

> See yourself the effects of all this in single precision:

[...]

> z1 = (1+2j)
> z2 = (3+4j)
> z1/z2 (naive) = (0.439999997616+0.0799999982119j)
> z1/z2 (smart) = (0.44+0.08j)
>
> z1 = 1e+20j
> z2 = (1e+20+0j)
> z1/z2 (naive) = nan*j
> z1/z2 (smart) = 1j
>
> z1 = (1+1.0000001j)
> z2 = (1+1j)
> z1/z2 (naive) = (1+5.96046447754e-08j)
> z1/z2 (smart) = (1.00000005+5.00000000292e-08j)
>
> In the second bit, overflow generated a NaN. In the third bit,
> cancellation produced an imaginary component 20% larger.

Of course, the comparison is a bit unfair here, as your smart method  
is actually using doubles. Re-running this with all doubles, the  
naive algorithm gives all the same answers (up to the printed  
precision, at least). Note that the smart algorithm gives an answer  
that's pretty far off too (though not quite as bad):

sage: complex(1+1.000000000000001j)/complex(1+1j)
(1.0000000000000004+5.5511151231257827e-16j)
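For the record, the exact quotient here is easy to work out by hand: (1 + (1+d)j)/(1 + 1j) = (1 + d/2) + (d/2)j, so for d = 1e-15 the true imaginary part is 5e-16 and the printed 5.55e-16 is about 11% high. Most of that error is already present before the division, as a quick check with plain Python complex (C doubles) suggests:

```python
# Exact math: (1 + (1+d)j) / (1 + 1j) = (1 + d/2) + (d/2)j.
d = 1e-15
q = complex(1, 1 + d) / complex(1, 1)
print(q.imag)       # ~5.55e-16, though the exact answer is d/2 = 5e-16

# The damage is done on input: 1 + d rounds to about 1 + 1.11e-15,
# and the subtraction inside the division preserves that inflated
# low-order part exactly.
print((1 + d) - 1)  # ~1.11e-15, not 1e-15
```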

This isn't to say that the smart algorithm shouldn't be used, but if  
it's two or three times slower, I'm dubious that it's worth it for a  
slightly smaller average rounding error.

- Robert

_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev
