There are algorithms whose error behaves like
     n (log(n) eps)^i
where I assumed that |x_i| ~ 1 and \sum x_i ~ 1, which is the regime I was
looking into. With an error of that form I can go to n >> eps^{-1}.

But maybe something like BigFloat would be more practical. I'll look into 
that as well - thanks.
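For what it's worth, here is a sketch of the arbitrary-precision route, in Python with `fractions.Fraction` standing in for BigFloat (the data below are made up to force cancellation; this is an illustration, not the thread's actual use case):

```python
from fractions import Fraction

def exact_sum(xs):
    """Accumulate in exact rational arithmetic (every finite float is a
    rational number), then round back to float once at the end. This is
    the widen-the-precision idea behind BigFloat, taken to its limit."""
    return float(sum(Fraction(x) for x in xs))

# Made-up ill-conditioned data: the large terms cancel exactly,
# and naive float summation loses one of the 1.0 terms.
xs = [1e16, 1.0, -1e16, 1.0]
print(exact_sum(xs))   # 2.0 (exact)
```

Exact rationals are of course much slower than a fixed higher precision like BigFloat, but the failure mode is the same one both approaches address.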

Christoph

On Wednesday, 4 February 2015 00:04:13 UTC, Steven G. Johnson wrote:
>
> On Tuesday, February 3, 2015 at 4:48:20 AM UTC-5, Christoph Ortner wrote: 
>>
>> For my own applications I really need something much better than 
>> pairwise summation, which has ~\eps \sum |x[i]| error, so I will try this, 
>> but I'm afraid your syntax goes over my head. I suppose this would 
>> require first rewriting sum_kbn?
>>
>
> My understanding is that every summation algorithm (in bounded precision) 
> has error that grows as sum |x[i]| multiplied by some prefactor, including 
> compensated/Kahan summation.   For naive summation the prefactor grows as 
> O(n), for pairwise the prefactor is O(log n), and for Kahan the prefactor 
> is O(1). 
>
> See, e.g., Higham's "The Accuracy of Floating Point Summation" article, 
> equation (3.11), for the forward error bound on compensated summation.
>
> The basic reason for this is that the forward error is constrained by the 
> condition number of the summation function (which is a property of the 
> mathematical function in infinite precision, not a property of 
> floating-point arithmetic or of any particular algorithm).
>
> If you need more accuracy than that, you need to go to arbitrary precision 
> (either via BigFloat or via something more fancy/adaptive like Shewchuk's 
> algorithms).
>
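For concreteness, here is a Python sketch of a compensated scheme with the O(1) prefactor discussed above — the Kahan–Babuška–Neumaier variant, which is what Julia's sum_kbn implements. The data are made up to exhibit catastrophic cancellation:

```python
def sum_kbn(xs):
    """Kahan-Babuska-Neumaier compensated summation: the rounding error
    of each addition is captured in a separate accumulator c and added
    back once at the end."""
    s = 0.0   # running sum
    c = 0.0   # running compensation (sum of the rounding errors)
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x   # low-order bits of x were lost in s + x
        else:
            c += (x - t) + s   # low-order bits of s were lost in s + x
        s = t
    return s + c

# Made-up ill-conditioned data: the 1e16 terms cancel exactly.
xs = [1e16, 1.0, -1e16, 1.0]
print(sum(xs))      # naive left-to-right sum: 1.0 (one 1.0 is lost)
print(sum_kbn(xs))  # 2.0 (exact)
```

For comparison, Python's built-in math.fsum uses Shewchuk's adaptive algorithm mentioned above and returns the correctly rounded sum.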
