On Friday, March 21, 2014 13:44:17 UTC+1, Steven G. Johnson 
wrote:
>
>
> It would be better to avoid the NaNs in the first place (which come when 
> you multiply 0 * Inf, from an underflow times an overflow).  For one thing, 
> floating-point exceptions are slow.  For another thing, it's possible that 
> the final result of the computation is not an underflow but is rather 
> something that is non-negligible.
>
> If you are really sure that the NaNs correspond to negligible contributions 
> in infinite precision, it would be better to just avoid computing those n's 
> entirely; if you can be a bit more clever how you choose which n's to sum, 
> you could try to sum only the non-negligible contributions. 
>

It turns out that only roughly the last 80 terms of the sum have any chance 
of contributing to the result.  This follows directly from the asymptotic 
behavior of Laguerre polynomials for large n.  As Steve suggested, by 
truncating the series I also avoid all computations with NaN results.  With 
this modification, I can effortlessly compute Q for absurdly large t=10^6 
or even larger in a matter of seconds!  That's a tremendous improvement.  
I also no longer need BigFloat, and since I get no NaNs I can safely use 
the GSL version of the Laguerre polynomials, which is even faster than my 
own implementation and further extends the range of arguments to t=10^13 
and above.  So I have come full circle and ended up with my first 
implementation, with one minor modification (an added lower bound).

using GSL

function Q(t::Number)
    # Only the last ~80 terms contribute; everything below lbnd is negligible.
    lbnd = max(0, round(Int, t - 80))
    ubnd = floor(Int, t)
    s = sum(sf_laguerre_n(n, 1, t - n) * exp(-(t - n)) * (t - n) / (n + 1)
            for n = lbnd:ubnd) - exp(-t - 1)
    return s
end
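For anyone who wants to try this without GSL installed: the generalized 
Laguerre polynomial L_n^(a)(x) that sf_laguerre_n evaluates satisfies a 
standard three-term recurrence, so a plain-Julia fallback is easy to sketch. 
The function name `laguerre` below is my own, not part of GSL.jl:

```julia
# Generalized Laguerre polynomial L_n^(a)(x) via the three-term recurrence
#   (k+1) L_{k+1}^(a)(x) = (2k+1+a-x) L_k^(a)(x) - (k+a) L_{k-1}^(a)(x)
# starting from L_0 = 1 and L_1 = 1 + a - x.
function laguerre(n::Integer, a::Real, x::Real)
    n == 0 && return 1.0
    p_prev, p = 1.0, 1.0 + a - x          # L_0 and L_1
    for k = 1:n-1
        p_prev, p = p, ((2k + 1 + a - x) * p - (k + a) * p_prev) / (k + 1)
    end
    return p
end
```

A drop-in replacement for `sf_laguerre_n(n, 1, x)` in Q would then be 
`laguerre(n, 1, x)`, though I expect the GSL routine to remain faster.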

No doubt there is still room for further improvement here :-).
