On Wed, Sep 8, 2010 at 6:39 PM, KvS <keesvansch...@gmail.com> wrote:
> Thanks all, however I am not very successful so far :(.
>
> I tried both options mentioned before:
>
> - only optimize the loops in Cython and keep using symbolic
> expressions/infinite precision, but this is unfortunately rather
> slow;

What do you mean by symbolic? Are you actually using maxima and the
various calculus modules?

> - fully optimize in Cython by switching to doubles everywhere;
> this gets very fast, but what I was already afraid of happens: the
> algorithm goes belly up after 15 steps :( (I would need a lot more
> steps). At step 14 values like 2.51885860e-34 appear, and in
> combination with the numerical noise gathered so far this produces
> very wrong values at the next step.
>
> @Robert: I'm trying to implement an algorithm that computes a sequence
> of functions v_k according to the rule v_k(x) = \max \{ K-e^x, \int
> v_{k-1}(y) p(x+y) dy \}, v_0(x)=K-e^x, where p is a prob. density. The
> background is an optimal stopping problem in stochastics. So at each
> step I essentially first compute the integral and then determine where
> the integral and x -> K-e^x intersect, using the basic find_root
> function in Sage.
>
> It's not difficult (though very annoying...) to write down a formula
> for the integral (given the specific density p I'm using) and to work
> out formulae for all the coefficients that appear in this formula for
> the integral in terms of those appearing in v_{k-1}. The number of
> coefficients (and hence computations) involved in the formula for v_k
> increases very quickly with k, which explains why the algorithm gets
> slow in 'symbolic mode'. However, 'double mode' is not good enough, as
> mentioned above.
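
For concreteness, one step of the recursion you describe can be sketched in
plain Python. Everything specific here is a placeholder, not your actual
setup: p is a stand-in standard-normal density, K = 1, and the truncated
trapezoidal quadrature is just the simplest thing that works:

```python
import math

K = 1.0  # hypothetical constant; your K may differ

def v0(x):
    # v_0(x) = K - e^x
    return K - math.exp(x)

def p(z):
    # placeholder density: standard normal
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def integral(v_prev, x, lo=-10.0, hi=10.0, n=2000):
    # w(x) = \int v_{k-1}(y) p(x+y) dy, trapezoidal rule on a
    # truncated range (crude, for illustration only)
    h = (hi - lo) / n
    s = 0.5 * (v_prev(lo) * p(x + lo) + v_prev(hi) * p(x + hi))
    for i in range(1, n):
        y = lo + i * h
        s += v_prev(y) * p(x + y)
    return s * h

def v1(x):
    # v_k(x) = max(K - e^x, w(x)) with k = 1
    return max(K - math.exp(x), integral(v0, x))
```

The intersection step (find_root on v1(x) - (K - e^x)) would follow the
same pattern once v1 is a callable like this.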

It sounds like you're trying two extremes--how about using your
"double mode" algorithm, but with elements of RealField(N), where
N > 53, instead of doubles? This should be much faster than "symbolic"
mode (if I understand you right, calling find_root and integrate on
symbolic expressions) but have higher precision than raw doubles. That
may be enough, without getting your hands on MPFR directly yourself.
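
In Sage that is just `R = RealField(100)` and running the double-mode loop
over elements of R. Outside Sage, Python's stdlib decimal module shows the
same effect at a chosen precision--this is only a sketch of why your
2.5e-34 terms die in doubles but survive at higher precision, not your
actual recursion:

```python
from decimal import Decimal, getcontext

# With 53-bit doubles (~16 significant digits), a term of size 1e-34
# is completely absorbed when added to an O(1) quantity:
lost = (1.0 + 1e-34) - 1.0   # == 0.0, the small term vanished

# At ~50 digits of working precision the term survives.  Sage's
# RealField(N) gives the same effect, backed by MPFR.
getcontext().prec = 50
kept = (Decimal(1) + Decimal("1e-34")) - Decimal(1)   # == 1E-34
```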

You might also consider looking at how the Dickman rho function is
implemented,
http://hg.sagemath.org/sage-main/file/b5dab6864f35/sage/functions/transcendental.py#l464
which sounds similar in spirit.

> I will try your suggestion for using mpfr, thanks Robert and Greg!
> (however it looks a bit scary to me ;)). I need to store the huge
> amounts of coefficients somewhere; I guess numpy is my best option,
> even though I'd have to declare the dtype as object then, no?

What do you mean by huge? Would a plain list suffice? Or, if it's a
polynomial, what about a polynomial over RR? At high precision the
per-element overhead is constant either way, so I don't see an
advantage to using NumPy purely as a storage device when the dtype is
object.
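
To illustrate the list option: here is a sketch with made-up coefficients,
using Decimal as a stand-in for RealField/MPFR elements, evaluating a
polynomial straight from a plain list by Horner's rule--no NumPy needed,
and the tiny coefficients are preserved:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# hypothetical coefficient list for one v_k; a plain Python list of
# high-precision numbers needs no special container
coeffs = [Decimal("1.5"), Decimal("-0.25"), Decimal("3e-20")]

def horner(cs, x):
    # evaluate sum(cs[i] * x**i) by Horner's rule
    acc = Decimal(0)
    for c in reversed(cs):
        acc = acc * x + c
    return acc

value = horner(coeffs, Decimal(2))
```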

- Robert

-- 
To post to this group, send email to sage-support@googlegroups.com
To unsubscribe from this group, send email to 
sage-support+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sage-support
URL: http://www.sagemath.org
