On 13 Jul., 17:32, 8fjm39j <[email protected]> wrote:
> One minor question:  in your definition of the gradient function,
>     sage: gradfun = lambda x:np.array(map(lambda f:f(*x), eq.gradient()))
> should that be 'eq.gradient()' or should it be 'req.gradient()' ?
> (recall that req = eq.change_ring(RDF))

I only chose that by feeling, but benchmarking shows that it doesn't
matter which one you use. You can check this yourself via %timeit.
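Something like this (a sketch, assuming the setup from earlier in the
thread: numpy imported as np, eq a polynomial in 30 variables, and
req = eq.change_ring(RDF); the gradfun_eq/gradfun_req names are just
for the comparison):

sage: gradfun_eq  = lambda x: np.array(map(lambda f: f(*x), eq.gradient()))
sage: gradfun_req = lambda x: np.array(map(lambda f: f(*x), req.gradient()))
sage: %timeit gradfun_eq([1]*30)
sage: %timeit gradfun_req([1]*30)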

Here is a significant improvement for the gradient: fast_float
compiles each partial derivative into a fast float-evaluating
callable, so the generic evaluation overhead on every call disappears:

sage: gradfun = lambda x:np.array(map(lambda f:f(*x), eq.gradient()))
sage: grad = [fast_float(g) for g in eq.gradient() ]
sage: gradfun2 = lambda x:np.array(map(lambda f:f(*x), grad))

same output:

sage: all(gradfun2([1]*30) == gradfun([1]*30))
True

but

sage: %timeit gradfun([1]*30)
125 loops, best of 3: 4.7 ms per loop
sage: %timeit gradfun2([1]*30)
625 loops, best of 3: 110 µs per loop

~ 42x faster!

Maybe there is an even faster way? One candidate would be Cython.
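A minimal sketch of that route, using a toy objective f(x) = sum(x_i^2)
instead of the eq from this thread (grad_toy and gradfun3 are
illustrative names; you would hand-code the real partial derivatives):

sage: # hand-coded gradient of the toy objective: df/dx_i = 2*x_i
sage: cython('def grad_toy(list x): return [2.0*xi for xi in x]')
sage: gradfun3 = lambda x: np.array(grad_toy(list(x)))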

Also, minimize() wraps several solvers, e.g. conjugate gradient:

sage: minimize(lambda x: eq(*x), [0]*eq.parent().ngens(), gradient=gradfun2, algorithm="cg")

Alternatively, look directly into the scipy.optimize module.
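For example (a sketch, not tested against this thread's setup: fmin_cg
is scipy's conjugate-gradient minimizer, and feq is just a fast_float
version of the objective, the same trick as for the gradient):

sage: from scipy.optimize import fmin_cg
sage: feq = fast_float(eq)
sage: fmin_cg(lambda x: feq(*x), [0.0]*eq.parent().ngens(), fprime=gradfun2)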

H
