On Jul 13, 3:07 pm, 8fjm39j <dfrie...@gmail.com> wrote:
> Any help would be much appreciated.

I'm not sure whether problems of this size will work well. Also, you should
pass the gradient to the minimize method. Here is a snippet that might help
you; as far as I can see, you do not need the Symbolic Ring (SR).


sage: RQ = PolynomialRing(QQ, 30, 'x', sparse=True)
sage: RQ.inject_variables()
Defining x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13,
x14, x15, x16, x17, x18, x19, x20, x21, x22, x23, x24, x25, x26, x27,
x28, x29
sage: eq = sum([ (v + random())^2 for v in RQ.gens() ])
sage: req = eq.change_ring(RDF)
sage: import numpy as np
sage: gradfun = lambda x: np.array([f(*x) for f in req.gradient()])
sage: minimize(lambda x : req(*x), [0]*req.parent().ngens(),
gradient=gradfun)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 2
         Function evaluations: 4
         Gradient evaluations: 4
(-0.55788239962, -0.0493798356231, -0.593303577877, -0.339802652733,
-0.00394559417147, -0.178836124785, -0.343306688157, -0.126282234205,
-0.642885679398, -0.27541451953, -0.689213436111, -0.41996375463,
-0.602566339938, -0.626694430444, -0.771426488128, -0.0283310587547,
-0.913384222525, -0.128570101865, -0.75252338794, -0.834385792852,
-0.658475228648, -0.266546504385, -0.683600111652, -0.063955541513,
-0.790083400019, -0.0634933885369, -0.136504640143, -0.978047564451,
-0.743009613932, -0.276400559549)
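
For comparison, here is a minimal standalone sketch of the same idea using
SciPy directly (which Sage's minimize wraps), without polynomial rings: the
objective sum_i (x_i + c_i)^2 with its analytic gradient passed via jac. The
vector c plays the role of the random shifts above; names here are my own,
not from your code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
c = rng.random(30)          # random shifts, analogous to random() above

def f(x):
    # objective: sum of squared shifted coordinates
    return np.sum((x + c) ** 2)

def grad(x):
    # analytic gradient: d/dx_i of sum_j (x_j + c_j)^2 is 2*(x_i + c_i)
    return 2.0 * (x + c)

res = minimize(f, np.zeros(30), jac=grad, method="BFGS")
# the minimizer is x = -c, where the objective is exactly 0
```

Supplying jac saves the finite-difference gradient evaluations, which is
why the run above needed only a handful of function calls.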

H
