On Jul 16, 11:14 pm, Daniel Friedan dfrie...@gmail.com wrote:
an update:
minimize() is much improved using Harald Schilly's suggestion to
provide an explicit gradient function defined with fast_float().
succeeded:
minimizing a quartic polynomial in 100 variables containing
1,100,411 terms
(in 22,329 s of CPU time, using 100% of one stock Intel
On Jul 13, 3:07 pm, 8fjm39j dfrie...@gmail.com wrote:
Any help would be much appreciated.
I'm not sure whether problems of this size will work. Also, you should
pass the gradient to the minimize() method. Here is a snippet that might
help you. As far as I can see, you do not need SR.
sage: RQ =
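The suggestion above (hand an explicit gradient callable to minimize()) can be sketched in plain SciPy, which Sage's minimize() builds on. The quartic below is a small illustrative stand-in, not the thread's 1,100,411-term polynomial, and all names here are made up for the example:

```python
import numpy as np
from scipy.optimize import minimize

# f(x) = sum_i (x_i - 1)^2 + (x_i - 1)^4, a convex quartic whose
# minimum is exactly x = (1, ..., 1).
def f(x):
    d = x - 1.0
    return np.sum(d**2 + d**4)

# Explicit gradient, the analogue of the thread's compiled gradient
# function: each component is 2*(x_i - 1) + 4*(x_i - 1)^3.
def grad(x):
    d = x - 1.0
    return 2.0 * d + 4.0 * d**3

x0 = np.zeros(100)  # 100 variables, as in the thread's example
res = minimize(f, x0, jac=grad, method='BFGS')
print(res.x[:3])  # close to [1. 1. 1.]
```

With jac supplied, BFGS skips the finite-difference gradient estimate (roughly n+1 extra function evaluations per step for n variables), which is where the speedup for large polynomials comes from.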
Following up with another data point:
Sage 4.3.4 under some version of Red Hat Linux on an Intel machine
with 64 GB RAM.
The case that failed on the 4 GB MacBook Pro succeeds here: minimize()
returns a value.
A larger case fails in the same manner, minimize() returning without
a value:
S is a
Thanks for your suggestion. I'm now running a worksheet using your
method to minimize a large polynomial. So far, minimize() has not yet
returned.
On Jul 13, 17:32, 8fjm39j dfrie...@gmail.com wrote:
One minor question: in your definition of the gradient function,
sage: gradfun = lambda x:np.array(map(lambda f:f(*x),
eq.gradient()))
should that be 'eq.gradient()' or should it be 'req.gradient()'?
(recall that req =
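The gradfun construction being asked about can be exercised in plain Python; which expression it should differentiate (eq or req) depends on which one is actually passed to minimize(), which the thread leaves open. The two-variable function and its partial derivatives below are a made-up illustration:

```python
import numpy as np

# Hypothetical stand-ins for the symbolic partial derivatives in the
# thread: for f(x0, x1) = x0**2 + 3*x0*x1,
#   df/dx0 = 2*x0 + 3*x1,   df/dx1 = 3*x0.
partials = [lambda x0, x1: 2 * x0 + 3 * x1,
            lambda x0, x1: 3 * x0]

# Same shape as the thread's gradfun: evaluate every component function
# at the point x and pack the values into a NumPy array.  On Python 3,
# map() returns a lazy iterator, so wrap it in list() before np.array.
gradfun = lambda x: np.array(list(map(lambda f: f(*x), partials)))

print(gradfun((1.0, 2.0)))  # [8. 3.]
```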