PS. You can probably save a few more function evaluations if you
check for repeated calls with the same arguments. The reason is
that SLSQP often calls your function twice at the end of a line
search: once without the gradient, and then once more (when it
realizes, ex post facto, that the line search is done) with the
gradient.
Yes, I have noticed that. Unfortunately, I cannot take advantage
of it: I use automatic differentiation for the gradients, so
computing the derivative involves computing the function value
anyway, and a cached value from the gradient-free call would save
me nothing.
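
For someone with hand-coded gradients, though, the check is easy to
add in a wrapper. Here is a rough, untested sketch against NLopt's
C++ callback convention, where an empty grad vector signals that no
gradient is requested; the sum-of-squares objective is only a
placeholder:

    #include <cstddef>
    #include <vector>

    // One-entry cache for the duplicated value-only call described
    // above. When SLSQP repeats the call at the same x, the cached
    // value is reused and only the gradient is computed.
    struct CachedObjective {
        std::vector<double> last_x; // arguments of the previous call
        double last_f = 0.0;        // value computed at last_x
        bool have_f = false;

        double operator()(const std::vector<double>& x,
                          std::vector<double>& grad) {
            if (!(have_f && x == last_x)) {
                last_f = 0.0;                  // value part (cheap)
                for (double xi : x) last_f += xi * xi;
                last_x = x;
                have_f = true;
            }
            if (!grad.empty())                 // gradient part
                for (std::size_t i = 0; i < x.size(); ++i)
                    grad[i] = 2.0 * x[i];
            return last_f;
        }
    };

The exact x == last_x comparison should suffice if the repeated
call really arrives with bit-identical arguments.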

An alternative interface would be to have separate function calls for
the objective and its gradient, but this is problematic because the
value and gradient usually share many calculations.
Exactly! I think the interface is perfect as is, provided that the
gradient computation really is skipped whenever possible.

In fact, for realistic parameter sets, computing the gradients
(with an automatic-differentiation data type) consumes about 90%
of the total runtime of my program. Evaluating the function alone,
with the plain double data type, is quite cheap.
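
Concretely, the split looks something like the following sketch (a
toy forward-mode dual type stands in for my real AD machinery, and
the sum-of-squares model is only a placeholder):

    #include <cstddef>
    #include <vector>

    // Toy forward-mode dual number standing in for a real AD type.
    struct Dual {
        double val; // function value
        double dot; // directional derivative
    };
    Dual operator+(Dual a, Dual b) {
        return {a.val + b.val, a.dot + b.dot};
    }
    Dual operator*(Dual a, Dual b) {
        return {a.val * b.val, a.val * b.dot + a.dot * b.val};
    }

    // The model is written once, generically, so the value and the
    // gradient share every intermediate calculation.
    template <typename T>
    T model(const std::vector<T>& x) {
        T s{};
        for (const T& xi : x) s = s + xi * xi;
        return s;
    }

    double objective(const std::vector<double>& x,
                     std::vector<double>& grad) {
        if (grad.empty())
            return model(x); // cheap path: plain double arithmetic
        double f = 0.0;      // costly path: one dual pass per coordinate
        for (std::size_t i = 0; i < x.size(); ++i) {
            std::vector<Dual> xd(x.size());
            for (std::size_t j = 0; j < x.size(); ++j)
                xd[j] = Dual{x[j], i == j ? 1.0 : 0.0};
            Dual r = model(xd);
            f = r.val;       // the value falls out of the same pass
            grad[i] = r.dot;
        }
        return f;
    }

(A real implementation would use reverse mode to get the whole
gradient in one pass; the point is only that the value is always a
byproduct of the differentiation, never the other way around.)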

It probably wouldn't be too hard to modify the LBFGS implementation
to do something similar.
That would be great! Perhaps LBFGS would then be competitive (as
long as I have no nonlinear constraints).


Best regards,
Peter

