On Mar 8, 2011, at 12:07 PM, [email protected] wrote:
> I agree with that. In the NLopt implementation, however, SLSQP seems to
> perform a line search, while LBFGS does not (?). LBFGS evaluates the
> gradient every time the function is called. This also affects
> convergence. For a very small, randomly picked test with my application,
> I obtain the following results:
>
> SLSQP: 54 gradients, 100 function evaluations, 28 seconds;
> LBFGS: 116 gradients, 116 function evaluations, 67 seconds.
>
> Perhaps I am missing something here? According to the manual, the
> gradient is required if and only if the gradient argument is not empty().

They both perform line searches (essentially any quasi-Newton method must do this), but you are right that SLSQP's line-search implementation tries to avoid evaluating the gradient when it does not need it. (The gradient is needed only for the first and last step of the line search.)
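
In the C++ interface, your callback sees this as an empty grad argument on the intermediate line-search steps, so it can skip the gradient work there. A minimal sketch (the quadratic is just a stand-in for a real objective):

    #include <cstddef>
    #include <vector>

    // NLopt-style objective: grad is non-empty exactly on the calls where
    // the algorithm requests the gradient (e.g. the first and last step of
    // SLSQP's line search), so the gradient computation can be skipped on
    // the other calls.
    double myfunc(const std::vector<double> &x, std::vector<double> &grad, void *data)
    {
        double f = 0.0;
        for (double xi : x)
            f += xi * xi;          // toy objective: f(x) = sum_i x_i^2
        if (!grad.empty())         // gradient requested on this call?
            for (std::size_t i = 0; i < x.size(); ++i)
                grad[i] = 2.0 * x[i];
        return f;
    }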

It probably wouldn't be too hard to modify the LBFGS implementation to do something similar.

Steven

PS. You can probably save a few more function evaluations if you check for repeated calls with the same arguments. The reason is that SLSQP often calls your function twice at the end of the line search: once without the gradient, and then once more (when it realizes ex post facto that the line search is done) with the gradient. (An alternative interface would have separate function calls for the objective and its gradient, but this is problematic because the value and gradient usually share many calculations.)
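
A minimal sketch of such a check, building on the callback above (keying on the whole x vector is just the simplest possible cache; in a real application you would cache the expensive shared intermediates, not only the value):

    #include <cstddef>
    #include <vector>

    struct Cache {
        std::vector<double> x;   // last evaluation point
        double f;                // cached objective value at x
        bool valid = false;
    };

    double cached_func(const std::vector<double> &x, std::vector<double> &grad, void *data)
    {
        Cache *c = static_cast<Cache *>(data);
        if (!(c->valid && x == c->x)) {
            // new point: do the (expensive) value computation and cache it
            double f = 0.0;
            for (double xi : x)
                f += xi * xi;    // toy objective again
            c->x = x;
            c->f = f;
            c->valid = true;
        }
        // else: same point as the previous call, so the cached value (and
        // any shared intermediates you store) is reused without recomputation
        if (!grad.empty())
            for (std::size_t i = 0; i < x.size(); ++i)
                grad[i] = 2.0 * x[i];
        return c->f;
    }

The Cache instance would be passed as the f_data pointer, e.g. opt.set_min_objective(cached_func, &cache).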

