> You need to realize that SLSQP is a quasi-Newton method, not a Newton
> method: it is sequential quadratic programming with an *approximate*
> Hessian (2nd derivative) that is built up iteratively.
> [snip]
> In particular, for the very first step the Hessian is initialized to
> the identity matrix (divided by 2).
You are, of course, right. Thanks for pointing this out.
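
For reference, the (undamped) BFGS update that such quasi-Newton methods
apply after each step is, schematically,

    B_{k+1} = B_k - (B_k s_k s_k^T B_k) / (s_k^T B_k s_k)
                  + (y_k y_k^T) / (y_k^T s_k),

where s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
starting from B_0 = I/2 as you describe. (If I read Kraft's code
correctly, SLSQP actually uses Powell's damped variant of this update,
which keeps B_k positive definite.)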

> SLSQP is a BFGS method. So, if you have no nonlinear constraints, I'm
> not sure why SLSQP would converge significantly differently than any
> of the other quasi-Newton methods.
I agree with that. In the NLopt implementation, however, SLSQP seems to
perform a line search, while LBFGS does not (?). LBFGS evaluates the
gradient every time the function is called, which also affects
convergence. For a very small, randomly picked test with my application,
I obtain the following results:
SLSQP: 54 gradient evaluations, 100 function evaluations, 28 seconds;
LBFGS: 116 gradient evaluations, 116 function evaluations, 67 seconds.
Perhaps I am missing something here? According to the manual, the
gradient needs to be computed if and only if the grad argument passed to
the objective function is not empty().
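
To make sure we are talking about the same interface, here is a minimal
sketch of how I read the C++ API; the quadratic objective and the
evaluation counters are placeholders of mine, not my actual problem:

  #include <nlopt.hpp>
  #include <cstdio>
  #include <vector>

  static int n_feval = 0, n_geval = 0;  // illustration-only counters

  // NLopt passes a non-empty grad only when the algorithm actually
  // needs the gradient on this call, so the two counts can differ.
  double myfunc(const std::vector<double> &x, std::vector<double> &grad,
                void *)
  {
      ++n_feval;
      if (!grad.empty()) {               // gradient requested
          ++n_geval;
          grad[0] = 2.0 * x[0];
          grad[1] = 2.0 * x[1];
      }
      return x[0] * x[0] + x[1] * x[1];  // placeholder objective
  }

  int main()
  {
      nlopt::opt opt(nlopt::LD_SLSQP, 2);  // or nlopt::LD_LBFGS
      opt.set_min_objective(myfunc, nullptr);
      opt.set_xtol_rel(1e-8);
      std::vector<double> x = {1.0, 2.0};
      double minf;
      opt.optimize(x, minf);
      std::printf("%d function evaluations, %d gradient evaluations\n",
                  n_feval, n_geval);
      return 0;
  }

If the manual is right, SLSQP should give n_geval < n_feval whenever it
takes gradient-free line-search steps, while LBFGS (in my runs) keeps
the two counters equal.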

Apart from that, I would love to see the SQP method become reliable and
stable in all cases. At some point I will have to add nonlinear
constraints to my problem, and then SQP will definitely be the method of
choice.
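
For when I get there: as far as I can tell from the manual, a nonlinear
inequality constraint c(x) <= 0 is added in the same style as the
objective above. A sketch, with a made-up constraint and tolerance:

  // Example constraint c(x) = x0^2 + x1 - 1 <= 0, same gradient
  // convention as the objective: fill grad only if it is non-empty.
  double myconstraint(const std::vector<double> &x,
                      std::vector<double> &grad, void *)
  {
      if (!grad.empty()) {
          grad[0] = 2.0 * x[0];
          grad[1] = 1.0;
      }
      return x[0] * x[0] + x[1] - 1.0;
  }

  // ... in main(), before calling opt.optimize(...):
  //     opt.add_inequality_constraint(myconstraint, nullptr, 1e-8);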


Best regards,
Peter
