I think supplying a constant gradient will lead to convergence
issues, since the "landscape" (and hence the gradient) of your objective
function changes with each iteration of your optimization. So you are
better off using a derivative-free algorithm (such as BOBYQA) if
there are issues with your gradient.
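
For what it's worth, here is a minimal sketch of what a derivative-free
setup might look like with NLopt's Python interface (the toy objective,
bounds, and starting point below are made up purely for illustration;
BOBYQA never looks at the grad argument):

    import numpy as np
    import nlopt

    def objective(x, grad):
        # BOBYQA is derivative-free: grad has size 0 and is ignored.
        return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

    opt = nlopt.opt(nlopt.LN_BOBYQA, 2)   # 2 design variables
    opt.set_min_objective(objective)
    opt.set_lower_bounds([-5.0, -5.0])
    opt.set_upper_bounds([5.0, 5.0])
    opt.set_xtol_rel(1e-6)
    x_opt = opt.optimize(np.array([0.0, 0.0]))

Note that BOBYQA only handles bound constraints; for nonlinear
constraints without derivatives, COBYLA is the usual choice in NLopt.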

Regards,

Ngoy


Sent from my Windows Phone
From: Steven G. Johnson
Sent: 2014/01/23 09:14 PM
To: [email protected]
Subject: Re: [NLopt-discuss] Nonlinear constrained optimization using
SLSQP returns exception "nlopt roundoff-limited"

On Jan 23, 2014, at 11:07 AM, Tobias Schmidt
<[email protected]> wrote:
> OK, my understanding is that the optimization should still run with a 
> gradient-based algorithm if I set *grad* to a constant value instead of a 
> calculated one.


I don't think that most of the algorithms will converge with an
incorrect gradient.  Certainly the convergence proofs assume a correct
gradient.

What is the point of using a gradient-based algorithm if you don't
supply the gradient?
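
If the objective is differentiable, the usual pattern is to fill in the
grad array inside the callback rather than pass a constant. A minimal
sketch with NLopt's Python interface, assuming a made-up quadratic
objective (in Python, grad must be written in place):

    import numpy as np
    import nlopt

    def objective(x, grad):
        # Gradient-based algorithms (e.g. LD_SLSQP) pass a non-empty
        # grad array that the callback must fill with the true gradient.
        if grad.size > 0:
            grad[0] = 2.0 * (x[0] - 1.0)
            grad[1] = 2.0 * (x[1] + 2.0)
        return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

    opt = nlopt.opt(nlopt.LD_SLSQP, 2)
    opt.set_min_objective(objective)
    opt.set_xtol_rel(1e-8)
    x_opt = opt.optimize(np.array([0.0, 0.0]))

If an analytical gradient is not available, finite differences or a
derivative-free algorithm are the alternatives.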
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
