On Jan 23, 2014, at 11:07 AM, Tobias Schmidt <[email protected]> wrote:
> OK, in my understanding the optimization should be executable with a 
> gradient-based algorithm if I set *grad* to a constant value instead of a 
> calculated one.


I don't think most of the algorithms will converge with an incorrect 
gradient.  Certainly, the convergence proofs assume a correct gradient.

What is the point of using a gradient-based algorithm if you don't supply the 
gradient?
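As a minimal sketch (plain Python gradient descent, not NLopt itself), here is what goes wrong: with the correct gradient the iterates settle at the minimizer, while a constant "gradient" just pushes the iterate in a fixed direction forever and never converges.

```python
# Gradient descent on f(x) = (x - 3)^2, comparing a correct gradient
# with a constant one.  (Illustrative only; not the NLopt API.)

def descend(grad, x0=0.0, lr=0.1, steps=200):
    """Run fixed-step gradient descent and return the final iterate."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_correct = descend(lambda x: 2.0 * (x - 3.0))  # true gradient of f
x_constant = descend(lambda x: 1.0)             # constant stand-in

# x_correct ends up at the minimizer x = 3; x_constant drifts to
# x0 - lr * steps = -20 and would keep drifting with more steps.
```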
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss