Hi!

Do I understand correctly that none of the gradient-based optimization
methods included in NLopt have a built-in finite-difference or automatic
differentiation approximator, as, for instance, SciPy's fmin_l_bfgs_b
does, and that one has to either provide gradients by programming the
analytic formulas or perform the numeric approximation on one's own?
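
For concreteness, here is a minimal sketch of the kind of wrapper one
currently has to write by hand, assuming NLopt's Python bindings, where
the objective is called as f(x, grad) and grad is filled in place when
the algorithm needs it. The helper name with_fd_gradient, the fixed
forward-difference step eps, and the quadratic test objective are just
illustrative choices, not anything NLopt provides:

    import numpy as np
    import nlopt

    def with_fd_gradient(f, eps=1e-8):
        # Wrap a plain scalar objective f(x) into the f(x, grad) form
        # NLopt expects, filling grad in place with a forward-difference
        # approximation whenever the algorithm requests a gradient.
        def wrapped(x, grad):
            fx = f(x)
            if grad.size > 0:
                for i in range(len(x)):
                    xp = np.array(x, dtype=float)
                    xp[i] += eps
                    grad[i] = (f(xp) - fx) / eps
            return fx
        return wrapped

    # Illustrative usage: minimize a simple quadratic with L-BFGS.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    opt = nlopt.opt(nlopt.LD_LBFGS, 2)
    opt.set_min_objective(with_fd_gradient(f))
    opt.set_xtol_rel(1e-6)
    xmin = opt.optimize(np.array([0.0, 0.0]))
    print(xmin)  # should be close to [1.0, -2.0]

Forward differences cost one extra function evaluation per dimension per
gradient and are sensitive to the choice of eps, which is exactly why a
built-in, better-tuned approximator would be welcome.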

If so, what would be the recommended way to do this in Python?

Would it be possible to include such a facility in NLopt in the future?
 
-- 
Sincerely yours,
Yury V. Zaytsev


