On Tue, Mar 8, 2011 at 2:05 PM,  <[email protected]> wrote:
>> PS. You can probably save a few more function evaluations if you
>> check for repeated calls with the same arguments. The reason is
>> that, on the line search, SLSQP often calls your function twice at
>> the end of the line search: once without the gradient, and then once
>> more (when it realizes ex post facto that the line search is done)
>> with the gradient.
>
> Yes, I have noticed that. Unfortunately, I cannot make use of
> this fact. I use automatic differentiation for the gradients, so
> computing the derivative involves computing the function anyway.

Perhaps a dumb question, but why are you using automatic differentiation
instead of a gradient-free algorithm?

Are the gradient-free algorithms in NLopt really so poor for your
application that automatic differentiation plus a gradient-based
algorithm is superior?
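
For reference, here is a minimal sketch of the caching idea from the
quoted PS, written against the NLopt Python bindings. The objective
expensive_f, its gradient, and the starting point are placeholders; the
trick only pays off when the gradient can be computed without
re-evaluating the function, which (as noted above) is not the case with
automatic differentiation.

import numpy as np
import nlopt

_last_x = None
_last_f = None

def expensive_f(x):
    # Stand-in for the real, expensive objective.
    return float(np.sum(x ** 2))

def objective(x, grad):
    # Cache the most recent evaluation: SLSQP often calls back-to-back
    # at the same x (once without the gradient, then once with it), and
    # the cache turns the repeat function evaluation into a lookup.
    global _last_x, _last_f
    if _last_x is None or not np.array_equal(x, _last_x):
        _last_x = np.copy(x)
        _last_f = expensive_f(x)
    if grad.size > 0:
        grad[:] = 2.0 * x  # analytic gradient of the stand-in objective
    return _last_f

opt = nlopt.opt(nlopt.LD_SLSQP, 2)
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-8)
x_opt = opt.optimize(np.array([1.0, 2.0]))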

