On Feb 28, 2012, at 11:53 AM, Sascha Merz wrote:
> Thanks a lot for providing this great optimization library. One question I have is whether there is a particular reason that gradients are computed in the MMA inner iterations.

Two reasons:

First, if it is the last inner iteration, I would have to compute the gradient anyway for the next outer iteration, which would mean calling the objective function again. There is a tradeoff here, but when computing the gradient is cheap compared to computing the objective value (as is often the case), it seemed worth computing it in every inner iteration in case that iteration turns out to be the last.
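To make that tradeoff concrete, here is a hypothetical objective written in the style of NLopt's Python callback convention (return the value; fill `grad` in place when it is requested). The function itself is made up for illustration; the point is that the expensive intermediate result is shared between the value and the gradient, so computing both costs essentially the same as computing the value alone:

```python
import numpy as np

def objective(x, grad):
    """Callback in the style of NLopt's Python API.

    Hypothetical objective f(x) = sum(exp(x_i)): once the exponentials
    are computed for the value, the gradient (exp(x_i) itself) is free.
    """
    e = np.exp(x)          # the expensive part, shared by value and gradient
    if grad.size > 0:
        grad[:] = e        # gradient reuses the already-computed exponentials
    return float(e.sum())

x = np.array([0.0, 1.0])
g = np.empty_like(x)
val = objective(x, g)      # asking for the gradient adds almost no cost
```

In this regime, evaluating the gradient in every inner iteration "just in case" is nearly free; the tradeoff only bites when the gradient is substantially more expensive than the objective value.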

Second, I use a slight modification of Svanberg's algorithm: if an inner iteration finds a better feasible point that does not yet satisfy the inner termination conditions, I continue the inner iterations with a new approximant built around that new feasible point.

However, it would be straightforward to modify the code to change both of these behaviors, reverting to the original Svanberg algorithm: don't compute the gradient during the inner iterations, and recompute the objective (with its gradient) once the inner iterations complete.
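As a rough illustration of the loop structure being discussed, here is a toy sketch. It is NOT NLopt's implementation: it uses a plain quadratic local model instead of the separable MMA approximant and handles no constraints, and the helper names and test objective are invented. It only shows where the two behaviors sit in the outer/inner structure:

```python
import numpy as np

def f_and_grad(x):
    # hypothetical test objective: f(x) = sum((x_i - 1)^4), grad_i = 4 (x_i - 1)^3
    d = x - 1.0
    return float((d ** 4).sum()), 4.0 * d ** 3

def ccsa_sketch(x, rho=1.0, outer_iters=100, inner_iters=30):
    """Toy outer/inner loop in the spirit of the Svanberg CCSA/MMA family."""
    val, grad = f_and_grad(x)
    for _ in range(outer_iters):
        xk, fk, gk = x, val, grad
        r = rho
        for _ in range(inner_iters):
            y = xk - gk / r  # minimizer of fk + gk.(y - xk) + (r/2)|y - xk|^2
            model = fk + gk @ (y - xk) + 0.5 * r * ((y - xk) ** 2).sum()
            # Behavior 1: evaluate the value AND gradient at the trial point,
            # so if this inner iteration turns out to be the last, the
            # gradient is already in hand for the next outer iteration.
            fy, gy = f_and_grad(y)
            if model >= fy:               # model was conservative: accept y
                x, val, grad = y, fy, gy
                break
            r *= 2.0                      # otherwise stiffen the model
            if fy < fk:
                # Behavior 2: the trial point improved on xk even though the
                # inner termination test failed, so rebuild the approximant
                # around it rather than discarding it.
                xk, fk, gk = y, fy, gy
        if np.linalg.norm(grad) < 1e-8:
            break
    return x, val

x_opt, f_opt = ccsa_sketch(np.array([3.0, -2.0]))
```

Reverting to the unmodified algorithm would mean dropping the `fy < fk` restart and moving the gradient evaluation out of the inner loop, at the cost of one extra objective call per outer iteration.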

_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss