> On Mar 19, 2015, at 7:55 AM, Joshua N Pritikin <[email protected]> wrote:
>
> Digging through the source code, I found this comment:
>
> /* SGJ 2010: optimizing for the common case where the inexact line
>    search succeeds in one step, use special mode = -2 here to
>    eliminate a subsequent unnecessary mode = -1 call, at the
>    expense of extra gradient evaluations when more than one inexact
>    line-search step is required */
>
> This avoids a few function evaluations if the line search succeeds in
> one step; however, it can result in 2 gradient evaluations per major
> iteration. If the gradient is cheap, I suppose this is an acceptable
> cost. However, in our application, the gradient is approximated by
> finite differences. Can we switch this "optimization" off somehow?
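For context on the cost being discussed, here is a rough sketch, assuming the standard NLopt objective-callback signature, of what a forward-difference gradient in the objective looks like; objective_value, myfunc, and the step size h are placeholders, not code from NLopt or from the original post. Each gradient request costs n extra objective evaluations, so a second gradient evaluation per major iteration roughly doubles that overhead.

    #include <math.h>
    #include <stdlib.h>

    /* Placeholder for the real objective; any smooth scalar function
       will do for the purpose of this sketch. */
    static double objective_value(unsigned n, const double *x)
    {
        double s = 0.0;
        for (unsigned i = 0; i < n; ++i)
            s += (x[i] - 1.0) * (x[i] - 1.0);
        return s;
    }

    /* NLopt-style callback that fills grad by forward differences.
       Each gradient request costs n extra objective evaluations, which
       is why an extra gradient evaluation per major iteration matters
       when the objective itself is expensive. */
    double myfunc(unsigned n, const double *x, double *grad, void *data)
    {
        (void)data;
        double f0 = objective_value(n, x);
        if (grad) {
            double *xh = malloc(n * sizeof(double));
            for (unsigned i = 0; i < n; ++i)
                xh[i] = x[i];
            for (unsigned i = 0; i < n; ++i) {
                double h = 1e-8 * (fabs(x[i]) + 1.0);
                xh[i] = x[i] + h;
                grad[i] = (objective_value(n, xh) - f0) / h;
                xh[i] = x[i];
            }
            free(xh);
        }
        return f0;
    }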
You can certainly switch it off just by editing that line in the code. However, my general feeling is that, if you have a finite-difference gradient, you should really be using a derivative-free optimization algorithm. (Or you should compute the gradient analytically, typically by an adjoint method.)
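As a hedged illustration of the derivative-free suggestion (not code from the original exchange): with the NLopt C API the switch is essentially the choice of algorithm constant. COBYLA, the tolerance, and the problem size below are placeholders.

    #include <stdio.h>
    #include <nlopt.h>

    /* myfunc as in the sketch above; with a gradient-free algorithm
       NLopt never passes a non-NULL grad, so no finite differencing
       is performed. */
    double myfunc(unsigned n, const double *x, double *grad, void *data);

    int main(void)
    {
        double x[2] = {0.0, 0.0};
        double minf;

        /* NLOPT_LN_COBYLA is a local derivative-free algorithm;
           using it in place of NLOPT_LD_SLSQP sidesteps the extra
           gradient evaluations entirely. */
        nlopt_opt opt = nlopt_create(NLOPT_LN_COBYLA, 2);
        nlopt_set_min_objective(opt, myfunc, NULL);
        nlopt_set_xtol_rel(opt, 1e-6);

        if (nlopt_optimize(opt, x, &minf) < 0)
            printf("nlopt failed\n");
        else
            printf("found minimum %g at (%g, %g)\n", minf, x[0], x[1]);

        nlopt_destroy(opt);
        return 0;
    }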
