On Aug 17, 2010, at 5:33 PM, Greg Nicholas wrote:
> However, to make it more clear where my desire for a dichotomy between "exploratory" and "trajectory" evaluations comes from, consider a gradient-based algorithm that uses a simple approximation of the gradient by probing the score at +/- epsilon around an anchor point along each dimension.
Note that NLopt currently doesn't use any algorithm that employs finite-difference approximations for the gradients at each step. (Of course, the user can always do this herself in lieu of an analytical gradient, but in that case she can avoid reporting the +/- epsilon points in the "trajectory"). Nor do I currently envision ever including such an algorithm.
If you want to use a gradient-based algorithm in NLopt, you are generally well-advised to only do so if you can compute the gradient analytically (or via automatic differentiation); otherwise, you should use an algorithm that only requires you to supply function values. Note that finite-difference approximations to gradients are very tricky to compute accurately without roundoff error killing you.
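To make the "do it yourself" option concrete, here is a minimal sketch of a central-difference wrapper with the nlopt_func signature (n, x, grad, data). Everything else in it (the fd_data struct, the helper names, the step-size heuristic) is my own illustration, not part of NLopt, and a production version would need more care:

#include <math.h>
#include <float.h>
#include <stdlib.h>
#include <string.h>

/* A gradient-free objective supplied by the user. */
typedef double (*plain_objective)(unsigned n, const double *x, void *data);

struct fd_data {
    plain_objective f; /* the user's objective */
    void *f_data;      /* its opaque data pointer */
};

/* Objective with NLopt's nlopt_func signature; fills grad (when requested)
   by central differences around the current point.  The step
   h ~ cbrt(DBL_EPSILON) * max(1, |x[i]|) balances the O(h^2) truncation
   error against the O(eps/h) roundoff error, and even so roughly a third
   of the significant digits of each gradient component are lost. */
double fd_objective(unsigned n, const double *x, double *grad, void *data)
{
    struct fd_data *d = (struct fd_data *) data;
    double fx = d->f(n, x, d->f_data);

    if (grad) {
        double *xp = (double *) malloc(n * sizeof(double));
        memcpy(xp, x, n * sizeof(double));
        for (unsigned i = 0; i < n; ++i) {
            double h  = cbrt(DBL_EPSILON) * fmax(1.0, fabs(x[i]));
            double xi = xp[i];
            xp[i] = xi + h; double fp = d->f(n, xp, d->f_data);
            xp[i] = xi - h; double fm = d->f(n, xp, d->f_data);
            xp[i] = xi;
            grad[i] = (fp - fm) / (2.0 * h);
        }
        free(xp);
    }
    return fx;
}

You would pass fd_objective to nlopt_set_min_objective with a struct fd_data as the data pointer. Note the 2n extra evaluations per gradient and the digits already sacrificed to roundoff, which is exactly why a derivative-free algorithm is usually the better choice in this situation.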
Some non-gradient algorithms like COBYLA and BOBYQA internally construct an approximate gradient as they proceed, but they do *not* do so by finite-difference approximations with some tiny epsilon. Instead, they use the actual iteration points to construct linear/quadratic approximations as they go. Hence, there is no distinction between the "trajectory" points and the "gradient" points.
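For a caricature of what that means, here is a toy sketch (my own example, not Powell's COBYLA or BOBYQA code) that fits a linear model m(x) = f(x0) + g.(x - x0) through n+1 points an algorithm has already visited, so the slope information comes from the trajectory itself rather than from tiny-epsilon probes. The objective, the three points, and the little linear solver are all invented for the illustration:

#include <math.h>
#include <stdio.h>

#define N 2 /* dimension of the toy problem */

/* Solve the N-by-N system A g = b by Gaussian elimination with partial pivoting. */
static void solve(double A[N][N], double b[N], double g[N])
{
    for (int k = 0; k < N; ++k) {
        int p = k;
        for (int i = k + 1; i < N; ++i)
            if (fabs(A[i][k]) > fabs(A[p][k])) p = i;
        for (int j = 0; j < N; ++j) { double t = A[k][j]; A[k][j] = A[p][j]; A[p][j] = t; }
        double tb = b[k]; b[k] = b[p]; b[p] = tb;
        for (int i = k + 1; i < N; ++i) {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < N; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    for (int k = N - 1; k >= 0; --k) {
        g[k] = b[k];
        for (int j = k + 1; j < N; ++j) g[k] -= A[k][j] * g[j];
        g[k] /= A[k][k];
    }
}

static double f(const double *x) { return x[0]*x[0] + 3.0*x[1]*x[1]; } /* toy objective */

int main(void)
{
    /* N+1 points the algorithm has actually evaluated; no extra probe points. */
    double pts[N + 1][N] = { {1.0, 1.0}, {1.5, 1.0}, {1.0, 0.5} };
    double fv[N + 1];
    for (int i = 0; i <= N; ++i) fv[i] = f(pts[i]);

    /* Interpolation conditions (pts[i] - pts[0]) . g = fv[i] - fv[0], i = 1..N. */
    double A[N][N], b[N], g[N];
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) A[i][j] = pts[i + 1][j] - pts[0][j];
        b[i] = fv[i + 1] - fv[0];
    }
    solve(A, b, g);
    printf("model gradient at x0: (%g, %g)\n", g[0], g[1]); /* secant estimate of the exact (2, 6) */
    return 0;
}

Every point that enters the model is a point the algorithm genuinely evaluated as part of its search, which is why in COBYLA- or BOBYQA-style methods there are no separate "gradient" evaluations to report apart from the trajectory.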
Steven
