On Sep 8, 2010, at 6:17 AM, Zvi Tarem wrote:
> Many algorithms produce an estimate of the Hessian as part of their progress,
> and in other cases it is possible to estimate the matrix in a way similar to
> estimating the gradient when analytic derivatives are not available.
>
> It would be useful to add a function to the nlopt library to extract the Hessian
> (or its inverse) at the solution.

Thanks for the suggestion. Unfortunately, it seems fairly tricky to implement right now.

Many of the algorithms, in fact, generate nothing like an approximate Hessian. Approximate Hessians (or approximate quadratic models) are maintained only by BFGS and related quasi-Newton methods (SLSQP, LBFGS, and the variable-metric/Newton methods) and by the model-based BOBYQA/NEWUOA algorithms. Moreover, the low-storage variants of these methods never store the approximate Hessian explicitly; they keep it implicitly as a short history of low-rank updates, precisely to avoid the O(n^2) storage and O(n^3) time that would be required to store and factorize an explicit Hessian in n dimensions.
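For reference, the kind of Hessian approximation these quasi-Newton methods maintain internally is built from rank-two updates. Here is a minimal sketch in plain Python of the standard BFGS update formula (this is an illustration of the general technique, not NLopt's internal code; the function name `bfgs_update` is hypothetical):

```python
# Sketch of the BFGS rank-two update (not NLopt code):
#   B_new = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s)
# where s is the step taken and y is the change in the gradient.
# The update guarantees the secant condition B_new s = y.

def bfgs_update(B, s, y):
    """Apply one BFGS update to the Hessian approximation B (list of lists)."""
    n = len(s)
    Bs = [sum(B[i][k] * s[k] for k in range(n)) for i in range(n)]  # B s
    sBs = sum(s[i] * Bs[i] for i in range(n))                       # s^T B s
    ys = sum(y[i] * s[i] for i in range(n))                         # y^T s
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)]
            for i in range(n)]

# For a quadratic objective with Hessian A, the gradient change is y = A s
# exactly; one update then makes the approximation satisfy B_new s = y.
A = [[2.0, 3.0], [3.0, 2.0]]
B = [[1.0, 0.0], [0.0, 1.0]]          # start from the identity
s = [1.0, 0.5]
y = [A[i][0] * s[0] + A[i][1] * s[1] for i in range(2)]
B2 = bfgs_update(B, s, y)
```

Even in this explicit form the cost of storing B is O(n^2), which is why the low-storage methods keep only the (s, y) pairs from recent iterations instead of B itself.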

In the non-quadratic methods (all of the others in NLopt), there is little or no "memory" of previous gradients, if a gradient (or an approximation thereof) is computed at all, so no second-derivative information is directly available. Even if the user supplies an analytical gradient, computing the Hessian by finite differences as you suggest would be quite expensive in high dimensions: O(n) gradient evaluations and O(n^2) time ... and without analytical gradients it would require O(n^2) function evaluations.
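To make the O(n^2) cost concrete, here is a minimal sketch of estimating the Hessian by central finite differences on function values alone (plain Python, illustrative only; `fd_hessian` is a hypothetical helper, not part of the NLopt API):

```python
# Sketch: finite-difference Hessian from function values only.
# Each entry H[i][j] costs 4 function evaluations, so the full
# symmetric matrix needs O(n^2) evaluations overall.

def fd_hessian(f, x, h=1e-3):
    """Approximate the Hessian of f at the point x (a list of floats)."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                # Evaluate f at x with coordinate i shifted by si*h
                # and coordinate j shifted by sj*h.
                y = list(x)
                y[i] += si * h
                y[j] += sj * h
                return f(y)
            H[i][j] = (shifted(+1, +1) - shifted(+1, -1)
                       - shifted(-1, +1) + shifted(-1, -1)) / (4 * h * h)
    return H

# Example: f(x, y) = x^2 + 3xy + y^2 has the constant Hessian [[2, 3], [3, 2]].
f = lambda v: v[0] ** 2 + 3 * v[0] * v[1] + v[1] ** 2
H = fd_hessian(f, [1.0, 2.0])
```

With an analytical gradient one would instead difference the gradient along each of the n coordinate directions, which cuts the cost to O(n) gradient evaluations but still costs O(n^2) arithmetic to fill the matrix.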

Steven

_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss