Hello,
I'm using NLopt to solve a convex, inequality-bounded optimization
problem in hundreds of dimensions with LBFGS. However, NLopt often
fails with
RuntimeError: nlopt failure
when I request an ftol of ~10^-15; if I relax the tolerance to, say,
~10^-12, the optimization succeeds.
I noticed, however, that when this happens the gradients are ~10^-4,
which made me think that the optimizer doesn't like it when the
gradients are computed much more precisely than the objective function.
So I tried to improve the precision of the objective function, and now
most optimizations succeed with an ftol of ~10^-15, while the gradients
get down to ~10^-10. Still, some fail, and there is little I can do to
improve the precision of the objective function any further.
Is it perhaps possible to put a termination condition on the gradients,
so that the optimization stops when they are <= X instead of relying on
the function value? I couldn't find anything like that in the manual.
Are there any other ways to profit, with NLopt, from the fact that my
problem is convex (although the objective function itself is *very*
difficult to compute) and that I have analytical expressions for the
gradient, and possibly the Hessian?
Thanks!
--
Sincerely yours,
Yury V. Zaytsev
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss