OK, my understanding was that the optimization should at least run with a
gradient-based algorithm if I set *grad* to a constant value instead of a
computed one. A quick test of that also fails with the NLopt roundoff-limited
exception after the first optimization step.
Am I misunderstanding something?
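To be explicit about what I mean by a "constant value", the callback looked
roughly like the following sketch (f() stands in for the actual objective,
which is not shown here):

    double f(unsigned n, const double *x);   /* placeholder for the real objective */

    double myfunc(unsigned n, const double *x, double *grad, void *data)
    {
        if (grad)
            for (unsigned i = 0; i < n; ++i)
                grad[i] = 1.0;               /* constant -- not a real derivative */
        return f(n, x);
    }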
On 23.01.2014 16:27, Steven G. Johnson wrote:
On Jan 22, 2014, at 10:18 AM, Julius Ziegler <[email protected]> wrote:
Your way of computing the gradients looks strange. It looks like you
are basing the difference quotient on the last value of x passed into
the function ("x_pred"). I do not know if this is a good idea.
This is not a gradient at all. The gradient is the vector of *partial* derivatives
with respect to each variable, which means you have to change one variable at a time
and see how the function changes if you want to use a finite-difference
approximation … i.e. you need to evaluate your objective n times in n
dimensions. If you only look at the previous function value and the previous
x point, the most you can get is a directional derivative.
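In code, a forward-difference gradient along those lines might look like this
(just a sketch; f() stands for the objective evaluated without a gradient, and
the step size h is a placeholder that has to be tuned for your problem):

    #include <stdlib.h>
    #include <string.h>

    double f(unsigned n, const double *x);       /* the objective itself, no gradient */

    double myfunc(unsigned n, const double *x, double *grad, void *data)
    {
        double fx = f(n, x);
        if (grad) {
            const double h = 1e-6;                   /* step size: placeholder, tune it */
            double *xh = malloc(n * sizeof(double)); /* perturbed copy of x */
            memcpy(xh, x, n * sizeof(double));
            for (unsigned i = 0; i < n; ++i) {
                xh[i] = x[i] + h;                    /* change one variable at a time */
                grad[i] = (f(n, xh) - fx) / h;       /* forward difference: df/dx_i */
                xh[i] = x[i];                        /* restore before the next one */
            }
            free(xh);
        }
        return fx;
    }

Note the n extra objective evaluations per gradient, exactly as described above.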
If you aren't willing to do a full finite-difference approximation (at least n
extra objective evaluations) and can't compute the derivative analytically (by
adjoint methods or whatever), you should use one of the derivative-free
optimization algorithms. The gradient-based algorithms are unlikely to
converge with a wrong gradient.
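With the C API, switching to a derivative-free algorithm is just a matter of
picking an LN_* algorithm when creating the optimizer. A sketch (dimension,
initial guess, and tolerance are placeholders):

    #include <nlopt.h>

    double myfunc(unsigned n, const double *x, double *grad, void *data);

    int run(void)
    {
        double x[2] = { 0.0, 0.0 };   /* dimension and initial guess are placeholders */
        double minf;
        nlopt_opt opt = nlopt_create(NLOPT_LN_BOBYQA, 2);  /* derivative-free local;
                                        NLOPT_LN_COBYLA or NLOPT_LN_SBPLX also work */
        nlopt_set_min_objective(opt, myfunc, NULL);
        nlopt_set_xtol_rel(opt, 1e-6);                     /* placeholder tolerance */
        nlopt_result res = nlopt_optimize(opt, x, &minf);  /* grad is passed as NULL */
        nlopt_destroy(opt);
        return (int) res;
    }

With an LN_* algorithm NLopt never asks for the gradient, so the grad argument
of the callback stays NULL and can simply be ignored.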