Hi,

-- sorry for the double mail, please ignore/delete the previous one --

First of all, many thanks for the very nice library you've created.
I wrote a small program to calculate a steepest-descent path/saddle point
(nudged elastic band) on an energy landscape defined by a set of
multidimensional Gaussian distributions, something like this
http://www.petveturas.com/img/pyneb_test_asym.png
and I hope I can ask for some advice.
Initially I implemented this with steepest descent plus a step-size update, or
with a velocity Verlet integrator (roughly the update sketched below), but I
wanted to rewrite it with NLopt because of the large choice of algorithms.
This is the code
petveturas.com/prog/neb-cpp/nlopt_neb.cpp
petveturas.com/prog/neb-cpp/Makefile

and the input
http://petveturas.com/prog/neb-cpp/gaussians
http://petveturas.com/prog/neb-cpp/pos.dat
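
To give an idea of the original scheme, here is a rough sketch of the quenched
velocity-Verlet update I used (the names and the force callback are invented
for illustration; this is not the actual code at the link above):

  #include <algorithm>
  #include <cstddef>
  #include <functional>
  #include <vector>

  // One quenched velocity-Verlet step for the NEB images (unit masses).
  // 'force' evaluates the NEB force at the given coordinates.
  void verlet_step(std::vector<double> &x, std::vector<double> &v,
                   std::vector<double> &f, double dt,
                   const std::function<void(const std::vector<double> &,
                                            std::vector<double> &)> &force)
  {
      const std::size_t n = x.size();
      for (std::size_t i = 0; i < n; ++i)
          x[i] += v[i] * dt + 0.5 * f[i] * dt * dt;   // position update

      std::vector<double> f_new(n);
      force(x, f_new);                                // forces at the new positions

      double vf = 0.0;
      for (std::size_t i = 0; i < n; ++i) {
          v[i] += 0.5 * (f[i] + f_new[i]) * dt;       // velocity update
          vf += v[i] * f_new[i];
      }
      // Quench: reset the velocity when it points against the force, so the
      // images relax toward the path instead of oscillating around it.
      if (vf < 0.0)
          std::fill(v.begin(), v.end(), 0.0);
      f.swap(f_new);
  }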

However, I think I have a problem: the forces in this method are
non-conservative, so I cannot define a Hamiltonian. I therefore only have the
gradients, not the function value, at each step of the optimization. This
exact problem has been considered before in the literature, e.g.
http://scitation.aip.org/journals/doc/JCPSA6-ft/vol_119/iss_24/12708_1.html

For now I return the norm of the gradients as the function value, which works
more or less with the derivative-based optimizers (a minimal sketch of what I
mean follows the list below):
11 = Limited-memory BFGS (L-BFGS) (local, derivative-based)
13 = Limited-memory variable-metric, rank 1 (local, derivative-based)
14 = Limited-memory variable-metric, rank 2 (local, derivative-based)
15 = Truncated Newton (local, derivative-based)
16 = Truncated Newton with restarting (local, derivative-based)
17 = Preconditioned truncated Newton (local, derivative-based)
18 = Preconditioned truncated Newton with restarting (local, derivative-based)
24 = Method of Moving Asymptotes (MMA) (local, derivative)
31 = Augmented Lagrangian method (local, derivative)
33 = Augmented Lagrangian method for equality constraints (local, derivative)
40 = Sequential Quadratic Programming (SQP) (local, derivative)
41 = CCSA (Conservative Convex Separable Approximations) with simple quadratic approximations (local, derivative)
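
To make that concrete, here is a minimal sketch of the objective I pass to
NLopt (compute_neb_forces is a made-up placeholder name standing in for the
force routine in nlopt_neb.cpp; the usage lines show the standard NLopt C++
calls):

  #include <cmath>
  #include <cstddef>
  #include <vector>
  #include <nlopt.hpp>

  // Placeholder (hypothetical name): fills 'force' with the non-conservative
  // NEB forces for the current image coordinates x.
  void compute_neb_forces(const std::vector<double> &x, std::vector<double> &force);

  // NLopt objective: hand back -F as the "gradient" and |F| as the "function
  // value", since no true energy exists for the projected NEB forces.  Note
  // that this grad is *not* the gradient of the returned value, which is
  // presumably what confuses the line searches.
  double neb_objective(const std::vector<double> &x, std::vector<double> &grad, void *)
  {
      std::vector<double> force(x.size());
      compute_neb_forces(x, force);

      double norm2 = 0.0;
      for (std::size_t i = 0; i < x.size(); ++i) {
          if (!grad.empty())
              grad[i] = -force[i];      // "gradient" = minus the NEB force
          norm2 += force[i] * force[i];
      }
      return std::sqrt(norm2);          // surrogate objective: force norm
  }

  // Typical call, e.g. with L-BFGS (algorithm 11 above):
  //   nlopt::opt opt(nlopt::LD_LBFGS, n);
  //   opt.set_min_objective(neb_objective, NULL);
  //   opt.set_ftol_rel(1e-8);
  //   double fmin;
  //   opt.optimize(x, fmin);   // this is where the std::runtime_error shows up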

It seems to me these methods should be the most effective, but they somehow
use the function value (a line search for the step size?), and (therefore?)
get stuck at some point.
In some cases this ends in the following error:
terminate called after throwing an instance of 'std::runtime_error'
  what():  nlopt failure


So I hope somebody can help me answer whether NLopt is the right library for
such a problem:
- Which algorithm, if any, could I possibly use for this? Do all algorithms
assume that grad = d(objective)/d(x)?
- If none, would it be difficult to modify the existing code for my problem?

Hopefully someone with more experience in this area can point me in the right
direction.

Many thanks.

Best,
Jaap



