On Nov 15, 2010, at 5:29 AM, Waters, Anthony wrote:
I was planning on testing NLopt on a set of test problems, with
known results, that I have found:
www.ai7.uni-bayreuth.de/test_problem_coll.pdf
Unfortunately, it is struggling on the first one (see Problem 1,
page 19)
From what I understand, the only applicable methods are:
NLOPT_GN_ISRES
NLOPT_GN_ORIG_DIRECT
NLOPT_GN_ORIG_DIRECT_L
NLOPT_LD_MMA
NLOPT_LN_COBYLA
SLSQP
Problem 1 has only bound constraints, so
a) most of the NLopt algorithms are applicable, not just the ones you
list
b) the global-optimization methods are *not* applicable because the
optimization domain is unbounded. To make global optimization
applicable, you have to give it a bounded domain (or do a coordinate
transformation from the unbounded domain to a bounded one).
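For instance, something along these lines would make a global algorithm usable (a minimal sketch of mine; the finite box is an arbitrary choice, not part of the problem statement, and myfunc is the objective from the PS at the end of this message):

import nlopt

opt = nlopt.opt(nlopt.GN_DIRECT_L, 2)
opt.set_lower_bounds([-5.0, -1.5])   # finite bounds on *both* variables
opt.set_upper_bounds([5.0, 5.0])     # (arbitrary box; global methods require one)
opt.set_min_objective(myfunc)        # myfunc as defined in the PS below
opt.set_maxeval(2000)                # give the global search a budget
x = opt.optimize([0.0, 0.0])         # DIRECT samples the whole box, so the start matters little
print "DIRECT-L found", x, "f =", opt.last_optimum_value()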
The first 3 of these I cannot get to work – they don’t do any
function evaluations, so I guess something is wrong.
It is returning an error code because you are giving it an unbounded
region. It should be returning NLOPT_INVALID_ARGS. The return codes
are there to help you!
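For example (a sketch of mine, assuming the usual Python-wrapper behavior in which failing return codes such as NLOPT_INVALID_ARGS surface as exceptions, e.g. a ValueError):

import nlopt

opt = nlopt.opt(nlopt.GN_ISRES, 2)
opt.set_lower_bounds([-float('inf'), -1.5])   # x[0] left unbounded: invalid for a global method
opt.set_min_objective(myfunc)                 # myfunc as in the PS below
opt.set_maxeval(1000)
try:
    x = opt.optimize([-2, 1])
except ValueError:
    print "NLOPT_INVALID_ARGS -- here, infinite bounds passed to a global algorithm"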
The next two behave as follows:
MMA: 2256 function evaluations (xtol_rel = 1e-5)
You probably have an error in your gradient evaluation or something
like that, because MMA with this tolerance converges just fine for me
when I tried it on this problem, in ~50 steps. (The quasi-Newton
methods, below, converge a bit faster.)
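One quick way to catch that kind of bug (my suggestion, not something NLopt does for you) is to compare the analytic gradient against centered finite differences at a point or two:

from numpy import *

def check_grad(f, x, eps=1e-6):
    # f has the NLopt signature f(x, grad); compare its grad with finite differences
    x = array(x, dtype=float)
    g = zeros(len(x))
    f(x, g)
    for i in range(len(x)):
        dx = zeros(len(x))
        dx[i] = eps
        fd = (f(x + dx, array([])) - f(x - dx, array([]))) / (2 * eps)
        print "d/dx[%d]: analytic %g vs finite-difference %g" % (i, g[i], fd)

check_grad(myfunc, [-2.0, 1.0])   # myfunc as in the PS below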
COBYLA: 10829 function evaluations (xtol_rel = 1e-5)
I got COBYLA to converge faster than that by setting a reasonable
initial step, but it is still a few thousand iterations. On the other
hand, BOBYQA converges in < 200 iterations.
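Concretely, the only changes relative to the script at the end of this message are the algorithm choice and the initial step (the value 1.0 is just an example):

import nlopt

opt = nlopt.opt(nlopt.LN_COBYLA, 2)           # or nlopt.LN_BOBYQA
opt.set_lower_bounds([-float('inf'), -1.5])
opt.set_min_objective(myfunc)                 # myfunc as in the PS below
opt.set_initial_step(1.0)                     # don't start with tiny trial steps
opt.set_xtol_rel(1e-5)
x = opt.optimize([-2, 1])
print "optimum at", x, "f =", opt.last_optimum_value()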
SLSQP I cannot find as an NLopt algorithm. Is it implemented?
This is specified in NLopt as NLOPT_LD_SLSQP.
SLSQP, LBFGS, and the other quasi-Newton algorithms in NLopt converge
on this problem in 20-30 steps.
I notice that MMA and COBYLA take very small steps. Is there a way
to accelerate them?
In COBYLA you can change the initial step size as documented in the
manual. In MMA the step sizes are determined from the gradients, but
as I mentioned above you are probably doing something wrong, since MMA
converges quickly on this problem for me.
Note, however, that as mentioned in the manual you have to be careful
about comparing the number of iterations to converge for different
algorithms, because different algorithms interpret the "tolerances" in
different ways. This is inevitable because the algorithms can only
estimate how far from the true optimum they are. See
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Comparing_algorithms
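If you still want a rough comparison, one reasonable approach (a sketch of mine, not an NLopt feature) is to give every algorithm the same stopping criterion and count the objective evaluations yourself:

import nlopt

def run(algorithm, x0):
    nevals = [0]                       # mutable counter captured by the wrapper
    def counted(x, grad):
        nevals[0] += 1
        return myfunc(x, grad)         # myfunc as in the PS below
    opt = nlopt.opt(algorithm, 2)
    opt.set_lower_bounds([-float('inf'), -1.5])
    opt.set_min_objective(counted)
    opt.set_xtol_rel(1e-5)             # identical tolerance for every algorithm
    opt.optimize(x0)
    return nevals[0], opt.last_optimum_value()

for alg in [nlopt.LD_SLSQP, nlopt.LD_LBFGS, nlopt.LD_MMA, nlopt.LN_BOBYQA, nlopt.LN_COBYLA]:
    print alg, run(alg, [-2.0, 1.0])   # prints the algorithm's enum value, eval count, and f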
Note also that in this problem there is at least one local minimum, at
about (-1.328, 1.7687). If you use a local optimization algorithm, it
is somewhat a matter of chance (along with the initial condition)
whether the optimization finds this minimum or the true global minimum
at (1,1). This is inherent to local optimization in general.
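A crude way to reduce that dependence while staying with a local algorithm is to restart it from several random points and keep the best result (the restart box below is an arbitrary choice of mine):

import nlopt, random

best_x, best_f = None, float('inf')
for trial in range(20):
    x0 = [random.uniform(-3, 3), random.uniform(-1.5, 3)]   # arbitrary restart box
    opt = nlopt.opt(nlopt.LD_SLSQP, 2)
    opt.set_lower_bounds([-float('inf'), -1.5])
    opt.set_min_objective(myfunc)      # myfunc as in the PS below
    opt.set_xtol_rel(1e-5)
    try:
        x = opt.optimize(x0)
    except Exception:
        continue                       # ignore starts where the local solver fails
    if opt.last_optimum_value() < best_f:
        best_x, best_f = x, opt.last_optimum_value()
print "best of 20 local runs:", best_x, "f =", best_f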
Steven
PS. Python implementation of your test problem:
import nlopt
from numpy import *
def myfunc(x, grad):
    # objective f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2
    f = 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2
    print ": eval f at ", x[0], x[1], f
    if grad.size > 0:
        grad[0] = -400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0])
        grad[1] = 200*(x[1] - x[0]**2)
    return f
opt = nlopt.opt(nlopt.LD_SLSQP, 2)
opt.set_initial_step(1)
opt.set_lower_bounds([-float('inf'), -1.5])
opt.set_min_objective(myfunc)
opt.set_stopval(1e-4)
opt.set_xtol_rel(1e-5)
x = opt.optimize([-2, 1])
minf = opt.last_optimum_value()
print "optimum at ", x[0],x[1]
print "minimum value = ", minf
print "result = ", opt.last_optimize_result()