Hello everyone,
I have been trying to solve with ISRES the benchmark g11 from the paper
(Runarsson, Yao) that introduced that algorithm. For this benchmark, the
authors report solving the problem within a couple of hundred
iterations, and they nail the result with high precision.
This is a 2D problem with an equality constraint. I have set the tolerance
both on the constraint and on the stopping criterion to 1e-6, which seems
reasonable considering the bounds of the domain and the value of the
objective function.
To start with, I have set the initial guess to the solution, with a 2%
perturbation on x1. The algorithm never reaches convergence; it loops
until maxEvals is exceeded, no matter the value of maxEvals.
Note that the solution I get is not far from the correct one.
I am surprised that the objective-function values I print with some
cout statements inside the objective function produce an almost chaotic plot.
NLopt will only find a solution within a finite number of iterations if the
tolerance is set to -- at least -- 1e-1. In that case, the variability of
the results from run to run is pretty high.
Below is the .cpp file of the C++ code I am running. Is there anything wrong
in what I am doing?
Thanks,
Daniele
========
#include <nlopt.hpp>
#include <cmath>
#include <cstdio>
#include <iostream>
#include <vector>
// (plus the header declaring the Optimizer class)

// Number of objective-function evaluations performed in this run
int optIterations = 0;

// Objective function for benchmark g11
// (note: this must be a static member or a free function to match
//  NLopt's function-pointer signature)
double Optimizer::myfunc_g11(unsigned n, const double *x, double *grad,
                             void *my_func_data) {
    // Increment the evaluation counter on each call
    ++optIterations;
    // f(x) = (x1)^2 + (x2 - 1)^2
    double f = x[0] * x[0] + (x[1] - 1) * (x[1] - 1);
    std::cout << f << std::endl;
    return f;
}
// Equality constraint function for benchmark g11
double Optimizer::myconstraint_g11(unsigned n, const double *x,
                                   double *grad, void *data) {
    // h(x) = x2 - (x1)^2 = 0
    return x[1] - x[0] * x[0];
}
void Optimizer::run_g11() {
    // Benchmark case g11:
    //   f(x) = (x1)^2 + (x2 - 1)^2
    // subject to:
    //   h(x) = x2 - (x1)^2 = 0
    // where −1 ≤ x1 ≤ 1 and −1 ≤ x2 ≤ 1.
    // The optimum solution is x = (±1/sqrt(2), 1/2), where f(x) = 0.75.

    // Dimension of this problem (size of the result vector)
    size_t dimension = 2;

    // Instantiate an NLopt object using the ISRES ("Improved Stochastic
    // Ranking Evolution Strategy") algorithm for nonlinearly-constrained
    // global optimization
    nlopt::opt opt(nlopt::GN_ISRES, dimension);
    // Set and apply the lower and upper bounds on the variables
    std::vector<double> lb(dimension), ub(dimension);
    lb[0] = -1;
    ub[0] = 1;
    lb[1] = -1;
    ub[1] = 1;
    opt.set_lower_bounds(lb);
    opt.set_upper_bounds(ub);
    // Set the objective function to be minimized
    // (or maximized, using set_max_objective)
    opt.set_min_objective(myfunc_g11, NULL);
    // Add the equality constraint with its tolerance
    opt.add_equality_constraint(myconstraint_g11, NULL, 1e-1);
    // Set the relative tolerance on x for the stopping criterion
    opt.set_xtol_rel(1e-1);
    // Set the maximum number of evaluations and the population size
    opt.set_maxeval(5000);
    opt.set_population(1000);
    // Set some initial guess; make sure it is within the bounds
    // that have been set
    std::vector<double> xp(dimension);
    xp[0] = 1. / sqrt(2.);
    xp[1] = .51;

    // Print the initial step sizes NLopt will use
    std::vector<double> dx(dimension);
    opt.get_initial_step(xp, dx);
    for (size_t i = 0; i < dx.size(); i++) {
        std::cout << "dx[" << i << "]= " << dx[i] << std::endl;
    }
    // Launch the optimization; a negative result code implies failure
    double minf;
    nlopt::result result = opt.optimize(xp, minf);
    if (result < 0) {
        printf("nlopt failed!\n");
    } else {
        printf("found minimum after %d evaluations\n", optIterations);
        printf("found minimum at f(%g,%g) = %0.10g\n",
               xp[0], xp[1], minf);
    }
}
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss