Hello everybody,

Thanks for making this great code available - I've been using the MMA
algorithm for metric-based mesh adaptation.



That said, I was hoping somebody could clarify a behavior I've been
observing when using NLopt's implementation of MMA to solve my
optimization problems.



The test objective function (significantly simpler and smaller than the
actual problem I'm trying to solve) is given as:

F(x1,x2,x3) = C1 * exp(-x1 - x2 - x3)

s.t. bound constraints -1 <= xi <= 1, i = 1..3.
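In code, the test problem looks like this (a minimal sketch of the objective and its analytic gradient; the function and variable names are mine, chosen to match the definition above):

```python
import math

def objective(x, C1=1.0):
    """F(x1, x2, x3) = C1 * exp(-x1 - x2 - x3)."""
    return C1 * math.exp(-x[0] - x[1] - x[2])

def gradient(x, C1=1.0):
    """Each partial derivative is dF/dxi = -F(x)."""
    f = objective(x, C1)
    return [-f, -f, -f]

# With -1 <= xi <= 1, the constrained minimum sits at x = (1, 1, 1),
# where F(1, 1, 1) = C1 * exp(-3) ~= 4.9787e-2 * C1 -- consistent with
# the "Final Error Estimate" values reported in the runs below.
```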



When I run with C1 = 1.0, I'm able to get the optimal solution in 5
function evaluations:

C1 = 1.0 - Final Error Estimate: 4.978706836786394E-02 in 5 function
evaluations



However, when I start decreasing C1, I start seeing more and more function
evaluations to obtain the same optimal x:

C1 = 1E-3  - Final Error Estimate: 4.978706836786394E-05 in 9 function
evaluations

C1 = 1E-8  - Final Error Estimate: 4.978706836786395E-10 in 33 function
evaluations

C1 = 1E-12 - Final Error Estimate: 4.978706836786395E-12 in 1607 function
evaluations


What's the reason for this behavior? From looking over Svanberg's paper,
the trust region, the rho parameter, and the form of the MMA
approximations all seem to be independent of the magnitude of the
function and gradients, so I'd expect the initial steps to be
independent of the function/gradient magnitude as well.


Thanks for your help,

Jun
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss