UPDATE: I just found out that this seems to be related to the time limit setting.
If I comment out the line "gopt.set_maxtime(8.0)", optimization works as expected. Setting a maximum time limit is necessary, as it is possible that a solution will not be reached in an acceptable amount of time. Is this a bug?

Thank you,
David

On Tue, Apr 26, 2016 at 7:57 PM, David Morris <[email protected]> wrote:
> I am using NLopt in a Python application and am having problems when
> setting an equality constraint when using LN_AUGLAG.
>
> If I use LN_COBYLA, optimization works perfectly. However, if I use
> LN_AUGLAG and set LN_COBYLA as the local optimizer, the result is exactly
> the same as my initial guess. My goal is to experiment with other
> optimization algorithms (for example, LN_SBPLX) and use AUGLAG to enforce
> the equality constraint.
>
> Can anyone help determine why the equality constraint causes AUGLAG to
> fail?
>
> Below is sample code showing how I use NLopt.
>
> ###################################################################################
> # CODE Sample:
>
> args = ( ... )
> kw = { ... }
>
> # Optimization function for NLopt
> def optfunc(x, grad):
>     # Big complex function which calculates a single return value:
>     val = model_p_mixer(x, *args, **kw)
>     return val
>
> # Constraint function for NLopt
> # x -> percentage of total content for each component
> # sum(x) == 100 %
> def opt_constraint(x, grad):
>     val = float(100.0 - x.sum())
>     return val
>
> gopt = nlopt.opt(nlopt.LN_AUGLAG, len(guess))
> lopt = nlopt.opt(nlopt.LN_COBYLA, len(guess))
>
> gopt.set_min_objective(optfunc)
>
> gopt.set_lower_bounds([98.62, 0.0, 0.0])
> gopt.set_upper_bounds([99.5, 1.0, 1.0])
>
> gopt.add_equality_constraint(opt_constraint, 0.001)
>
> # Set tolerances to determine when the optimizer stops looking for solutions
> gopt.set_xtol_abs(1E-6)
> gopt.set_ftol_abs(0.001)
> lopt.set_xtol_abs(1E-6)
> lopt.set_ftol_abs(0.001)
>
> # Set initial step size
> gopt.set_initial_step(0.01)
>
> gopt.set_maxtime(8.0)
>
> gopt.set_local_optimizer(lopt)
>
> # Run the optimizer
> mix = gopt.optimize([99.0, 0.5, 0.5])
>
> # Initial guess  : [99.0, 0.5 , 0.5 ]
> # Expected result: [99.2, 0.35, 0.45]
> # Actual result  : [99.0, 0.5 , 0.5 ]
>
> ###################################################################################
>
> Thank you,
>
> David
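For what it's worth, the constraint function itself checks out. A minimal standalone sanity check (requiring only numpy, not NLopt) confirms that both the initial guess and the expected result satisfy the equality constraint to well within the 0.001 tolerance passed to add_equality_constraint, so feasibility of those points is not the issue:

```python
import numpy as np

# Same constraint as in the sample code: component percentages must sum to 100 %.
def opt_constraint(x, grad):
    return float(100.0 - x.sum())

# The expected result [99.2, 0.35, 0.45] satisfies the constraint
# to well under the 0.001 tolerance...
print(abs(opt_constraint(np.array([99.2, 0.35, 0.45]), None)) < 0.001)  # True

# ...and so does the initial guess [99.0, 0.5, 0.5], so neither point
# should be rejected on feasibility grounds.
print(abs(opt_constraint(np.array([99.0, 0.5, 0.5]), None)) < 0.001)  # True
```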
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
