Hello,

When using a gradient-based optimizer (I am currently using LD_MMA and LD_SLSQP), is it a requirement that the starting point of the optimization satisfies the inequality constraints?

What I currently observe is that if I initialize the optimization with an x that lies "outside" the feasible region, the optimizer seems to "jump" out of constrained mode (no calls at all to the constraint functions) and simply solves the problem as if it were unconstrained.

My objective function is a quadratic form, and my constraints are non-linear and non-convex.
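
For concreteness, the kind of setup I mean looks roughly like this (via the Python interface; Q, the constraint and x0 below are just stand-ins, not my actual problem):

import numpy as np
import nlopt

# stand-in quadratic form f(x) = x^T Q x with symmetric Q, gradient 2 Q x
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def objective(x, grad):
    if grad.size > 0:
        grad[:] = 2.0 * Q @ x
    return float(x @ Q @ x)

# stand-in non-linear, non-convex inequality constraint c(x) <= 0,
# here c(x) = 1 - x0*x1
def constraint(x, grad):
    if grad.size > 0:
        grad[0] = -x[1]
        grad[1] = -x[0]
    return 1.0 - x[0] * x[1]

opt = nlopt.opt(nlopt.LD_MMA, 2)   # or nlopt.LD_SLSQP
opt.set_min_objective(objective)
opt.add_inequality_constraint(constraint, 1e-8)
opt.set_xtol_rel(1e-6)

x0 = np.array([0.1, 0.1])          # c(x0) = 0.99 > 0, so the start is infeasible
xopt = opt.optimize(x0)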

There is still the possibility that my gradients are wrong, but I would like to sort out this question about the starting value before digging deeper into my math (my brain hurts already)!

Thanks!

Julius



