Hi Philippe,

To me this looks like an ordinary constrained optimization problem.
Implement the objective f(a,b,c) and its partial derivatives, treating a, b, and c as independent variables. This means you have to extend the definition of the mathematical object f, in a differentiable manner, to the case where a + b + c ≠ 1. Then implement the constraint a + b + c = 1 and its partial derivatives separately. I would use SQP (it is called SLSQP in NLopt). SQP also uses second-order information (the Hessian of the objective), but SLSQP builds a quasi-Newton approximation of it from the gradients, so you only have to supply first derivatives. Internally, SQP projects the derivatives along the constraint.

Alternatively, you can reparametrize your objective in terms of a and b only, g(a,b) = f(a,b,1-a-b), and get rid of the constraint entirely.

Regards

> On Tue, Jan 31, 2017 at 11:17 AM, philippe preux <[email protected]> wrote:
>
>> Hi Grey, and others,
>>
>> I wish to consider a whole family of optimization problems which are defined as follows:
>>
>> the objective function f: [0,1]^n -> R
>> the n parameters are partitioned into m sets so that each set of parameters represents a probability distribution.
>>
>> For a very small example, suppose that n = 3, m = 1, and the parameters are denoted a, b, and c. Then we have the constraint that a + b + c = 1.
>>
>> Performing a gradient algorithm, we need the gradient of f wrt each parameter. However, the gradients are not independent: to keep the a + b + c = 1 relationship, df/da is not independent of df/db and df/dc. Simply updating one parameter (say a) using its gradient (df/da) is not correct: the space of parameters is not Euclidean, because a variation of one parameter involves some variation of the other 2 to keep the a + b + c = 1 constraint valid.
>>
>> So my question is whether the algorithms available in NLopt take care of this. I doubt it, but I'd like to be sure.
>> Then, the next question is: how to take care of this relationship?
>> Thanks a lot,
>>
>> Philippe
>>
>> On 25/01/2017 18:52, Grey Gordon wrote:
>>
>> Hi Philippe,
>>
>> Is your problem to min_{a,b,c} f(a,b,c) s.t. a + b + c = 1 for f: R^3 -> R? Do you mean your function is non-Euclidean because it is mapping to some space other than R?
>>
>> Perhaps explaining your problem more concretely would help.
>>
>> Best,
>> Grey
>>
>> On Jan 25, 2017, at 11:18 AM, philippe preux <[email protected]> wrote:
>>
>> Hi,
>> I am optimizing a differentiable function defined over a probability distribution. That is, say the function to optimize has 3 parameters a, b, and c, each being a probability and such that a + b + c = 1.
>> We know that optimizing each parameter independently of the other 2 is not the best way to go, as we do not take the a + b + c = 1 constraint into consideration. The solution is not to add this constraint to the problem via an equality constraint; the issue is that the space is not Euclidean and that whenever one computes the gradient wrt a parameter (say a), the 2 others should also be considered, to take into account the shape of the manifold on which I optimize. It seems to me that directional derivatives, or natural gradients, are needed here.
>> So my question is: how to deal with such non-Euclidean spaces with NLopt?
>> Thanks for any help,
>> Philippe
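P.S. Here is a minimal sketch in Python of the two suggestions, plus the projection done by hand. I am using SciPy's minimize(method="SLSQP"), which is the same SLSQP algorithm NLopt exposes as LD_SLSQP; the quadratic f and the target point (0.5, 0.3, 0.2) are made-up stand-ins for your real objective:

```python
# Toy stand-in problem (NOT the real f): minimize
#   f(a, b, c) = (a - 0.5)^2 + (b - 0.3)^2 + (c - 0.2)^2
# over the simplex a + b + c = 1, 0 <= a, b, c <= 1.
# Note f is defined and differentiable even when a + b + c != 1.
import numpy as np
from scipy.optimize import minimize

target = np.array([0.5, 0.3, 0.2])  # made-up optimum, already on the simplex

def f(x):
    return float(np.sum((x - target) ** 2))

def grad_f(x):
    return 2.0 * (x - target)

# --- Approach 1: keep all 3 variables, state a + b + c = 1 explicitly ---
# SLSQP handles the projection along the constraint internally.
res1 = minimize(
    f, x0=np.full(3, 1.0 / 3.0), jac=grad_f, method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq",
                  "fun": lambda x: np.sum(x) - 1.0,
                  "jac": lambda x: np.ones(3)}],
)

# --- Approach 2: reparametrize, g(a, b) = f(a, b, 1 - a - b) ------------
# Chain rule: dg/da = df/da - df/dc and dg/db = df/db - df/dc.
def g(y):
    return f(np.array([y[0], y[1], 1.0 - y[0] - y[1]]))

def grad_g(y):
    da, db, dc = grad_f(np.array([y[0], y[1], 1.0 - y[0] - y[1]]))
    return np.array([da - dc, db - dc])

res2 = minimize(g, x0=np.full(2, 1.0 / 3.0), jac=grad_g, method="BFGS")
x2 = np.array([res2.x[0], res2.x[1], 1.0 - res2.x[0] - res2.x[1]])

# --- Approach 3: project the gradient onto the constraint by hand -------
# The tangent space of a + b + c = 1 is {v : v.sum() == 0}, so the
# projection just subtracts the mean of the gradient.
x3 = np.full(3, 1.0 / 3.0)          # feasible start
for _ in range(200):
    gr = grad_f(x3)
    x3 = x3 - 0.1 * (gr - gr.mean())  # each step stays on the plane sum = 1
```

On this toy problem all three variants recover the same point; for a general f the third variant additionally needs a projection back onto the simplex whenever one of the bounds 0 ≤ a, b, c ≤ 1 becomes active (they never do here).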
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
