Ah, I understand!  That will work perfectly, and I think I can use it to
simplify my specific problem at the same time.

Thank you for the idea, I'm off to do some experiments!
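
For the archives, here is roughly the experiment I have in mind, sketched
in Python with SciPy rather than NLopt, and with a stand-in log utility
(neither of those choices comes from Grey's message):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in utility, purely for testing the substitution idea:
# u(c1, c2) = log(c1) + log(c2).
def neg_u(y):
    c2 = y[0]
    c1 = 1.0 - c2              # substitute away the constraint c1 + c2 = 1
    return -(np.log(c1) + np.log(c2))

# The two inequalities 1 - c2 >= 0 and c2 >= 0 become simple bounds
# on the single remaining variable, so no constraint machinery is needed.
res = minimize(neg_u, x0=[0.25], bounds=[(1e-9, 1.0 - 1e-9)])
c2 = res.x[0]
c1 = 1.0 - c2
print(c1, c2)  # both near 0.5 for this symmetric utility
```

The budget constraint holds exactly by construction, so the optimizer
never has to fight the equality.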

David

On Wed, Apr 27, 2016 at 10:16 PM, Grey Gordon <[email protected]> wrote:

> Instead of solving
>
> max f(x)
> subject to
> sum(x) = 1
> g(x) = 0
> h(x) <= 0
>
> where x is in R^n
>
> solve
>
> max f(x(y))
> subject to
> g(x(y)) = 0
> h(x(y)) <= 0
>
> where y is in R^(n-1) and x(y) = [1 - sum(y), y(1), y(2), …, y(n-1)]
>
> This way sum(x(y)) = 1 for all y.
>
>
> For a simple example
>
> max u(c1,c2)
> c1 + c2 = 1
> c1 >= 0
> c2 >= 0
>
> becomes
>
> max u(1-c2, c2)
> 1-c2 >= 0
> c2 >= 0
>
> -Grey
>
> > On Apr 27, 2016, at 11:11 AM, David Morris <[email protected]> wrote:
> >
> > On Wed, Apr 27, 2016 at 9:44 PM, Grey Gordon <[email protected]>
> wrote:
> >
> > Instead of using sum(x) = 1 as an equality constraint, perhaps you can
> take x(1) = 1 - sum(x(2:end)) and substitute it directly into the
> problem.
> >
> > Grey, can you expand on this a bit?  Do you mean use that as a penalty
> in the optimization function, or something else?
> >
> > David
>
>
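
A minimal NumPy sketch of the y -> x(y) mapping Grey describes above (the
function name is mine; any unconstrained or box-constrained optimizer can
then work on y directly):

```python
import numpy as np

def x_of_y(y):
    """Map y in R^(n-1) to x in R^n with x[0] = 1 - sum(y).

    By construction sum(x_of_y(y)) == 1 for every y, so the equality
    constraint sum(x) = 1 disappears from the problem; an optimizer
    only ever sees the n-1 free variables in y.
    """
    y = np.asarray(y, dtype=float)
    return np.concatenate(([1.0 - y.sum()], y))

y = np.array([0.2, 0.3, 0.1])
x = x_of_y(y)
print(x)        # -> [0.4 0.2 0.3 0.1]
print(x.sum())  # 1.0 up to floating-point rounding
```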
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
