On Sep 27, 2010, at 4:13 PM, Joachim Comes wrote:
First of all, thank you for NLopt!

If I use vector-valued constraints, the gradient (dc_i/dx_j, 1 <= i <= M, 1 <= j <= N) is stored in an array of dimension M x N. If I use the "normal" method to add constraints, the gradient only has dimension N, but it is computed M times. Is there still an M x N matrix internally that stores all M of these gradients of dimension N? Or is it really possible to save space by abandoning the vector-valued constraints?
Thank you,


It depends on the algorithm. In the MMA algorithm, for example, O(NM) storage is required internally no matter how the gradient is computed. In the AUGLAG algorithm, on the other hand, only O(N) storage is required if you compute each gradient separately.
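
(To make the two interfaces concrete, here is a minimal sketch of registering the same M inequality constraints both ways via the C API. The toy constraints c_i(x) = x_i^2 - 1 <= 0 and the names mconstraint, constraint_i, M, and idx are illustrations, not part of NLopt itself.)

    #include <nlopt.h>

    #define M 3  /* number of constraints (illustrative) */

    /* Vector-valued form: one callback fills all m constraint values and,
       when grad != NULL, the full m-by-n Jacobian in row-major order,
       grad[i*n + j] = dc_i/dx_j. */
    static void mconstraint(unsigned m, double *result, unsigned n,
                            const double *x, double *grad, void *data)
    {
        for (unsigned i = 0; i < m; ++i) {
            result[i] = x[i]*x[i] - 1.0;        /* c_i(x) = x_i^2 - 1 */
            if (grad) {
                for (unsigned j = 0; j < n; ++j) grad[i*n + j] = 0.0;
                grad[i*n + i] = 2.0 * x[i];
            }
        }
    }

    /* Separate form: one callback per constraint; grad has only length n. */
    static double constraint_i(unsigned n, const double *x, double *grad,
                               void *data)
    {
        unsigned i = *(unsigned *) data;  /* which constraint this call is for */
        if (grad) {
            for (unsigned j = 0; j < n; ++j) grad[j] = 0.0;
            grad[i] = 2.0 * x[i];
        }
        return x[i]*x[i] - 1.0;
    }

Registration, given an already-created nlopt_opt opt:

    double tol[M] = {1e-8, 1e-8, 1e-8};
    nlopt_add_inequality_mconstraint(opt, M, mconstraint, NULL, tol);

    /* ...versus one call per constraint: */
    static unsigned idx[M] = {0, 1, 2};
    for (unsigned i = 0; i < M; ++i)
        nlopt_add_inequality_constraint(opt, constraint_i, &idx[i], 1e-8);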

Computationally, computing the gradients together is only an advantage if they share some computation. However, even in this case you can sometimes use the separate-evaluation version, because NLopt always evaluates the constraints in order, from 1 to M, so you can compute the shared information during the first constraint evaluation and store it somewhere for access by the remaining constraints.
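
(A rough sketch of that caching trick, assuming every c_i(x) needs the same expensive intermediate quantity s(x); the struct shared_data and the function expensive_s are made up for illustration and are not NLopt API:)

    #include <nlopt.h>

    /* All M callbacks share one cached value of s(x); since the constraints
       are evaluated in order, constraint 0 refreshes the cache and the
       remaining constraints reuse it at the same x. */
    typedef struct {
        unsigned i;   /* index of this constraint */
        double  *s;   /* pointer to the shared cached quantity */
    } shared_data;

    static double expensive_s(unsigned n, const double *x)
    {
        double s = 0.0;                       /* stand-in for costly shared work */
        for (unsigned j = 0; j < n; ++j) s += x[j]*x[j];
        return s;
    }

    static double constraint_i(unsigned n, const double *x, double *grad,
                               void *data)
    {
        shared_data *d = (shared_data *) data;
        if (d->i == 0)                        /* first constraint: recompute */
            *d->s = expensive_s(n, x);
        if (grad)                             /* dc_i/dx_j = 2 x_j here */
            for (unsigned j = 0; j < n; ++j) grad[j] = 2.0 * x[j];
        return *d->s - (double)(d->i + 1);    /* toy constraint: s(x) <= i + 1 */
    }

    /* Setup: every record points at the same cache.
       static double s_cache;
       static shared_data d[M];
       for (unsigned i = 0; i < M; ++i) {
           d[i].i = i;  d[i].s = &s_cache;
           nlopt_add_inequality_constraint(opt, constraint_i, &d[i], 1e-8);
       }                                                                     */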

Steven


_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
