On Jul 12, 2011, at 9:33 AM, Julio Rojas wrote:
As "n" grows, the number of terms of the gradient also grow by a factor of 8, so doing a gradient based method would be out of the question.

If I understand you correctly, you are falling prey to a common misconception.

Actually, there is a theorem that you can always compute the gradient of a function with effort within a (small) constant factor of the effort to evaluate the function once, independent of the number of input variables.

To do this, however, you compute the gradient in a somewhat nonobvious manner, known as an "adjoint method" or "reverse-mode differentiation" (also called "backwards differentiation"). (There are even "automatic differentiation" tools that will do this for you.)
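
To make the constant-factor claim concrete, here is a minimal sketch using JAX's reverse-mode autodiff in Python (the toy objective f below is just a placeholder, not your actual function; any reverse-mode AD tool behaves similarly):

import jax
import jax.numpy as jnp

def f(x):
    # toy scalar objective of n variables (placeholder, not your f)
    return jnp.sum(jnp.sin(x) * jnp.exp(-x**2))

grad_f = jax.grad(f)   # reverse-mode gradient function, built automatically

x = jnp.linspace(0.0, 1.0, 1000)   # n = 1000 variables
print(f(x))          # one function evaluation
print(grad_f(x))     # all 1000 gradient components, at a cost within a
                     # small constant factor of evaluating f once

The point is that the cost of grad_f does not grow with n the way n separate finite differences (or counting symbolic terms) would.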

Another difficulty with using gradient-based methods is that, the way you have defined your function, it is not differentiable (thanks to "if-then" and similar piecewise definitions). As mentioned in the NLopt introduction, however, it is usually possible to reformulate the problem in a differentiable manner by insertion of dummy variables. See:
        
http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction#Equivalent_formulations_of_optimization_problems
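
As a hedged, self-contained illustration of the dummy-variable trick from that page: suppose the objective were the non-smooth minimax of two functions f1 and f2 (placeholders, not your actual problem). Minimizing max(f1(x), f2(x)) can be rewritten as minimizing an extra variable t subject to f1(x) - t <= 0 and f2(x) - t <= 0, which is smooth. In NLopt's Python interface this might look like:

import nlopt
import numpy as np

def f1(x): return (x[0] - 1.0)**2   # placeholder
def f2(x): return (x[0] + 1.0)**2   # placeholder

n = 2   # variables: x[0] and the dummy variable t = x[1]
opt = nlopt.opt(nlopt.LN_COBYLA, n)

# objective is just the dummy variable t
opt.set_min_objective(lambda x, grad: x[1])

# constraints f_i(x) - t <= 0
opt.add_inequality_constraint(lambda x, grad: f1(x) - x[1], 1e-8)
opt.add_inequality_constraint(lambda x, grad: f2(x) - x[1], 1e-8)

opt.set_xtol_rel(1e-6)
xopt = opt.optimize(np.array([0.5, 10.0]))   # initial guess for (x, t)
print(xopt)

The same trick often applies to absolute values and to if-then branches that merely select among smooth pieces, as the wiki page explains.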

I decided to use "NLOPT_LN_BOBYQA", thinking it wouldn't be that hard to solve this problem. Nonetheless, I have found that the second constraint, which limits the maximum uncertainty of the solution, is repeatedly violated by the solver. How can I control this problem? Or is it a limitation inherent to the solver?

How do you even impose these constraints in BOBYQA? BOBYQA only supports simple bound (box) constraints on the variables; it has no support for nonlinear constraints like your uncertainty limit.
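
If you do need a nonlinear inequality constraint together with a derivative-free local method, one option (just a sketch under my assumptions; the objective and constraint below are placeholders, not your problem) is to use an algorithm that accepts such constraints directly, e.g. COBYLA, or to wrap BOBYQA inside NLopt's augmented-Lagrangian (AUGLAG) method:

import nlopt
import numpy as np

n = 3

def objective(x, grad):
    return np.sum((x - 0.5)**2)          # placeholder objective

def uncertainty_constraint(x, grad):
    # placeholder for "maximum uncertainty <= limit", written as c(x) <= 0
    return np.sum(x**2) - 1.0

opt = nlopt.opt(nlopt.AUGLAG, n)         # augmented-Lagrangian outer loop
opt.set_min_objective(objective)
opt.add_inequality_constraint(uncertainty_constraint, 1e-8)
opt.set_lower_bounds(np.zeros(n))
opt.set_upper_bounds(np.ones(n))
opt.set_xtol_rel(1e-6)

local = nlopt.opt(nlopt.LN_BOBYQA, n)    # bound-constrained inner solver
local.set_xtol_rel(1e-6)
opt.set_local_optimizer(local)

xopt = opt.optimize(np.full(n, 0.25))
print(xopt, opt.last_optimum_value())

AUGLAG folds the constraint into a sequence of penalized bound-constrained subproblems that BOBYQA can handle; whether that (or COBYLA) works well for your problem is something you would have to test.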

