You can return 0 from the objective function (if you are maximizing it) in
case of a crash, and let the optimization algorithm take care of the rest.
Or you could specify the feasible domain at the start (if it is known
explicitly), so that no point outside the feasible domain is passed to your
objective function.
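For illustration, a minimal sketch of both ideas with the NLopt C API could
look like the following (the bounds, the choice of NLOPT_LD_MMA, and the
run_geometry() stand-in for your external codes are assumptions for the
example, not anything specific to your setup):

#include <nlopt.h>

/* Stand-in for the external geometry/analysis chain run via system();
   assume it returns nonzero when the external code crashes. */
static int run_geometry(const double *x, unsigned n, double *result)
{
    (void)n;
    /* placeholder smooth objective so the example is self-contained */
    *result = -(x[0] - 3.0) * (x[0] - 3.0) - (x[1] - 2.0) * (x[1] - 2.0);
    return 0;
}

static double objective(unsigned n, const double *x, double *grad, void *data)
{
    double f;
    (void)data;
    if (run_geometry(x, n, &f) != 0) {
        /* crash: return the worst value for a maximization (0 here,
           as suggested above) and a zero gradient, and let the
           algorithm back off */
        if (grad)
            for (unsigned i = 0; i < n; ++i)
                grad[i] = 0.0;
        return 0.0;
    }
    if (grad) {
        /* gradient of the placeholder objective */
        grad[0] = -2.0 * (x[0] - 3.0);
        grad[1] = -2.0 * (x[1] - 2.0);
    }
    return f;
}

int main(void)
{
    double lb[2] = {0.1, 0.1};    /* assumed feasible box */
    double ub[2] = {10.0, 10.0};
    double x[2]  = {1.0, 1.0};    /* starting design vector */
    double fmax;

    nlopt_opt opt = nlopt_create(NLOPT_LD_MMA, 2);
    nlopt_set_lower_bounds(opt, lb);  /* keep iterates inside the box */
    nlopt_set_upper_bounds(opt, ub);
    nlopt_set_max_objective(opt, objective, NULL);
    nlopt_set_xtol_rel(opt, 1e-6);

    nlopt_optimize(opt, x, &fmax);
    nlopt_destroy(opt);
    return 0;
}

The relevant parts are nlopt_set_lower_bounds/nlopt_set_upper_bounds, which
keep the iterates inside the box, and the early return in the objective when
the external call fails.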

On Fri, Oct 5, 2012 at 11:51 AM, Ricardo Puente <[email protected]> wrote:

> Hello,
>
> I am solving some shape optimization problems using the gradient-based
> NLopt algorithms.
>
> One possible scenario, especially at the first step when the step size is
> the default one, is that the design vector is infeasible, which leads to
> crashes in the geometry generation codes (which are called via system()).
>
> I also have a simple steepest-descent algorithm which I use for trials and
> debugging; if something crashes, I retrieve an error signal and try again
> with a reduced step size.
>
> This is something I cannot do with the NLopt interface because, AFAIK, the
> objective function must be constructed with the design vector declared
> const. Thus, if I apply the step-reducing mechanism within the function,
> the new design vector is not communicated outside.
>
> So right now, I have to abort if the objective evaluation fails.
>
> Is there a way to work around this issue?  Or perhaps it wouldn't be too
> difficult to have, in a future version, something like a function that
> modifies the step if the evaluation fails.
>
> Regards,
>
> Ricardo
>
>
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss