I found out that if I null out the tolerances for my constraint functions, then
my exaggerated stopval of 1.0 is suddenly able to stop the optimization.

I'm using vector-valued constraint functions and providing an initialized
tolerance array for each one.  When the tolerance arrays are NULL, stopval
works, but the constraints are (obviously) no longer enforced.

Can you explain how my constraint tolerances might be interfering with the
program's ability to use other stopping criteria?  Criteria that I would have
assumed take priority?

Also, could it be that I'm not using constraint tolerances properly?  For
constraints that need to be met exactly, I have the tolerances at 0.0.  For
constraints where I'm willing to give some leeway, the tolerances are
substantially larger.  Are constraint tolerances of 0.0 to be avoided?  Or
rather, can a constraint tolerance of 0.0 somehow affect the algorithm's
ability to use other stopping criteria?

-Adam


_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
