Hi,
I am new to nlopt (and haven't done any optimization for years and years...)
I also feel this is probably a FAQ, but I've had a bit of a search and cannot 
find anything...

I am doing all this in C++, but using the C (nlopt_opt) interface, as the C++
interface has some incompatibilities with other libraries my framework obliges
me to use...

For the purposes of evaluating my objective function, I would like to display 
the evolving parameter set at intermediate points through the optimization 
process.
Can I do this by:
- setting maxeval to a moderate number
- when that terminates, taking the current parameters and displaying them
- then optimizing again using the current parameters as a new start point?
The argument against this is that the second and subsequent runs have
presumably lost any working state that was built up in previous runs, so my
precise questions are:
1) can I optimize again on the same nlopt_opt object? (I assume I can)
2) if I do so, will I suffer from "starting again" each time? How much might
this impact performance?

3) is there a way to tell nlopt to "resume" rather than "start over"?
4) or alternately, can I pull some working state out of the optimizer after one
run and re-inject it before the next? (e.g. I have access to initial step
sizes, so can I read out the final step sizes and supply those to the next run?
And is there any point to this if there is other internal state that I cannot
preserve?)


I do not know exactly which algorithm I will be using, but I intend to
constrain the objective to forms I can easily evaluate gradients for, and a
local minimum suits my purposes (I'm designing the system under optimization,
and have some flexibility to try to design poorly-optimized minima out of
it...)

Thanks in anticipation,
Ian
_______________________________________________
NLopt-discuss mailing list
NLopt-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
