In most cases, putting the reporting code in the objective function and
only calling it when the new solution is better will do the practical
job of keeping track of the trajectory of the search (a rough sketch of
such a wrapper appears after the list below). However, off the top of my
head there are a couple of cases for which this won't be exact, neither
of them involving nonlinear constraints.
1. Global optimization algorithms that don't guarantee monotonic
convergence to the minimum will not have each iteration tracked if only
best solutions are recorded. If all solutions are recorded, then any
"exploratory" calls to the objective function will end up becoming part
of the trajectory.
2. Consider a simple coordinate descent algorithm that takes a step to
its left, computes the gradient at this point (which also happens to be
the best solution so far, and is therefore recorded), then takes a step
to its right and throws an exception. The log of the trajectory will
suggest the search ended at the left-step point, whereas the algorithm's
current solution never actually reached that point.
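For concreteness, here is a minimal sketch of the wrapper approach I
described above, using NLopt's C API. The sphere objective, the
log_state struct, and the choice of LBFGS are purely illustrative; the
only point is that the reporting lives inside the objective and fires
only on improvement.

    #include <nlopt.h>
    #include <math.h>
    #include <stdio.h>

    /* Illustrative logging state: best objective value seen so far. */
    typedef struct {
        double best;
        long   evals;
    } log_state;

    /* Placeholder sphere objective that reports whenever a new best
       point is found. */
    static double logged_objective(unsigned n, const double *x,
                                   double *grad, void *data)
    {
        log_state *s = (log_state *) data;
        double f = 0.0;
        for (unsigned i = 0; i < n; ++i) {
            f += x[i] * x[i];
            if (grad) grad[i] = 2.0 * x[i];
        }
        s->evals++;
        if (f < s->best) {              /* only record improvements */
            s->best = f;
            printf("eval %ld: new best f = %g\n", s->evals, f);
        }
        return f;
    }

    int main(void)
    {
        double x[2] = {1.0, 1.0};
        double minf;
        log_state s = {HUGE_VAL, 0};

        nlopt_opt opt = nlopt_create(NLOPT_LD_LBFGS, 2);
        nlopt_set_min_objective(opt, logged_objective, &s);
        nlopt_set_xtol_rel(opt, 1e-6);
        nlopt_optimize(opt, x, &minf);
        printf("final f = %g after %ld evaluations\n", minf, s.evals);
        nlopt_destroy(opt);
        return 0;
    }

This is exactly where the two cases above bite: the log records
improving evaluations, not the algorithm's accepted iterates.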
And yes, I'm very aware that these cases are splitting hairs (especially
#2), and that in practice all that matters is whether a solution was
tried for any reason and determined to be feasible and minimal. But darn
it, the obsessive-compulsive part of me wants a nice, pure log file, free
of clutter from extraneous states that were never officially a current
state in my search!
My motivation comes from an application I'm building that uses NLopt
algorithms interchangeably with existing homegrown optimizers. The
homegrown ones are built to print out status at every full state along
the search path, so if I attempt to mirror this behavior for the NLopt
algorithms, I'll end up either printing inter-iteration states or
skipping reports for transitions to worse states. Neither of these is
terribly appealing. Of course, this is in no way essential
functionality, but I wanted to inquire about it anyway.
Thanks for your time,
Greg
Steven G. Johnson wrote:
If the purpose is just to track whether the optimization is making
progress, and how fast, just using the objective function for this
purpose seems reasonable enough to me. After all, if you are tracking
progress, you probably want to know if the algorithm is just cycling
around some point for a long time without making much progress. And
if you want to filter it so that you only show points that improve
upon the objective, it is easy for the user to do that herself.
Can you more precisely define a situation in which the objective
function itself does not have enough information to do the desired
processing?
The only thing I can think of is if you have nonlinear constraints, and
you want to know the sequence of "best so far" points (counting only
feasible points, or points that at least improve feasibility). In that
case you would have to keep track of what both the objective function
and the constraints returned, which would be somewhat annoying. I can
certainly add another callback function that is called for each "best
so far" point. But this is different from the "iteration" metric that
you suggest (to the extent that "iterations" is at all a well-defined
concept).
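As a rough illustration of the bookkeeping being described, assuming
NLopt's C API: both the objective and an inequality constraint can write
to a shared log, and deciding after the fact which entries correspond to
feasible, improving points is precisely the annoying part. The quadratic
objective, the simple linear constraint, the file name, and the choice
of MMA below are placeholders, not anything NLopt prescribes.

    #include <nlopt.h>
    #include <stdio.h>

    /* Shared log: both callbacks append here so objective and
       constraint evaluations can be correlated in post-processing. */
    typedef struct { FILE *log; } track;

    static double obj(unsigned n, const double *x, double *grad, void *data)
    {
        track *t = (track *) data;
        double f = (x[0] - 1.0) * (x[0] - 1.0) + x[1] * x[1];  /* placeholder */
        if (grad) { grad[0] = 2.0 * (x[0] - 1.0); grad[1] = 2.0 * x[1]; }
        fprintf(t->log, "obj  x=(%g,%g) f=%g\n", x[0], x[1], f);
        return f;
    }

    static double con(unsigned n, const double *x, double *grad, void *data)
    {
        track *t = (track *) data;
        double c = 0.5 - x[0];                   /* feasible when c <= 0 */
        if (grad) { grad[0] = -1.0; grad[1] = 0.0; }
        fprintf(t->log, "con  x=(%g,%g) c=%g\n", x[0], x[1], c);
        return c;
    }

    int main(void)
    {
        track t = { fopen("trajectory.log", "w") };
        double x[2] = {0.0, 0.0}, minf;

        nlopt_opt opt = nlopt_create(NLOPT_LD_MMA, 2);
        nlopt_set_min_objective(opt, obj, &t);
        nlopt_add_inequality_constraint(opt, con, &t, 1e-8);
        nlopt_set_xtol_rel(opt, 1e-6);
        nlopt_optimize(opt, x, &minf);

        nlopt_destroy(opt);
        fclose(t.log);
        return 0;
    }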
Steven
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss