On Wednesday, January 6, 2016 at 7:47:07 PM UTC-7, [email protected] 
wrote:
>
> Actually, I spoke too soon.
>
> While your suggestion is very effective for convergence routines that call 
> the objective function once at each step, it can lead to some pretty 
> confusing results if the objective function is called multiple times at 
> each step. For example, some of the routines for derivative-free 
> optimisation such as LN_COBYLA will do this in order to construct a linear 
> approximation. So for these routines, one has to fairly exhaustively filter 
> the printed output in order to get the actual objective function and 
> parameter values at a given step. It becomes pretty much impossible as the 
> dimension of the problem increases.
>

That's not how COBYLA works, except in the initialization phase.  After the 
first N+1 steps in N dimensions, it uses the memory of the previous steps to 
update its first-derivative approximation, rather than doing N+1 
evaluations on each step, which would be very expensive.

(Avoiding the necessity of doing N+1 evaluations on each step is pretty 
much the whole point of using a specialized derivative-free algorithm 
rather than using a gradient-based algorithm where you use finite 
differences.)

In consequence, just printing the objective function usually gives a pretty 
good idea of how it is doing.  Often, I will print both the current 
objective function value and the best value found so far.
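For instance, with the NLopt Python interface, a wrapper along these lines 
prints both values on every evaluation (a minimal sketch; the quadratic test 
objective and the starting point are placeholders, not from this thread):

import nlopt
import numpy as np

# Best objective value seen so far, updated on every evaluation.
best = [np.inf]

def objective(x, grad):
    # LN_COBYLA is derivative-free, so grad is unused here.
    val = np.sum((x - 1.0) ** 2)  # placeholder objective for illustration
    best[0] = min(best[0], val)
    print("f(x) = %.6g   best so far = %.6g" % (val, best[0]))
    return val

opt = nlopt.opt(nlopt.LN_COBYLA, 2)
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-6)
x_opt = opt.optimize(np.array([5.0, -3.0]))
print("optimum:", x_opt, "f =", opt.last_optimum_value())

Even when COBYLA makes several evaluations per step, the "best so far" 
column is monotone, so it is easy to read the progress at a glance.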
