That is very interesting, thanks for responding. And apologies for not
checking the source paper before posting.
However, given what you've said, I think there is something I am not
understanding about stopping criteria on the objective function. Is this an
appropriate forum to ask the question? If not, I'd be very grateful if you
could direct me to a web resource for asking questions about NLopt. If this
is an appropriate place, the question follows:
I've got an optimisation problem that I'm trying to solve with COBYLA. My
actual problem is a bit lengthy to reproduce here, but I can duplicate the
issue on a much simpler one. If my understanding is correct (and it may not
be), then ftol_rel and ftol_abs don't appear to be working for this
algorithm. On my machine (Ubuntu 14.04, Julia 0.4, NLopt 0.2.3), the code at
the end of this post prints the following sequence of objective function
values as the final five steps of COBYLA:
1.161
1.074
1.004
1.017
1.038
Note that in the code I have both ftol_rel and ftol_abs set to 0.5, so by my
understanding either stopping criterion on its own should have halted the
algorithm at any of the last four of those steps. In fact, they should have
kicked in at any of the final 8 steps.
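To spell out my reading of those two criteria (which may be the thing I've
got wrong), namely that the run should stop once the change in the objective
value between steps is below ftol_abs, or below ftol_rel times the current
objective value, here is the quick check I did on those five values; every
consecutive change is far below 0.5 on both counts:

# Consecutive changes in the objective values listed above, checked
# against my (possibly wrong) reading of the ftol_rel / ftol_abs criteria.
vals = [1.161, 1.074, 1.004, 1.017, 1.038]
for i in 2:length(vals)
    dabs = abs(vals[i] - vals[i-1])
    drel = dabs / abs(vals[i])
    println("step ", i, ": |df| = ", dabs, ", |df|/|f| = ", drel)
end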
So the question is: what am I missing here?
Code follows:
using NLopt

# Simple quadratic objective with a minimum value of 1.0 at (0, 0).
# grad is unused because COBYLA is derivative-free.
function objective_function(param::Vector{Float64}, grad::Vector{Float64})
    obj_func_value = param[1]^2 + param[2]^2 + 1.0
    println("Objective func value = " * string(obj_func_value))
    println("Parameter value = " * string(param))
    return obj_func_value
end

opt1 = Opt(:LN_COBYLA, 2)
lower_bounds!(opt1, [-10.0, -10.0])
upper_bounds!(opt1, [10.0, 10.0])
ftol_rel!(opt1, 0.5)  # relative tolerance on the objective value
ftol_abs!(opt1, 0.5)  # absolute tolerance on the objective value
min_objective!(opt1, objective_function)
(fObjOpt, paramOpt, flag) = optimize(opt1, [9.0, 9.0])
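(For what it's worth, I also print the return flag afterwards. My
understanding, which may be wrong, is that the third value returned by
optimize is a Symbol such as :FTOL_REACHED, :XTOL_REACHED or
:MAXEVAL_REACHED indicating which criterion ended the run, so checking it
should at least show whether the ftol settings are being registered.)

# Assumption: `flag` is a Symbol naming the stopping reason, e.g.
# :FTOL_REACHED if either ftol criterion triggered the stop.
println("Optimum objective = ", fObjOpt)
println("Optimum parameters = ", paramOpt)
println("Return flag = ", flag)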
Cheers and thanks again for responding,
Colin
On Thursday, 7 January 2016 14:42:56 UTC+11, Steven G. Johnson wrote:
> On Wednesday, January 6, 2016 at 7:47:07 PM UTC-7, [email protected]
> wrote:
>>
>> Actually, I spoke too soon.
>>
>> While your suggestion is very effective for convergence routines that
>> call the objective function once at each step, it can lead to some pretty
>> confusing results if the objective function is called multiple times at
>> each step. For example, some of the routines for derivative-free
>> optimisation such as LN_COBYLA will do this in order to construct a linear
>> approximation. So for these routines, one has to fairly exhaustively filter
>> the printed output in order to get the actual objective function and
>> parameter values at a given step. It becomes pretty much impossible as the
>> dimension of the problem increases.
>>
>
> That's not how COBYLA works, except in the initialization phase. After
> the first N+1 steps in N dimensions, it uses the memory of the previous
> steps to update its first-derivative approximation, rather than doing N+1
> evaluations on each step, which would be very expensive.
>
> (Avoiding the necessity of doing N+1 evaluations on each step is pretty
> much the whole point of using a specialized derivative-free algorithm
> rather than using a gradient-based algorithm where you use finite
> differences.)
>
> In consequence, just printing the objective function usually gives a
> pretty good idea of how it is doing. Often, I will print both the
> current objective function value and the best value found so far.
>
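P.S. Regarding the tip in the quoted reply about printing both the current
objective value and the best value found so far, here is a minimal sketch of
how I might do that (the wrapper and the global bookkeeping are my own, not
part of NLopt):

# Objective that also tracks and prints the best value seen so far.
best_so_far = Inf
function tracked_objective(param::Vector{Float64}, grad::Vector{Float64})
    global best_so_far
    obj_func_value = param[1]^2 + param[2]^2 + 1.0
    best_so_far = min(best_so_far, obj_func_value)
    println("current = ", obj_func_value, ", best so far = ", best_so_far)
    return obj_func_value
end
# Then pass it to the optimiser: min_objective!(opt1, tracked_objective)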