Actually, I spoke too soon.

While your suggestion works well for convergence routines that call the 
objective function once at each step, it can lead to some pretty confusing 
results if the objective function is called multiple times at each step. 
For example, some of the routines for derivative-free optimisation, such as 
LN_COBYLA, will do this in order to construct a linear approximation. So 
for these routines, one has to filter the printed output fairly 
exhaustively to recover the actual objective function and parameter values 
at a given step, and that becomes pretty much impossible as the dimension 
of the problem increases.

So, great suggestion, but unfortunately it doesn't work for all algorithms. 
If anyone else has some suggestions, I'm still interested in solutions to 
this problem...

Cheers,

Colin

PS: here is some sample code to see what I'm talking about. In the printed 
output, particularly in the first three calls to the objective function, 
you can see that the algorithm constructs the linear approximation by 
repeated calls to the objective function. In fact, the second and third 
calls return exactly the same objective function value, but the routine 
doesn't terminate, because both of these calls are part of the same "step":

using NLopt

# Toy objective that prints every evaluation, so the call pattern is visible.
# (Defined before it is passed to min_objective! below.)
function objective_function(param::Vector{Float64}, grad::Vector{Float64})
    obj_func_value = param[1]^2 + param[2]^2 + 1.0
    println("Objective func value = " * string(obj_func_value))
    println("Parameter value = " * string(param))
    return obj_func_value
end

opt1 = Opt(:LN_COBYLA, 2)
lower_bounds!(opt1, [-10.0, -10.0])
upper_bounds!(opt1, [10.0, 10.0])
ftol_rel!(opt1, 1.0)  # deliberately loose tolerance to show it still doesn't stop
min_objective!(opt1, objective_function)
(fObjOpt, paramOpt, flag) = optimize(opt1, [9.0, 9.0])
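
As an aside, here is a minimal sketch of a related idea (untested beyond the 
toy problem above): instead of printing, the objective can push every 
evaluation into a vector, so the raw history can at least be filtered 
offline. The names eval_history and logging_objective are just placeholders 
of mine, and this still records individual evaluations rather than 
algorithm "steps":

using NLopt

# Sketch only: record every (parameter, objective) pair instead of printing,
# so the evaluation history can be inspected after optimize() returns.
const eval_history = Vector{Tuple{Vector{Float64}, Float64}}()

function logging_objective(param::Vector{Float64}, grad::Vector{Float64})
    obj_func_value = param[1]^2 + param[2]^2 + 1.0
    push!(eval_history, (copy(param), obj_func_value))  # copy, in case NLopt reuses the buffer
    return obj_func_value
end

# Used exactly like the objective above:
# min_objective!(opt1, logging_objective)
# Afterwards eval_history holds every call COBYLA made, in order, but there
# is still no way to tell which calls belong to the same "step".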


On Wednesday, 6 January 2016 11:31:51 UTC+11, Kristoffer Carlsson wrote:
>
> Can't you just print stuff in your julia objective function that you pass 
> to NLopt? 
