Hi all,

I am trying to optimise the log-likelihood of a Gaussian process. This is 
a straight port of some code from MATLAB, so I know the gradients are 
correct. Using Optim.jl I don't have too many problems (I was once told 
"dphia < 0", but I can't replicate it). 
Using NLopt, which the documentation seems to imply should be more stable, 
I regularly get failures if I try to run for more than a couple of 
iterations - generally it will work for 5 to 10 iterations, no more. The 
exit code is quite unhelpful: simply "NLopt failure". I have no problems 
running gradient-free methods (e.g. COBYLA), which would lead me to think 
my gradients were incorrect - except that they work in MATLAB and with 
Optim.jl, and check out against finite differences.
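For reference, this is roughly how I'm checking the gradients - a minimal sketch, where `nll` and `nll_grad` are placeholders standing in for my actual GP negative log-likelihood and its analytic gradient:

```julia
# Placeholder objective and analytic gradient; my real GP nll goes here.
nll(x) = sum(x .^ 2) / 2
nll_grad(x) = copy(x)

# Central finite differences: (f(x + h*e_i) - f(x - h*e_i)) / 2h per coordinate.
function fd_gradient(f, x; h = 1e-6)
    g = similar(x)
    for i in eachindex(x)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)
    end
    return g
end

x0 = [0.3, -1.2, 2.0]
# Max elementwise discrepancy; for a correct gradient this should be tiny.
err = maximum(abs.(nll_grad(x0) .- fd_gradient(nll, x0)))
```

The analytic gradient agrees with the finite-difference one to well within the truncation error of the scheme.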

Any ideas on how to start debugging this? I could do with being able to 
apply constraints.
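In case it matters, my NLopt setup looks roughly like the sketch below (quadratic objective and all names are placeholders, not my GP code). One thing I've been careful about is that NLopt.jl requires the gradient to be written *in place* - rebinding with `grad = ...` silently hands NLopt a zero gradient, and I gather exceptions thrown inside the objective also surface as unhelpful failure codes:

```julia
using NLopt

function myobj(x::Vector, grad::Vector)
    if length(grad) > 0
        # Must mutate grad in place; `grad = 2 .* x` would rebind a local
        # and NLopt would never see the values.
        grad[1] = 2x[1]
        grad[2] = 2x[2]
    end
    return x[1]^2 + x[2]^2
end

opt = Opt(:LD_LBFGS, 2)
min_objective!(opt, myobj)
lower_bounds!(opt, [-5.0, -5.0])   # box constraints, which is what I'm after
xtol_rel!(opt, 1e-8)

minf, minx, ret = optimize(opt, [1.0, 1.0])
# `ret` is a Symbol, e.g. :SUCCESS or :XTOL_REACHED on success.
```

This toy version converges fine; it's only my real likelihood that triggers "NLopt failure" after a handful of iterations.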

Tom
