Hello,

I am not sure whether I should have filed this as a GitHub issue instead, but I 
have a question/suggestion regarding the multi-level single-linkage (MLSL) 
method.

I use NLopt's MLSL-LDS optimization via the nloptr R package when working with 
hidden Markov models (I am an author of the seqHMM R package). When I compared 
the results of global optimization via MLSL against plain local optimization 
with L-BFGS (the same local optimizer I use inside MLSL), I realized that MLSL 
does not seem to perform a local optimization starting from the initial 
parameter values. If I read the source correctly, the initial values are only 
used in defining the initial sample (in my own code I also set the parameter 
boundaries based on heuristics involving the initial values). Yet it is often 
the case that a local optimization from good initial values already finds the 
global optimum, and the additional global phase serves only as "proof". In at 
least one of my example models, global optimization via MLSL fails to find the 
optimum in reasonable time precisely because it never evaluates the initial 
guess.

So could the algorithm be altered so that the local optimization is first 
performed from the initial guess, and additional points are drawn only after 
that, either still based on the initial guess or based on the result of that 
first local optimization? The first option would not interfere with MLSL 
itself (just one extra local optimization before proceeding as before), so it 
might be the safer choice for backwards compatibility. The second option, 
though I do not claim to understand the details of MLSL, seems intuitively 
appealing, since we would then at least be somewhere in the right neighborhood 
instead of starting from a completely bogus initial guess.
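In the meantime, the behaviour I am proposing can be approximated in user code 
by chaining the two calls from the sketch above: run the local optimizer from 
the initial guess first, then hand its solution to MLSL and keep the better of 
the two results.

## Continuing the sketch above (reusing eval_f, x0, lb, ub, local_res).
## This corresponds to the second option, starting MLSL from the local
## optimum; for the first, back-compatible option one would pass x0 here.
global_res <- nloptr(
  x0 = local_res$solution, eval_f = eval_f, lb = lb, ub = ub,
  opts = list(
    algorithm  = "NLOPT_GD_MLSL_LDS",
    local_opts = list(algorithm = "NLOPT_LD_LBFGS", xtol_rel = 1e-8),
    maxeval    = 5000))

## Keep whichever run found the lower objective.
best <- if (local_res$objective <= global_res$objective) local_res else global_res

Of course, this is only a workaround outside the library; having MLSL itself 
start from the initial guess would avoid the duplicated setup.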

Best regards,

Jouni Helske
Department of Mathematics and Statistics
P.O. Box 35 (MaD)
40014 University of Jyväskylä
Finland


