It would certainly be possible to modify the MLSL algorithm to force a local 
search from the first point.  One option is simply to do a local search first, 
then pass the result of that local search into MLSL as the starting point, 
although this is slightly suboptimal because MLSL might then do a redundant 
second local minimization from that point.
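In the C API, that workaround might look something like the sketch below.  The 
objective myfunc, the bounds, the dimension, and the tolerances are all 
placeholders for illustration, not part of your problem:

    #include <nlopt.h>
    #include <stdio.h>

    /* toy objective for illustration: quadratic with minimum at (3, -1) */
    double myfunc(unsigned n, const double *x, double *grad, void *data)
    {
        (void) n; (void) data;
        if (grad) {
            grad[0] = 2 * (x[0] - 3);
            grad[1] = 2 * (x[1] + 1);
        }
        return (x[0] - 3) * (x[0] - 3) + (x[1] + 1) * (x[1] + 1);
    }

    int main(void)
    {
        double lb[2] = {-10, -10}, ub[2] = {10, 10};
        double x[2] = {1, 2};   /* your initial guess */
        double minf;

        /* 1) local search (here L-BFGS) from the initial guess */
        nlopt_opt local = nlopt_create(NLOPT_LD_LBFGS, 2);
        nlopt_set_min_objective(local, myfunc, NULL);
        nlopt_set_lower_bounds(local, lb);
        nlopt_set_upper_bounds(local, ub);
        nlopt_set_ftol_rel(local, 1e-8);
        nlopt_optimize(local, x, &minf);  /* x now holds the local optimum */

        /* 2) MLSL, restarted from the local optimum found above */
        nlopt_opt global = nlopt_create(NLOPT_G_MLSL_LDS, 2);
        nlopt_set_min_objective(global, myfunc, NULL);
        nlopt_set_lower_bounds(global, lb);
        nlopt_set_upper_bounds(global, ub);
        nlopt_set_local_optimizer(global, local);  /* MLSL copies this */
        nlopt_set_maxeval(global, 10000);
        nlopt_optimize(global, x, &minf);

        printf("min %g at (%g, %g)\n", minf, x[0], x[1]);
        nlopt_destroy(global);
        nlopt_destroy(local);
        return 0;
    }

As noted above, MLSL will then likely repeat one local minimization near that 
point, which wastes a few evaluations but is otherwise harmless.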

The MLSL algorithm (see mlsl/mlsl.c in the NLopt source) is basically the 
following (sketched in pseudocode after the list):

1) Create a random sampling of points.  This includes your initial guess.  
Evaluate the objective at all of them.

2) Take the point with the smallest objective value.  If this is a "potential 
minimizer" according to the MLSL criteria (see is_potential_minimizer), then do 
a local search.  Otherwise, go to the next-smallest value, etc.

3) Add the resulting local minimum to the sample, and go to (2).
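Putting those steps together, the loop looks roughly like the following 
C-flavored pseudocode.  Everything here except is_potential_minimizer is a 
hypothetical stand-in name; the real implementation is in mlsl/mlsl.c:

    /* rough pseudocode, not the actual implementation; helper names
       other than is_potential_minimizer are hypothetical stand-ins */
    add_point(samples, x0);             /* initial guess joins the sample */
    add_random_points(samples, N);      /* step 1: random/low-discrepancy */
    while (!stopping_criteria_met()) {
        sort_by_objective(samples);     /* smallest objective first */
        for (i = 0; i < num_points(samples); ++i) {
            if (is_potential_minimizer(samples, i)) {   /* MLSL criteria */
                xmin = local_search(point(samples, i)); /* step 2 */
                add_point(samples, xmin);               /* step 3 */
                break;
            }
        }
        add_random_points(samples, N);  /* grow the sample and repeat */
    }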

Looking at the code to refresh my memory, the only cases in which it would not 
do a local search from your initial point are:
        a) it finds a smaller objective value in the random sampling phase, or
        b) your initial point is too close to the boundary of your search domain.
I'm guessing you are hitting (b).
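If you want to sanity-check (b), a quick diagnostic along these lines would 
do.  This is just an illustrative sketch; the exact closeness threshold that 
MLSL uses lives in mlsl.c and isn't reproduced here:

    #include <math.h>
    #include <float.h>

    /* distance from x to the nearest face of the box [lb, ub];
       a tiny result suggests case (b) above */
    double dist_to_boundary(unsigned n, const double *x,
                            const double *lb, const double *ub)
    {
        double d = DBL_MAX;
        for (unsigned i = 0; i < n; ++i)
            d = fmin(d, fmin(x[i] - lb[i], ub[i] - x[i]));
        return d;
    }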

It would be quite reasonable, and rather easy, to modify the algorithm to 
force it to start its local search from your initial guess.
Feel free to submit a pull request (PR) on GitHub.

> On Nov 4, 2015, at 5:57 AM, Helske, Jouni <[email protected]> wrote:
> 
> Hello,
>  
> Not sure if I should have made this an issue on GitHub, but I have a 
> question/suggestion regarding the multi-level single-linkage method.
>  
> I use NLopt's MLSL-LDS optimization via the nloptr R package when working 
> with hidden Markov models (I am an author of the seqHMM R package). When I 
> compared results from global optimization via MLSL versus just local 
> optimization with L-BFGS (which I also used as the local optimizer in MLSL), 
> I realized that MLSL does not seem to perform local optimization using the 
> initial parameter values. If I read the source correctly, they are only used 
> in defining the initial sample (in my own code I also set parameter 
> boundaries based on some heuristics with the initial values). It is often 
> the case that performing the local optimization with good initial values 
> already finds the global optimum, and the additional global phase is just 
> for the "proof". But in at least one of my example models, global 
> optimization via MLSL doesn't find the optimum in a reasonable time, as it 
> does not check the initial guess at all.
>  
> So could the algorithm be altered so that the local optimization is first 
> performed on the initial guess, and additional points are drawn afterwards, 
> based either on the initial guess or on the result of that first local 
> optimization? The first option wouldn't interfere with MLSL itself (just one 
> additional local optimization before proceeding as before), so for backward 
> compatibility it might be the safer option, whereas the second, without my 
> knowing the details of MLSL, seems intuitively a good alternative, as we 
> would at least be somewhere in the right neighborhood compared to a 
> completely bogus initial guess.
>  
> Best regards,
>  
> Jouni Helske
> Department of Mathematics and Statistics
> P.O. Box 35 (MaD)
> 40014 University of Jyväskylä
> Finland