Re: [R] Poor performance of Optim

2011-10-02 Thread yehengxin
Thank you for your response! But the problem is that when I estimate a model without knowing the true coefficients, how can I know which reltol is good enough? 1e-8 or 1e-10? Why can commercial packages automatically determine the right reltol, but R cannot?
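A minimal sketch, not from the original message, of how the tolerance is passed to optim through its control list; negll and start are hypothetical placeholders for the user's negative log-likelihood and starting values (the default reltol is about 1.5e-8):

fit_default <- optim(start, negll)   # default Nelder-Mead, default reltol
fit_tight   <- optim(start, negll,
                     control = list(reltol = 1e-12, maxit = 5000))
fit_default$par - fit_tight$par      # how much the estimates move when the tolerance is tightened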

Re: [R] Poor performance of Optim

2011-10-02 Thread yehengxin
What I tried is just a simple binary probit model: create random data and use optim to maximize the log-likelihood function to estimate the coefficients (e.g. u = 0.1 + 0.2*x + e, where e is standard normal, and y = (u > 0), with y indicating a binary choice variable). If I estimate the coefficient of x, I
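For concreteness, a sketch of the kind of exercise described above (the exact code is not in the original post): simulate the probit model, write the negative log-likelihood, hand it to optim, and compare with glm.

set.seed(1)
n <- 10000
x <- rnorm(n)
u <- 0.1 + 0.2 * x + rnorm(n)    # latent utility
y <- as.numeric(u > 0)           # observed binary choice

negll <- function(b) {           # negative log-likelihood of the probit model
  z <- b[1] + b[2] * x
  -sum(y * pnorm(z, log.p = TRUE) + (1 - y) * pnorm(-z, log.p = TRUE))
}

fit <- optim(c(0, 0), negll)     # default Nelder-Mead
fit$par                          # should be close to c(0.1, 0.2)
coef(glm(y ~ x, family = binomial(link = "probit")))   # benchmark fit via glm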

Re: [R] Poor performance of Optim

2011-10-02 Thread yehengxin
Oh, I think I got it. Commercial packages limit the number of decimals shown.
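A small illustration of that point (not from the thread): two estimates that differ in the seventh decimal place look identical once printing is limited to four significant digits.

a <- 0.2000001
b <- 0.2000004
print(c(a, b), digits = 4)   # both display as 0.2
a - b                        # the difference is still there, just not shown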

Re: [R] Poor performance of Optim

2011-10-02 Thread Daniel Malter
Ben Bolker sent me a private email rightfully correcting me: I was factually wrong when I wrote that ML /is/ a numerical method (I had written sloppily and under time pressure). He is of course right to point out that not all maximum likelihood estimators require numerical methods to solve.

Re: [R] Poor performance of Optim

2011-10-02 Thread Daniel Malter
And there I caught myself with the next blooper: it wasn't Ben Bolker, it was Bert Gunter who pointed that out. :) Daniel Malter wrote: Ben Bolker sent me a private email rightfully correcting me: I was factually wrong when I wrote that ML /is/ a numerical method (I had written sloppily

Re: [R] Poor performance of Optim

2011-10-02 Thread Ravi Varadhan
Hi, You really need to study the documentation of optim carefully before you make broad generalizations. There are several algorithms available in optim. The default is a simplex-type algorithm called Nelder-Mead. I think this is an unfortunate choice as the default algorithm. Nelder-Mead
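For example, the method argument of optim selects the algorithm; a sketch using the hypothetical negll and start from the earlier examples:

fit_nm   <- optim(start, negll)                    # default: Nelder-Mead
fit_bfgs <- optim(start, negll, method = "BFGS")   # quasi-Newton alternative
rbind(NelderMead = fit_nm$par, BFGS = fit_bfgs$par)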

[R] Poor performance of Optim

2011-10-01 Thread yehengxin
I used to consider using R and Optim to replace my commercial packages: Gauss and Matlab. But it turns out that Optim does not converge completely. The same data converge very well in Gauss and Matlab. I see that there are so many packages based on optim, and I really doubt whether they can be

Re: [R] Poor performance of Optim

2011-10-01 Thread Rubén Roa
-Original Message- From: r-help-boun...@r-project.org on behalf of yehengxin Sent: Sat 10/1/2011 8:12 AM To: r-help@r-project.org Subject: [R] Poor performance of Optim I used to consider using R and Optim to replace my commercial packages: Gauss and Matlab. But it turns out that Optim

Re: [R] Poor performance of Optim

2011-10-01 Thread Joshua Wiley
Is there a question or point to your message or did you simply feel the urge to inform the entire R-help list of the things that you consider? Josh On Fri, Sep 30, 2011 at 11:12 PM, yehengxin xy...@hotmail.com wrote: I used to consider using R and Optim to replace my commercial packages: Gauss

Re: [R] Poor performance of Optim

2011-10-01 Thread Marc Girondot
On 01/10/11 08:12, yehengxin wrote: I used to consider using R and Optim to replace my commercial packages: Gauss and Matlab. But it turns out that Optim does not converge completely. What does "completely" mean here? The same data converge very well in Gauss and Matlab. I see that there
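One concrete way to make "converged completely" checkable (a sketch, not part of the original reply): inspect the convergence code and message that optim returns, again using the hypothetical negll and start.

fit <- optim(start, negll)
fit$convergence   # 0 means optim reported successful convergence; 1 means maxit was reached
fit$message       # method-specific diagnostic, often NULL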

Re: [R] Poor performance of Optim

2011-10-01 Thread Spencer Graves
Have you considered the optimx package? I haven't tried it, but it was produced by a team of leading researchers in nonlinear optimization, including those who wrote most of optim (http://user2010.org/tutorials/Nash.html) years ago. There is a team actively working on this.
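A minimal sketch of what trying optimx might look like (the package is on CRAN; this code is not from the original message, and negll/start are the hypothetical objects used above):

## install.packages("optimx")   # if not already installed
library(optimx)
res <- optimx(start, negll, method = c("Nelder-Mead", "BFGS", "nlminb"))
res   # one row per method, with parameter estimates, objective value, and convergence code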

Re: [R] Poor performance of Optim

2011-10-01 Thread Daniel Malter
With respect, your statement that R's optim does not give you a reliable estimator is bogus. As pointed out before, this would depend on when optim believes it's good enough and stops optimizing. In particular, if you stretch out x, then it is plausible that the likelihood function will become flat
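One way to deal with that kind of scaling problem, offered here as a sketch rather than as part of the original message, is optim's parscale control, which tells the optimizer the typical magnitude of each parameter (the values below are illustrative, assuming the two-parameter negll above):

fit_scaled <- optim(start, negll,
                    control = list(parscale = c(1, 1e-3)))  # e.g. intercept ~ 1, slope ~ 1e-3 after stretching x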