The hessian from `optim' is not as accurate as the one from `numDeriv' (which uses Richardson extrapolation by default), so I would trust numDeriv's hessian over optim's. However, without seeing what you actually did, this is only a surmise.
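Here is a self-contained toy sketch (the data, model function, and starting values below are made up for illustration, not your actual problem) showing the comparison of the two hessians, the eigenvalue check for a saddle point, and the standard-error computation:

library(numDeriv)

set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 2 * exp(-0.3 * x) + rnorm(50, sd = 0.05)   # hypothetical data

## sum-of-squares objective for a two-parameter exponential decay
ssq <- function(p) sum((y - p[1] * exp(-p[2] * x))^2)

fit <- optim(c(1, 1), ssq, hessian = TRUE)

H.optim <- fit$hessian                       # finite-difference hessian from optim
H.nd    <- numDeriv::hessian(ssq, fit$par)   # Richardson extrapolation (default)

## At a true local minimum all eigenvalues should be positive;
## any negative eigenvalue suggests a saddle point or non-convergence.
eigen(H.nd, symmetric = TRUE, only.values = TRUE)$values

## Standard errors: for a plain sum-of-squares objective the inverse
## hessian must be scaled by 2 * sigma^2; the bare sqrt(diag(solve(H)))
## is what you would use for a negative log-likelihood.
sigma2 <- fit$value / (length(y) - length(fit$par))
sqrt(diag(2 * sigma2 * solve(H.nd)))

Note also that when the hessian has negative eigenvalues, solve() can return negative diagonal entries, and sqrt() of those is NaN, which would explain the NaNs you are seeing. One common tactic in that situation is to restart optim from the point it returned (possibly with a different method, e.g. "BFGS") and see whether it moves off the saddle point.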
Ravi.

____________________________________________________________________

Ravi Varadhan, Ph.D.
Assistant Professor, Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: [email protected]

----- Original Message -----
From: Jonathan Phillips <[email protected]>
Date: Sunday, November 7, 2010 3:45 am
Subject: [R] saddle points in optim
To: [email protected]

> Hi,
> I've been trying to use optim to minimise least squares for a
> function, and then get a guess at the error using the hessian matrix
> (calculated from numDeriv::hessian, which I read in some other R-help
> post was meant to be more accurate than the hessian given in optim).
>
> To get the standard error estimates, I'm calculating
> sqrt(diag(solve(x))); I hope that's correct.
>
> I've found that using numDeriv's hessian gets me some NaNs for errors,
> whereas the one from optim gets me numbers for all parameters. If I
> look at the eigenvalues of numDeriv's hessian, I get two negative
> numbers (and six positive; I'm fitting eight parameters), so does this
> mean that optim hasn't converged correctly and has hit a saddle point?
> If so, is there any way I could assist it to find the minimum?
>
> Thanks,
> Jon Phillips

