On Thu, Feb 19, 2004 at 09:22:09AM -0800, Thomas Lumley wrote:

>>  So, what is the _right_ way of obtaining the SE? Why do those two
>>  formulas above differ?
> If you are maximising a likelihood then the covariance matrix of the
> estimates is (asymptotically) the inverse of the negative of the Hessian.
> The standard errors are the square roots of the diagonal elements of the
> covariance.
> So if you have the Hessian you need to invert it, if you have the
> covariance matrix, you don't.

Yes, the covariance matrix is the inverse of the Hessian, that's clear.
But my question is: why, in the first example:

    > sqrt(diag(2*out$minimum/(length(y) - 2) * solve(out$hessian)))
    The 2 in the line above represents the number of parameters. A 95%
    confidence interval would be the parameter estimate +/- 1.96 SE. We
    can superimpose the least squares fit on a new plot:

- do we _not_ simply use 'sqrt(diag(solve(out$hessian)))', as in the
second example, but also fold in the "number of parameters" == 2 in some way?
What does the multiplier '2*out$minimum/(length(y) - 2)' mean?
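To make the comparison concrete, here is a minimal sketch (not from the
original thread; the data x, y and the straight-line model are my own
assumptions) that fits by least squares with nlm() and compares the
first formula against the analytic standard errors from lm():

```r
## Simulated straight-line data (an assumption for illustration only)
set.seed(1)
x <- 1:20
y <- 2 + 3 * x + rnorm(20)

## Residual sum of squares for parameters p = c(intercept, slope)
rss <- function(p) sum((y - p[1] - p[2] * x)^2)
out <- nlm(rss, c(0, 0), hessian = TRUE)

## out$minimum is the RSS, and the Hessian of an RSS is 2 * X'X, so
##   cov(beta) = sigma^2 * (X'X)^{-1} = 2 * sigma^2 * solve(out$hessian)
## with sigma^2 estimated by RSS / (n - p), here p = 2 parameters.
se.nlm <- sqrt(diag(2 * out$minimum / (length(y) - 2) * solve(out$hessian)))

## lm() computes the same quantity analytically
se.lm <- summary(lm(y ~ x))$coefficients[, "Std. Error"]

print(se.nlm)
print(se.lm)
```

If the objective being minimized really is an RSS (rather than a
negative log-likelihood), the two sets of standard errors should agree
up to numerical error in the Hessian.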



[EMAIL PROTECTED] mailing list
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
