>> Hello!
>>
>> Looking at how people use optim() to get MLEs, I noticed that one can
>> use the returned Hessian to get the corresponding standard errors,
>> i.e. something like
>>
>> result <- optim(<< snip >>, hessian = TRUE)
>> result$par                  # point estimates
>> vc <- solve(result$hessian) # var-cov matrix
>> se <- sqrt(diag(vc))        # standard errors
>>
>> What is the Hessian actually representing here? I apologize for my
>> lack of knowledge, but ... the attached PDF shows the problem I am
>> facing with this issue.
>>
>
> The Hessian is the second derivative of the objective function, so if
> the objective function is minus a log-likelihood, the Hessian is the
> observed Fisher information. The inverse of the Hessian is thus an
> estimate of the variance-covariance matrix of the parameters.
>
> For some models this is exactly I/n in your notation; for others it is
> just close (and there are in fact theoretical reasons to prefer the
> observed information). I don't remember whether the two-parameter gamma
> family is one where the observed and expected information are identical.
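As a minimal self-contained sketch of the recipe quoted above: fit a
two-parameter gamma by maximizing the log-likelihood with optim(), then
invert the returned Hessian to get standard errors. The simulated data,
sample size, and the choice of log-scale parameters (to keep shape and
rate positive during optimization) are all illustrative assumptions, not
part of the original post.

```r
set.seed(1)
x <- rgamma(200, shape = 3, rate = 2)   # simulated data (assumed)

## negative log-likelihood in (log(shape), log(rate)), so the
## optimizer can search over all of R^2
negll <- function(p) {
  shape <- exp(p[1]); rate <- exp(p[2])
  -sum(dgamma(x, shape = shape, rate = rate, log = TRUE))
}

fit <- optim(c(0, 0), negll, hessian = TRUE)

fit$par                   # point estimates (log scale)
vc <- solve(fit$hessian)  # inverse observed information = var-cov matrix
se <- sqrt(diag(vc))      # standard errors (log scale)
```

Because the model is parameterized on the log scale here, the standard
errors apply to log(shape) and log(rate); the delta method would be
needed to transport them back to the original scale.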
The optim help page says:

    hessian: Logical. Should a numerically differentiated Hessian
    matrix be returned?

I interpret this as providing a finite-difference approximation of the
Hessian (possibly based on exact gradients?). Is that the case, or is
it a Hessian that results from the optimization process itself?

Best,
Ingmar

______________________________________________
[email protected] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html
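One way to check the question above empirically: stats::optimHess()
applies the same numerical-differentiation routine that optim() uses
when hessian = TRUE, at any point you give it. If the Hessian returned
by optim() were a by-product of the optimization (e.g. BFGS's internal
approximation), the two would generally differ; in fact they agree. The
normal model and data below are illustrative assumptions.

```r
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)   # simulated data (assumed)

## negative log-likelihood of a normal; sd kept positive via log scale
negll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))

fit <- optim(c(0, 0), negll, hessian = TRUE)

## finite-difference Hessian computed separately at the optimum
H <- optimHess(fit$par, negll)

max(abs(H - fit$hessian))   # discrepancy between the two Hessians
```

The tiny (or zero) discrepancy is consistent with the help page: the
returned Hessian is a finite-difference approximation evaluated at the
final parameter values, after the optimization has finished.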
