"J.R. Lockwood" <[EMAIL PROTECTED]> writes: > > > > Dear listers, > > > > I can't find the variance or se of the coefficients in a multilevel model > > using lme. > > > > The component of an lme() object called "apVar" provides the estimated > asymptotic covariance matrix of a particular transformation of the > variance components. Dr. Bates can correct me if I'm wrong but I > believe it is the matrix logarithm of Cholesky decomposition of the > covariance matrix of the random effects. I believe the details are in > the book by Pinheiro and Bates. Once you know the transformation you > can use the "apVar" elements to get estimated asympotic standard > errors for your variance components estimates using the delta method. > > J.R. Lockwood > 412-683-2300 x4941 > [EMAIL PROTECTED] > http://www.rand.org/methodology/stat/members/lockwood/
First, thanks to those who answered the question.  I have been away from
my email for about a week and am just now catching up on the r-help list.

As I understand the original question from Andrej, he wants to obtain the
standard errors for coefficients in the fixed-effects part of the model.
Those are calculated in the summary method for lme objects and returned
as the component called 'tTable'.  Try

    library(nlme)
    example(lme)
    summary(fm2)$tTable

to see the raw values.

Other software for fitting mixed-effects models, such as SAS PROC MIXED
and HLM, returns standard errors along with the estimates of the
variances and covariances of the random effects.  We don't return
standard errors of estimated variances because we don't think they are
useful.  A standard error for a parameter estimate is most useful when
the distribution of the estimator is approximately symmetric, and these
are not.  Instead we feel that the variances and covariances should be
converted to an unconstrained scale, and preferably a scale for which
the log-likelihood is approximately quadratic.

The apVar component that you mention is an approximate
variance-covariance matrix of the variance components on an unbounded
parameterization that uses the logarithm of any standard deviation and
Fisher's z transformation of any correlations.  If all
variance-covariance matrices being estimated are 1x1 or 2x2 then this
parameterization is both unbounded and unconstrained.  If any are 3x3 or
larger then this parameterization must be further constrained to ensure
positive definiteness.  Nevertheless, once we have finished the
optimization we convert to this 'natural' parameterization to assess the
variability of the estimates because these parameters are easily
interpreted.

The actual optimization of the profiled log-likelihood is done using the
log-Cholesky parameterization that you mentioned because it is always
unbounded and unconstrained.  Interpreting elements of this parameter
vector is complicated.

I hope this isn't too confusing.
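For completeness, a short illustration of both points, again using the
model from example(lme): the tTable component gives standard errors for
the fixed effects, and intervals() (an nlme function) summarises the
variability of the variance-component estimates as back-transformed
confidence intervals rather than standard errors:

    library(nlme)
    fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)

    summary(fm2)$tTable   # fixed effects: Value, Std.Error, DF, t-value, p-value
    intervals(fm2)        # fixed effects and variance components with confidence intervals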
