[R] random effects in lme

2005-02-02 Thread Christoph Scherber
Dear all,
Suppose I have a linear mixed-effects model (from the package nlme) with 
nested random effects (see below); how would I present the results from 
the random effects part in a publication?

Specifically, I'd like to know:
(1) What is the total variance of the random effects at each level?
(2) How can I test the significance of the variance components?
(3) Is there something like an r squared for the whole model which I 
can state?

The data come from an experiment on plant performance with and without 
insecticide, with and without grasses present, and across different 
levels of plant diversity (div).

Thanks for your help!
Christoph.
lme(asin(sqrt(response)) ~ treatment + logb(div + 1, 2) + grass,
random = ~ 1 | plotcode/treatment, na.action = na.exclude, method = "ML")
Linear mixed-effects model fit by maximum likelihood
Data: NULL
        AIC        BIC   logLik
  -290.4181   -268.719  152.209
Random effects:
Formula:  ~ 1 | plotcode
  (Intercept)
StdDev:  0.04176364
Formula:  ~ 1 | treatment %in% plotcode
 (Intercept)   Residual
StdDev:  0.08660458 0.00833387
Fixed effects: asin(sqrt(response)) ~ treatment + logb(div + 1, 2) + grass
                      Value  Std.Error DF   t-value p-value
(Intercept)       0.1858065 0.01858581 81  9.997225  <.0001
treatment         0.0201384 0.00687832 81  2.927803  0.0044
logb(div + 1, 2) -0.0203301 0.00690074 79 -2.946073  0.0042
grass             0.0428934 0.01802506 79  2.379656  0.0197
Standardized Within-Group Residuals:
       Min          Q1         Med         Q3       Max
-0.2033155 -0.05739679 -0.00943737 0.04045958 0.3637217
Number of Observations: 164
Number of Groups:
       plotcode  treatment %in% plotcode
             82                      164


RE: [R] random effects in lme

2005-02-02 Thread Lorenz . Gygax

It is unlikely that your request will be answered faster if you post it a
second time in exactly the same way ...

> Suppose I have a linear mixed-effects model (from the package
> nlme) with nested random effects (see below); how would I present
> the results from the random effects part in a publication?
>
> Specifically, I'd like to know:
> (1) What is the total variance of the random effects at each level?

> Random effects:
> Formula:  ~ 1 | plotcode
>         (Intercept)
> StdDev:  0.04176364
>
> Formula:  ~ 1 | treatment %in% plotcode
>          (Intercept)   Residual
> StdDev:   0.08660458 0.00833387

What is wrong with an estimated StdDev of 0.0418 at the level of plotcode and
of 0.087 at the level of treatment (which here amounts to saying that it is a
block of plants within plotcode that received the same treatment)?
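If what you want are the variances rather than the standard deviations,
squaring the StdDevs (or calling VarCorr() on the fitted object) gives them
directly. A minimal sketch, assuming your fitted model is stored in an object
called fit (that name is my assumption, not from your post):

library(nlme)

## fit is assumed to be the model from the original post, e.g.
## fit <- lme(asin(sqrt(response)) ~ treatment + logb(div + 1, 2) + grass,
##            random = ~ 1 | plotcode/treatment,
##            na.action = na.exclude, method = "ML")

VarCorr(fit)      # variances and standard deviations for each level

## squaring the printed StdDevs gives the variance components directly
0.04176364^2      # plotcode
0.08660458^2      # treatment %in% plotcode
0.00833387^2      # residual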

> (2) How can I test the significance of the variance components?

Fit a model with and without the component of interest and compare the two
models using anova().
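
A minimal sketch with the model from the original post (the object names are
my assumptions, and the data are taken to be accessible exactly as in your
own call):

library(nlme)

## full model: random intercepts for plotcode and for treatment within plotcode
fit.full <- lme(asin(sqrt(response)) ~ treatment + logb(div + 1, 2) + grass,
                random = ~ 1 | plotcode/treatment,
                na.action = na.exclude, method = "ML")

## reduced model: drop the treatment %in% plotcode variance component
fit.red <- lme(asin(sqrt(response)) ~ treatment + logb(div + 1, 2) + grass,
               random = ~ 1 | plotcode,
               na.action = na.exclude, method = "ML")

## likelihood-ratio test for the dropped variance component
anova(fit.full, fit.red)

Keep in mind that the tested variance sits on the boundary of its parameter
space, so the naive likelihood-ratio p-value from anova() tends to be
conservative.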

Perhaps you should read Pinheiro & Bates (2000)?

> (3) Is there something like an r squared for the whole
> model which I can state?

None is provided, and I do not know how easily one could be calculated. The
usefulness of the measure can also be questioned: it is an absolute measure
of how much of the variability in the data is explained. Imagine you had true
replicates (i.e. several measurements under absolutely identical conditions),
and that these measurements nevertheless showed some variability. If that
variability were substantial, your r-squared would be low even though your
model might pick up all the structure there is to find in the data. In other
words, a model can only do as well as the 'natural' variability in your data
allows, and the r-squared measure does not take that into account.

Thus, (corrected) r-squared values might tell you something if you compare
different models fitted to the same data (in a similar way to the AIC and BIC
criteria), but not if you compare models fitted to completely different data
sets.
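
If you nevertheless want a rough number of this kind, one ad hoc possibility
(only a sketch, not an established lme summary) is the squared correlation
between fitted and observed values:

## fit is the fitted lme object and response the untransformed response
## from the original post; both names are my assumptions
obs  <- asin(sqrt(response))
pred <- fitted(fit)
ok   <- complete.cases(obs, pred)
cor(obs[ok], pred[ok])^2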

Regards, Lorenz
- 
Lorenz Gygax, Dr. sc. nat.
Centre for proper housing of ruminants and pigs
Swiss Federal Veterinary Office
agroscope FAT Tänikon, CH-8356 Ettenhausen / Switzerland



Re: [R] random effects with lme() -- comparison with lm()

2004-01-16 Thread Douglas Bates
Jerome Asselin [EMAIL PROTECTED] writes:

> On Thu, 2004-01-15 at 16:30, Douglas Bates wrote:
> ...snip...
> > (BTW, I wouldn't say that this is equivalent to a fixed effects
> > model.  It is still a random effects model with two variance
> > components.  It just doesn't have well-defined estimates for those two
> > variance components.)
>
> Agreed.
>
> ...snip...
> > You should find that intervals() applied to your fitted model produces
> > huge intervals on the variance components, which is one way of
> > diagnosing an ill-defined or nearly ill-defined model.
>
> Following your suggestion, I got:
> > intervals(lme(Y~1,data=simdat,random=~1|A))
> Error in intervals.lme(lme(Y ~ 1, data = simdat, random = ~1 | A)) :
>     Cannot get confidence intervals on var-cov components:
>     Non-positive definite approximate variance-covariance
>
> This led me to:
> > lme(Y~1,data=simdat,random=~1|A)$apVar
> [1] Non-positive definite approximate variance-covariance
>
> As a new feature suggestion for lme(), would it be appropriate to use
> apVar as a warning flag in this case?

Certainly.

You may know that we are doing a major revision of the lme
computational methods based on the ability to calculate both the
gradient and the Hessian of the profiled log-likelihood, as described
in
http://www.stat.wisc.edu/~bates/reports/MixedComp.pdf

I think that when we have both the gradient and the Hessian we will be
in a much better situation to diagnose ill-defined estimates.  The
apVar component in the current lme objects is an approximate
variance-covariance matrix from numerical derivatives.  Working with
an exact Hessian should be much more reliable.



[R] random effects with lme() -- comparison with lm()

2004-01-15 Thread Jerome Asselin

Hi all,

In the (very simple) example below, I have defined a random effect for
the residuals in lme(). So the model is equivalent to a FIXED effect
model. Could someone explain to me why lme() still gives two standard
deviation estimates? I would expect lme() to return either:
a) an error or a warning for having an unidentifiable model;
b) only one standard deviation estimate.

Thank you for your time.
Jerome Asselin

> library(nlme)
> simdat <- data.frame(A=1:4,Y=c(23,43,11,34))
> simdat
  A  Y
1 1 23
2 2 43
3 3 11
4 4 34
> lme(Y~1,data=simdat,random=~1|A)
...snip...
Random effects:
 Formula: ~1 | A
        (Intercept) Residual
StdDev:    12.96007 4.860027
...snip...
> summary(lm(Y~1,data=simdat))$sigma
[1] 13.84136
> sqrt(12.96007^2+4.860027^2)
[1] 13.84136



Re: [R] random effects with lme() -- comparison with lm()

2004-01-15 Thread Douglas Bates
Jerome Asselin [EMAIL PROTECTED] writes:

> Hi all,
>
> In the (very simple) example below, I have defined a random effect for
> the residuals in lme(). So the model is equivalent to a FIXED effect
> model. Could someone explain to me why lme() still gives two standard
> deviation estimates? I would expect lme() to return either:
> a) an error or a warning for having an unidentifiable model;
> b) only one standard deviation estimate.
>
> Thank you for your time.
> Jerome Asselin
>
> > library(nlme)
> > simdat <- data.frame(A=1:4,Y=c(23,43,11,34))
> > simdat
>   A  Y
> 1 1 23
> 2 2 43
> 3 3 11
> 4 4 34
> > lme(Y~1,data=simdat,random=~1|A)
> ...snip...
> Random effects:
>  Formula: ~1 | A
>         (Intercept) Residual
> StdDev:    12.96007 4.860027
> ...snip...
> > summary(lm(Y~1,data=simdat))$sigma
> [1] 13.84136
> > sqrt(12.96007^2+4.860027^2)
> [1] 13.84136

The estimates from lme are REML estimates because, as you have seen,
the sum of the estimated variances is correct and in this case only
the sum is well-defined.  Of course there are an infinite number of
other possible REML estimates and that situation is not flagged.
(BTW, I wouldn't say that this is equivalent to a fixed effects
model.  It is still a random effects model with two variance
components.  It just doesn't have well-defined estimates for those two
variance components.)

What has happened is that lme set up the optimization problem and
passed it off to one of the optimizer functions which came up with
converged estimates according to some convergence criterion.  In a
simple situation like this it is easy to determine that the estimates
are not well defined.  However, lme is designed to handle very general
model specifications and catching situations where estimates are not
well defined in the general case is difficult.  So the way we designed
lme is to take the estimates from the optimizer without doing further
analysis to see if they are consistent.

You should find that intervals() applied to your fitted model produces
huge intervals on the variance components, which is one way of
diagnosing an ill-defined or nearly ill-defined model.
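
For example (a sketch using your simdat example; the object name fit is just
a placeholder, and try() is used because the call can fail outright for such
a model):

library(nlme)
simdat <- data.frame(A = 1:4, Y = c(23, 43, 11, 34))

fit <- lme(Y ~ 1, data = simdat, random = ~ 1 | A)

## wide or failing confidence intervals on the variance components are a
## sign that the model is ill-defined
ci <- try(intervals(fit, which = "var-cov"), silent = TRUE)
if (inherits(ci, "try-error")) {
    cat("intervals() could not be computed -- a warning sign in itself\n")
} else {
    print(ci)
}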



Re: [R] random effects with lme() -- comparison with lm()

2004-01-15 Thread Jerome Asselin
On Thu, 2004-01-15 at 16:30, Douglas Bates wrote:
...snip...
> (BTW, I wouldn't say that this is equivalent to a fixed effects
> model.  It is still a random effects model with two variance
> components.  It just doesn't have well-defined estimates for those two
> variance components.)

Agreed.

...snip...
> You should find that intervals() applied to your fitted model produces
> huge intervals on the variance components, which is one way of
> diagnosing an ill-defined or nearly ill-defined model.

Following your suggestion, I got:
> intervals(lme(Y~1,data=simdat,random=~1|A))
Error in intervals.lme(lme(Y ~ 1, data = simdat, random = ~1 | A)) :
    Cannot get confidence intervals on var-cov components:
    Non-positive definite approximate variance-covariance

This led me to:
> lme(Y~1,data=simdat,random=~1|A)$apVar
[1] Non-positive definite approximate variance-covariance

As a new feature suggestion for lme(), would it be appropriate to use
apVar as a warning flag in this case?
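
Something along these lines, perhaps (only a sketch; the helper name is made
up and it is not part of nlme):

library(nlme)

## hypothetical helper: when lme() cannot compute the approximate
## variance-covariance of the variance components, it stores a character
## message in the apVar component, which could be turned into a warning
checkApVar <- function(fit) {
    if (is.character(fit$apVar))
        warning("lme: ", fit$apVar,
                " -- variance component estimates may be ill-defined")
    invisible(fit)
}

simdat <- data.frame(A = 1:4, Y = c(23, 43, 11, 34))
checkApVar(lme(Y ~ 1, data = simdat, random = ~ 1 | A))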

Sincerely,
Jerome Asselin



[R] Random effects in LME

2003-07-07 Thread Robert Tanton
I have a model which has 3 levels (Year in which the indexes are calculated
(i); State (j); and Year for which data were available (k)).

I want a random effect for the Year the Indexes are calculated within State
(i,j). I normally use MLWin, and it's simple: I specify the three levels,
then introduce a constant, and in the drop-down box for the constant I have
3 check boxes; I click on j (State), and voila, a random effect u(ij). Note
that this is the only random effect I want.

I haven't found how to do this in R. I have tried:

mlcgc1 <- lme(Factors ~ 1, data=cgc, random=~1+1+0|YOA/State/YOR)

but this introduces a random effect for all levels; I have tried

mlcgc2 <- lme(Factors ~ 1,data=cgc,random = list(YOA=~1,State=~1))

but this gives me a 2 level model. So I tried:

mlcgc3 <- lme(Factors ~ 1,data=cgc,random = list(YOA=~1,State=~1,YOR=~0))

but this still gives me random effects for all 3 levels.

Any ideas?

Thanks.

Robert Tanton.
