Re: [R] Homogeneity of regression slopes

2010-09-15 Thread Doug Adams
That's good insight, and it gives me some good ideas for what direction
to take this.  Thanks, everyone!

Doug

P.S. - I guess if you have a significant interaction, that implies the
slopes of the individual regression lines are significantly different
anyway, doesn't it...
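Exactly so: the homogeneity-of-slopes question reduces to testing an x-by-group interaction in a single lm() fit. A minimal sketch with simulated data (all object names here are invented for illustration, not from the thread):

```r
set.seed(1)
dat <- data.frame(g = factor(rep(1:3, each = 30)), x = rnorm(90))
dat$y <- 2 + c(1.0, 1.5, 1.0)[dat$g] * dat$x + rnorm(90)

fit.common <- lm(y ~ x + g, data = dat)  # one common slope, group intercepts
fit.sep    <- lm(y ~ x * g, data = dat)  # a separate slope per group

anova(fit.common, fit.sep)  # F test: do the three slopes differ?
```

A significant F here is exactly the "slopes are significantly different" conclusion.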



On Tue, Sep 14, 2010 at 11:33 AM, Thomas Stewart tgstew...@gmail.com wrote:
 If you are interested in exploring the homogeneity of variance assumption,
 I would suggest you model the variance explicitly.  Doing so allows you to
 compare the homogeneous variance model to the heterogeneous variance model
 within a nested model framework.  In that framework, you'll have likelihood
 ratio tests, etc.
 This is why I suggested the nlme package and the gls function.  The gls
 function allows you to model the variance.
 -tgs
 P.S. WLS is a type of GLS.
 P.P.S. It isn't clear to me how a variance-stabilizing transformation would
 help in this case.
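The nested-model comparison Thomas describes can be sketched with varIdent() in nlme. This is a hedged illustration on simulated data (names invented), not the poster's actual analysis:

```r
library(nlme)

set.seed(1)
dat <- data.frame(g = factor(rep(1:3, each = 30)), x = rnorm(90))
dat$y <- 2 + dat$x + rnorm(90, sd = c(1, 2, 3)[dat$g])

## Same fixed effects in both fits, so the (default REML) likelihoods
## are comparable for testing the variance structure.
m.hom <- gls(y ~ x * g, data = dat)
m.het <- gls(y ~ x * g, data = dat, weights = varIdent(form = ~ 1 | g))

anova(m.hom, m.het)  # likelihood ratio test: one variance vs per-group variances
```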

 On Tue, Sep 14, 2010 at 6:53 AM, Clifford Long gnolff...@gmail.com wrote:

 Hi Thomas,

 Thanks for the additional information.

 Just wondering, and hoping to learn ... would any lack of homogeneity of
 variance (which is what I believe you mean by different stddev estimates) be
 found when performing standard regression diagnostics, such as residual
 plots, Levene's test (or equivalent), etc.?  If so, then would a WLS routine
 or some type of variance stabilizing transformation be useful?

 Again, hoping to learn.  I'll check out the gls() routine in the nlme
 package, as you mentioned.

 Thanks.

 Cliff


 On Mon, Sep 13, 2010 at 10:02 PM, Thomas Stewart tgstew...@gmail.com
 wrote:

 Allow me to add to Michael's and Clifford's responses.

 If you fit the same regression model for each group, then you are also
 fitting a standard deviation parameter for each model.  The solution
 proposed by Michael and Clifford is a good one, but the solution assumes
 that the standard deviation parameter is the same for all three models.

 You may want to consider the degree by which the standard deviation
 estimates differ for the three separate models.  If they differ wildly, the
 method described by Michael and Clifford may not be the best.  Rather, you
 may want to consider gls() in the nlme package to explicitly allow the
 variance parameters to vary.

 -tgs

 On Mon, Sep 13, 2010 at 4:52 PM, Doug Adams f...@gmx.com wrote:

  Hello,
 
  We've got a dataset with several variables, one of which we're using
  to split the data into 3 smaller subsets (as the variable takes 1 of
  3 possible values).
 
  There are several more variables too, many of which we're using to fit
  regression models using lm.  So I have 3 models fitted (one for each
  subset of course), each having slope estimates for the predictor
  variables.
 
  What we want to find out, though, is whether or not the overall slopes
  for the 3 regression lines are significantly different from each
  other.  Is there a way, in R, to calculate the overall slope of each
  line, and test whether there's homogeneity of regression slopes?  (Am
  I using that phrase in the right context -- comparing the slopes of
  more than one regression line rather than the slopes of the predictors
  within the same fit?)
 
  I hope that makes sense.  We really wanted to see if the predicted
  values at the ends of the 3 regression lines are significantly
  different... But I'm not sure how to do the Johnson-Neyman procedure
  in R, so I think testing for slope differences will suffice!
 
  Thanks to any who may be able to help!
 
  Doug Adams
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 








[R] Homogeneity of regression slopes

2010-09-13 Thread Doug Adams
Hello,

We've got a dataset with several variables, one of which we're using
to split the data into 3 smaller subsets (as the variable takes 1 of
3 possible values).

There are several more variables too, many of which we're using to fit
regression models using lm.  So I have 3 models fitted (one for each
subset of course), each having slope estimates for the predictor
variables.

What we want to find out, though, is whether or not the overall slopes
for the 3 regression lines are significantly different from each
other.  Is there a way, in R, to calculate the overall slope of each
line, and test whether there's homogeneity of regression slopes?  (Am
I using that phrase in the right context -- comparing the slopes of
more than one regression line rather than the slopes of the predictors
within the same fit?)

I hope that makes sense.  We really wanted to see if the predicted
values at the ends of the 3 regression lines are significantly
different... But I'm not sure how to do the Johnson-Neyman procedure
in R, so I think testing for slope differences will suffice!

Thanks to any who may be able to help!

Doug Adams



[R] aov - subjects nested within groups crossed with questions

2010-06-22 Thread Doug Adams
Hello,

I was going to use lmer() on this data, but it seemed easier -- and
more importantly, more meaningful -- to just analyze smaller sections
of it individually.  I'd like to ask for help to see if I'm analyzing
the separate parts correctly.  Each part is the same, and they all
look like this:

____________________________________________________________
            question 1   question 2   question 3   question 4
group 1
subject 1        #            #            #            #
subject 2        #            #            #            #
subject 3        #            #            #            #
subject 4        #            #            #            #
subject 5        #            #            #            #
subject 6        #            #            #            #
subject 7        #            #            #            #
subject 8        #            #            #            #
subject 9        #            #            #            #
subject 10       #            #            #            #
group 2
subject 11       #            #            #            #
subject 12       #            #            #            #
subject 13       #            #            #            #
subject 14       #            #            #            #
subject 15       #            #            #            #
subject 16       #            #            #            #
subject 17       #            #            #            #
subject 18       #            #            #            #
subject 19       #            #            #            #
subject 20       #            #            #            #
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

This is the call to aov I used:
aov(response ~ group * question + Error(person/question), data)

...and this is the output I get:

____________________________________________________________
Error: person
          Df Sum Sq Mean Sq F value   Pr(>F)
group      1  6.086  6.0860  7.2069 0.008867 **
question   1  0.720  0.7199  0.8525 0.358696
Residuals 78 65.869  0.8445
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: person:question
               Df Sum Sq Mean Sq F value    Pr(>F)
question        1 18.014 18.0144 24.7920 3.671e-06 ***
group:question  1  0.004  0.0041  0.0057      0.94
Residuals      79 57.403  0.7266
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: Within
           Df Sum Sq Mean Sq F value Pr(>F)
Residuals 483 390.78 0.80907
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

If I want to test for a group difference, should I look at the F value
for group under "Error: person", or does that section of the output not
take into account the entire variance structure I should be
acknowledging?  Does my aov syntax seem appropriate in the first
place?
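One way to cross-check the aov fit is the same design as a mixed model in lme4, where the group test uses the person-level variation automatically. A hedged sketch: only the variable names follow the aov call above; the data here are simulated for illustration.

```r
library(lme4)

set.seed(1)
dat <- expand.grid(person = factor(1:20), question = factor(1:4))
dat$group <- factor(ifelse(as.integer(dat$person) <= 10, 1, 2))
dat$response <- rnorm(nrow(dat))

# Random intercept per person; group and question as fixed effects
fit <- lmer(response ~ group * question + (1 | person), data = dat)
anova(fit)  # F statistics; denominator df need e.g. the lmerTest package
```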
Thanks everyone very much for any help you can give,

Doug Adams

Re: [R] lmer, mcmcsamp, coda, HPDinterval

2010-02-01 Thread Doug Adams

Ah, that did it.  Thank you!

-
Doug Adams
MStat Student
University of Utah



[R] lmer, mcmcsamp, coda, HPDinterval

2010-01-30 Thread Doug Adams

Hi,

I've got a linear mixed model created using lmer:

A6mlm <- lmer(Score ~ division + (1|school), data=Age6m)

(To those of you to whom this model looks familiar, thanks for your patience
with this & my other questions.)  Anyway, I was trying this to look at the
significance of my fixed effects:

A6post <- mcmcsamp(A6mlm, 5)
library(coda)
HPDinterval(A6post)

...but I got this message:

no applicable method for 'HPDinterval' applied to an object of class
"merMCMC"

Should I be coercing A6post to another type, or am I missing other steps
altogether?
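(The fix that "did it" in the follow-up is not quoted in this archive. A plausible reconstruction, based on the coda interface of that lme4 era, is coercing the sample first. This fragment is unverified here, assumes an as.mcmc method existed for merMCMC objects, and is historical: mcmcsamp() has since been removed from lme4.)

```r
library(coda)
HPDinterval(as.mcmc(A6post))  # A6post from the mcmcsamp() call above
```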
Thanks   :)

Doug Adams

-
Doug Adams
MStat Student
University of Utah



Re: [R] Hierarchical Linear Model using lme4's lmer

2010-01-16 Thread Doug Adams

Hehe (about the kitchen sink)
Thanks very much to all three of you.




Douglas Bates-2 wrote:
 
 On Sat, Jan 16, 2010 at 8:20 AM, Walmes Zeviani
 walmeszevi...@hotmail.com wrote:

 Doug,

 It appears you are mixing nlme and lme4 formulation type.
 On nlme library you type

 lme(y~x, random=~1|subject)

 On lme4 library you type

 lmer(y~x+(1|subject))

 You mixed them.

 At your disposal.
 
 Which is what I tell my wife when I am standing by our sink.
 
 Walmes.


 Doug Adams wrote:

 Hi,

 I was wondering:  I've got a dataset where I've got student 'project's
 nested within 'school's, and 'division' (elementary, junior, or
 senior) at the student project level.  (Division is at the student
 level and not nested within schools because some students are
 registered as juniors & others as seniors within the same school.)

 So schools are random, division is fixed, and the student Score is the
 outcome variable.  This is what I've tried:

 lmer(data=Age6m, Score ~ division + (1|school), random=~1 | school)

 Am I on the right track?  Thanks everyone,   :)

 Doug Adams
 MStat Student
 University of Utah
 
 Walmes is correct that this is mixing two formulations of the model.
 It turns out that the model will be fit correctly anyway.  The lmer
 function has a ... argument which will silently swallow the argument
 random = ~ 1|school and ignore it.  Looks like we should add a check
 for specification of a random argument and provide a warning if it is
 present.
 
 
 


-
Doug Adams
MStat Student
University of Utah



[R] Hierarchical Linear Model using lme4's lmer

2010-01-15 Thread Doug Adams
Hi,

I was wondering:  I've got a dataset where I've got student 'project's
nested within 'school's, and 'division' (elementary, junior, or
senior) at the student project level.  (Division is at the student
level and not nested within schools because some students are
registered as juniors & others as seniors within the same school.)

So schools are random, division is fixed, and the student Score is the
outcome variable.  This is what I've tried:

lmer(data=Age6m, Score ~ division + (1|school), random=~1 | school)

Am I on the right track?  Thanks everyone,   :)

Doug Adams
MStat Student
University of Utah
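For the archive: the lme4-only form of the call above (as established in the replies) can be sketched as follows. The data frame here is simulated as a stand-in for Age6m; its structure is invented for illustration.

```r
library(lme4)

set.seed(1)
Age6m <- data.frame(
  school   = factor(rep(1:10, each = 6)),
  division = factor(sample(c("elementary", "junior", "senior"), 60, TRUE)),
  Score    = rnorm(60)
)

# Random intercept per school via (1|school); no separate random= argument
A6mlm <- lmer(Score ~ division + (1 | school), data = Age6m)
summary(A6mlm)
```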
