Re: Regression with repeated measures

2001-03-01 Thread Thom Baguley

Steve Gregorich wrote:
 
 Linear mixed models (aka
 multilevel models, random
 coefficient models, etc) as
 implemented by many software
 products: SAS PROC MIXED,
 MIXREG, MLwiN, HLM, etc.
 
 You might want to look at some
 links on my website
 
 http://sites.netscape.net/segregorich/index.html

There are a few good intros available (some, like Goldstein's and Hox's books,
are also on the web):

Goldstein, H. (1995). Multilevel statistical models. London: Arnold.
Hox, J. J. (1995). Applied multilevel analysis. Amsterdam: TT-Publikaties.
Paterson, L., & Goldstein, H. (1991). New statistical methods for analyzing
social structures: an introduction to multilevel models. British Educational
Research Journal, 17, 387-393.
Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis: an
introduction to basic and advanced multilevel modeling. London: Sage.

The Snijders & Bosker is a very good intro. Kreft & de Leeuw also published an
intro text (though I haven't read it yet).

Thom


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Cronbach's alpha and sample size

2001-03-01 Thread Nicolas Sander


Thank you all for the helpful answers.
I had the problem of obtaining negative alphas when some subjects were
excluded from the analyses (three out of ten). When they were included, I
had alphas of .65 to .75 (N items = 60). The problem is, as I suspect,
that the average interitem correlation is very low and drops below zero
when these subjects are excluded.
It might interest you that I'm used to negative correlations in the
correlation matrix because I work with difference scores of reaction time
measures (so there is no directional coding problem). Lots of repeated
measures ensure high consistency despite low average interitem
correlations and despite some negative correlations between individual
measures.
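For anyone who wants to check this numerically, here is a minimal sketch of Cronbach's alpha (the function name and the toy data are mine, not from the thread). When the items correlate negatively on average, the variance of the total score falls below the sum of the item variances and alpha goes negative:

```python
# Minimal sketch: Cronbach's alpha from a subjects-by-items score table.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
from statistics import variance

def cronbach_alpha(scores):
    """scores: list of per-subject lists, one value per item."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variances
    total_var = variance([sum(row) for row in scores])   # variance of sum score
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Two perfectly aligned items give alpha = 1; make the second item run
# against the first and alpha drops below zero.
print(cronbach_alpha([[1, 2], [2, 3], [3, 4], [4, 5]]))    # positive
print(cronbach_alpha([[1, -1], [2, -3], [3, -2], [4, -4]]))  # negative
```

Dropping subjects can flip the sign of the average interitem covariance in exactly this way, which is consistent with what Nico describes.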

Nico
--





Re: Regression with repeated measures

2001-03-01 Thread Thom Baguley

Donald Burrill wrote:
 Probably the best approach is the multilevel (aka hierarchical) modelling
 advocated by previous respondents.  Possible problems with that approach:
 (1) you'll need purpose-built software, which may not be conveniently
 available at USD;  (2) the user is usually required (as I rather vaguely

Very good point.

 recall from a brush with Goldstein's ML3 a decade ago) to specify which
 (co)variances are to be estimated in the model, both within and between
 levels, and if your student isn't up to this degree of technical skill,
 (s)he may not have a clue as to what the output will be trying to say.

MLwiN is much easier to use (though it does require good knowledge of standard
GLM regression equations). The default is just to model the variance at each
level, though adding variance parameters is very easy. I'd love to have a
standard GLM program with the same interface (adding and deleting terms from a
visual representation of the regression equation).

I agree that in lots of cases multilevel modeling may be the "ideal" choice
but is not sensible in practice (e.g. for sample size reasons, or in some
teaching contexts).

For some problems, a multilevel model is not required at all. Treating
repeated observations as independent inflates N. It may be sufficient (depending on
what effects you want to estimate) just to correct N to reflect this design
effect. Snijders and Bosker's book is pretty lucid on this (pp. 16-24).
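The correction can be sketched in a few lines. This is the standard design-effect formula for two-level data with equal cluster sizes (function and variable names are mine): m observations per subject with intraclass correlation rho.

```python
# Design effect for m repeated observations per subject with intraclass
# correlation rho: deff = 1 + (m - 1) * rho.  The effective sample size
# is the nominal N deflated by this factor.
def effective_sample_size(n_total, m, rho):
    deff = 1 + (m - 1) * rho
    return n_total / deff

# e.g. 200 observations, 10 per subject, rho = 0.5 -> deff = 5.5,
# so the 200 observations carry roughly the information of 36 independent ones.
print(effective_sample_size(200, 10, 0.5))
```

At rho = 0 the correction vanishes (deff = 1), and at rho = 1 each subject effectively contributes a single observation.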

Thom





comparing multiple correlated correlations

2001-03-01 Thread Allyson Rosen

OK, here's another question from a newbie. In this small sample of 14
subjects, I wanted to compare several correlated correlations: individuals'
brain volumes correlated with a measure of memory performance.
Specifically, I wanted to say that 1 correlation is stronger than the other
3. There's lots out there on comparing just 2 correlations, but I wanted to
compare all 4 at once.

The most appropriate article I found was by Olkin and Finn
 I. Olkin and J. Finn. Testing correlated correlations. Psychological
Bulletin 108(2):330-333, 1990.

The problem is that they assume huge sample sizes. I consulted with a
statistician, and she suggested a jackknife procedure in which I set up the
following comparison:
r1 - average(r2, r3, r4)
I iteratively remove each subject, calculate this comparison, and take the
difference of that output from the total-group comparison,
i.e. r1 - average(r2, r3, r4) without subject 1 included, r1 - average(r2, r3, r4)
without subject 2 included, ... and generate the difference of each of these
scores from the total scores.
Finally, I generate a confidence interval. If that confidence interval does
not include zero, then the comparison is significant.
It worked, and now I want to cite an appropriate source in the paper. Is
there a good reference on similar jackknife procedures? I found this in
the SPSS appendix.

 M. H. Quenouville. Approximate tests of correlation in time series. Journal
of the Royal Statistical Society, Series B 11:68, 1949
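The leave-one-out procedure described above can be sketched as follows. This is a rough illustration, not Allyson's actual analysis: the variable names are mine, I assume each subject contributes one memory score y and four regional volumes, and I use a normal 1.96 quantile (with n = 14, a t quantile would be more defensible):

```python
# Jackknife CI for the contrast d = r1 - average(r2, r3, r4), where r_k is
# the Pearson correlation between memory score y and brain volume x_k.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def contrast(y, xs):
    # d = r1 - average(r2, r3, r4)
    rs = [pearson_r(x, y) for x in xs]
    return rs[0] - sum(rs[1:]) / len(rs[1:])

def jackknife_ci(y, xs, z=1.96):
    n = len(y)
    # leave-one-out replicates of the contrast
    reps = [contrast(y[:i] + y[i + 1:],
                     [x[:i] + x[i + 1:] for x in xs])
            for i in range(n)]
    mean_rep = sum(reps) / n
    # jackknife standard error of the contrast
    se = sqrt((n - 1) / n * sum((r - mean_rep) ** 2 for r in reps))
    d = contrast(y, xs)
    return d - z * se, d + z * se
```

If the resulting interval excludes zero, the first correlation differs from the average of the other three at (approximately) the chosen level.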

Many thanks,

Allyson







ANN: Book: Causation, Prediction, and Search

2001-03-01 Thread wolfskil

I thought readers of sci.stat.edu might be interested in this book.  For
more information please visit http://mitpress.mit.edu/promotions/books/SPICHF00.

Best,
Jud

Causation, Prediction, and Search
second edition
Peter Spirtes, Clark Glymour, and Richard Scheines

What assumptions and methods allow us to turn observations into causal
knowledge, and how can even incomplete causal knowledge be used in
planning and prediction to influence and control our environment? In
this book Peter Spirtes, Clark Glymour, and Richard Scheines address
these questions using the formalism of Bayes networks, with results that
have been applied in diverse areas of research in the social,
behavioral, and physical sciences.

The authors show that although experimental and observational study
designs may not always permit the same inferences, they are subject to
uniform principles. They axiomatize the connection between causal
structure and probabilistic independence, explore several varieties of
causal indistinguishability, formulate a theory of manipulation, and
develop asymptotically reliable procedures for searching over
equivalence classes of causal models, including models of categorical
data and structural equation models with and without latent variables.

The authors show that the relationship between causality and probability
can also help to clarify such diverse topics in statistics as the
comparative power of experimentation versus observation, Simpson's
paradox, errors in regression models, retrospective versus prospective
sampling, and variable selection.

The second edition contains a new introduction and an extensive survey
of advances and applications that have appeared since the first edition
was published in 1993. 

Peter Spirtes is Professor of Philosophy at the Center for Automated
Learning and Discovery, Carnegie Mellon University. Clark Glymour is
Alumni University Professor of Philosophy at Carnegie Mellon University
and Valtz Family Professor of Philosophy at the University of
California, San Diego. He is also Distinguished External Member of the
Center for Human and Machine Cognition at the University of West
Florida, and Adjunct Professor of Philosophy of History and Philosophy
of Science at the University of Pittsburgh. Richard Scheines is
Associate Professor of Philosophy at the Center for Automated Learning
and Discovery, and at the Human Computer Interaction Institute, Carnegie
Mellon University.

7 x 9, 496 pp., 225 illus., cloth ISBN 0-262-19440-6
Adaptive Computation and Machine Learning series
A Bradford Book

