Re: Cronbach's alpha and sample size

2001-03-06 Thread Nicolas Sander

Dear Gregor,

thank you very much for your comments.

Since this may be of general interest, I am also posting this message to
sci.stat.edu.

"Gregor Socan" [EMAIL PROTECTED] wrote

You obviously should not use coefficient alpha. First, if an exclusion of 
some people causes such a change, then alpha is too instable to be 
interpreted (and your sample is extremely small indeed).

The sample size is indeed a problem, but not only with respect to the
calculation of coefficient alpha: ANY reliability estimate will
potentially suffer from such a small N.

> Maybe the phenomenon is very stable within persons, but obviously not
> so much among persons.

I'm interested firstly in the variance accounted for by the true score of
an 'underlying construct' and secondly in its temporal stability. So the
calculation of Cronbach's alpha (or one of its variants) seems to be clearly
indicated.

> Second, negative correlations are indicators of very serious violations
> of assumptions on which alpha is based.

I cannot see that some negative correlations among items violate any
assumptions of the calculation of alpha. These negative correlations are
due to error of measurement and/or sampling. Thus, they represent some
amount of variance which cannot be accounted for by true score variance.
If the mere presence of error ruled out the calculation of alpha, we
would never obtain alphas lower than 1. But of course I acknowledge the
problem that alpha is not interpretable when the average inter-item
correlation drops below zero (thus leading to a negative alpha).
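To make the last point concrete, here is a minimal sketch (Python with NumPy,
using made-up data; the function name is my own) of how alpha is computed from
a subjects-by-items matrix. Since the variance of the sum score equals the sum
of the item variances plus twice the sum of the inter-item covariances, alpha
drops below zero exactly when the average inter-item covariance is negative:

import numpy as np

def cronbach_alpha(scores):
    # Cronbach's alpha for a subjects x items score matrix:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Items sharing a common factor: positive covariances, positive alpha.
consistent = rng.normal(size=(10, 4)) + rng.normal(size=(10, 1))
print(cronbach_alpha(consistent))
# Pure noise items: covariances scatter around zero, and with N = 10 the
# average covariance (and hence alpha) can easily come out negative.
noise = rng.normal(size=(10, 4))
print(cronbach_alpha(noise))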

> Difference scores are also very problematic from a reliability point of
> view. Have you read some psychometric literature like Lord and Novick
> (1968)?

I cannot see any inherent problem with assessing the reliability of
difference scores. To my mind, it is misleading to deduce some a priori
unreliability of difference scores when true score theory is applied as
Gulliksen (1950) or Lord and Novick (1968) put it. Under the
assumptions of strictly parallel tests (equal means, equal standard
deviations, etc.), a reliability estimate of difference scores cannot be
different from zero unless the assumption of equal means is violated!

Why do we use tests instead of single items? Because aggregating items
into a test score boosts the true score variance and dampens the error
variance (as long as the items are replications of the measured
construct). To my mind, I can enhance the true score variance of
difference scores in the same way. The only difference from the
estimation of 'usual' absolute scores is the need for twice as many
replications of the 'absolute' measures in order to obtain an equal
level of reliability (compared to the 'absolute' constituents of a
difference score). Thus, the above paradox can easily be circumvented.
For more details you might want to refer to:
Wittmann, W. W. (1988). Multivariate reliability theory: Principles of
symmetry and successful validation strategies. In J. R. Nesselroade & R.
B. Cattell (Eds.), Handbook of multivariate experimental psychology (2nd
ed., pp. 505-560). New York: Plenum.
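To illustrate the aggregation argument with numbers (a sketch only: the
formulas are the standard classical ones, but the reliabilities and the
correlation below are invented, and the observed correlation between the
constituents is held fixed for simplicity):

def spearman_brown(rel, k):
    # Reliability of a test lengthened by a factor k (Spearman-Brown).
    return k * rel / (1 + (k - 1) * rel)

def difference_reliability(rel_x, rel_y, r_xy, sd_x=1.0, sd_y=1.0):
    # Classical reliability of D = X - Y from the constituents' reliabilities,
    # their observed correlation, and their standard deviations.
    num = sd_x**2 * rel_x + sd_y**2 * rel_y - 2 * sd_x * sd_y * r_xy
    den = sd_x**2 + sd_y**2 - 2 * sd_x * sd_y * r_xy
    return num / den

# Constituents measured with reliability .70 and correlated .50:
print(difference_reliability(0.70, 0.70, 0.50))    # ~0.40, the usual 'paradox'

# Doubling the number of replications of each constituent raises its
# reliability and, with it, the reliability of the difference score.
rel2 = spearman_brown(0.70, 2)                     # ~0.82
print(difference_reliability(rel2, rel2, 0.50))    # ~0.65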

> Negative correlations mean that either your measure is worthless or that
> you are using a wrong method to assess the quality of your measurements.

If you have single measures with only a small portion of true score
variance, I find it not surprising to obtain some negative correlations
by chance. And I don't see any other method of assessing the reliability
if I want to focus on the true score variance of an underlying construct.
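A quick simulation makes this plausible (a sketch assuming a one-factor model
with an invented loading, and a sample of seven subjects and 60 items, roughly
the situation described in the post further down): even though every
population inter-item correlation is positive, a sizeable share of the sample
correlations comes out negative.

import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_items, loading = 7, 60, 0.3   # population inter-item r is about .08
shares = []
for _ in range(200):
    factor = rng.normal(size=(n_subjects, 1))
    items = loading * factor + rng.normal(size=(n_subjects, n_items))
    r = np.corrcoef(items, rowvar=False)           # 60 x 60 sample correlation matrix
    off_diag = r[np.triu_indices(n_items, k=1)]    # the pairwise inter-item correlations
    shares.append(np.mean(off_diag < 0))

# Purely by sampling error, a large fraction of the pairwise correlations is
# negative even though all population correlations are positive.
print(f"average share of negative sample correlations: {np.mean(shares):.2f}")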

Please point me to any fallacies or misunderstandings I may have
displayed.

Nico





Re: Cronbach's alpha and sample size

2001-03-01 Thread Nicolas Sander


Thank you all for the helpful answers.
I had the problem of obtaining negative alphas when some subjects were
excluded from the analyses (three out of ten). When they were included, I
had alphas of .65 to .75 (N items = 60). The problem is - as I suspect -
that the average inter-item correlation is very low and drops below zero
when these subjects are excluded.
It might interest you that I'm used to negative correlations in the
correlation matrix because I work with difference scores of reaction time
measures (so there is no directional coding problem). Lots of repeated
measures ensure high consistency despite low average inter-item
correlations and despite some negative correlations between individual
measures.
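For anyone who wants to check how unstable alpha is with ten subjects, here is
a minimal sketch of a leave-three-out sensitivity check (Python with NumPy and
itertools; the data matrix below is a random placeholder standing in for the
real 10 x 60 matrix of difference scores):

import itertools
import numpy as np

def cronbach_alpha(scores):
    # Cronbach's alpha for a subjects x items score matrix.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
scores = 0.3 * rng.normal(size=(10, 1)) + rng.normal(size=(10, 60))  # placeholder data

alphas = [cronbach_alpha(scores[list(kept), :])
          for kept in itertools.combinations(range(10), 7)]  # drop every set of 3 subjects

print(f"full sample:       {cronbach_alpha(scores):.2f}")
print(f"leave-3-out range: {min(alphas):.2f} to {max(alphas):.2f}")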

Nico
--





Re: Cronbach's alpha and sample size

2001-02-28 Thread Rich Ulrich

On Wed, 28 Feb 2001 12:08:55 +0100, Nicolas Sander
[EMAIL PROTECTED] wrote:

> How is Cronbach's alpha affected by the sample size, apart from questions
> related to generalizability issues?

 - apart from generalizability, "not at all."

> I find it hard to trace down the mathematics related to this question
> clearly, and whether there might be a trade-off between the number of items
> and the number of subjects (i.e. compensating for a lack of subjects by a
> high number of items).

I don't know what you mean by 'trade-off.' I have trouble trying to
imagine just what it is that you are trying to trace down.
But, NO.

Once you assume some variances are equal, alpha can be seen
as a fairly simple function of the number of items and the average
correlation -- more items, higher alpha. The average correlation has
a tiny bias that depends on N, but that is typically, and safely, ignored.
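A minimal sketch of that relationship (the standardized-alpha form; the
particular numbers are only illustrative):

def standardized_alpha(k, mean_r):
    # Standardized alpha as a function of the number of items k and the
    # average inter-item correlation (Spearman-Brown form).
    return k * mean_r / (1 + (k - 1) * mean_r)

# More items push alpha up even when the average correlation is modest ...
print(standardized_alpha(10, 0.10))   # ~0.53
print(standardized_alpha(60, 0.10))   # ~0.87
# ... and a negative average correlation drags alpha below zero.
print(standardized_alpha(60, -0.01))  # negative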

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html

