On 23 Mar 2001 02:53:11 GMT, John Uebersax <[EMAIL PROTECTED]>
wrote:
> Paul's comment is very apt. It is very important to consider whether
> a consistent error should or should not count against reliability.
> In some cases, a constant positive or negative bias should not matter.
- If you have a choice, you design your experiment so that a constant
bias will not matter: assays may be run in large batches, or the
same rater may be assigned for both the Pre and Post assessments.
> For example, one might be willing to standardize each measure before
> using it in statistical analysis. The standardization would then
> remove differences due to a constant bias (as well as differences
> associated with a different variance for each measure/rating).
So that rater A's BPRS rating of a patient is divided by something,
to make it comparable to rater B's rating? That sounds hard to justify.
I agree that, conceivably, raters could want to use a scale
differently. If there is a chance of that, then before you start the
study, you train the raters to use the scale the same way.
Standardizing for variance like that, between *raters*, is
something I don't remember doing. I do standardize by the
observed SD of a variable when I create a composite score
across several variables.
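
Here is a minimal numeric sketch (Python, not from the original thread)
of the two standardizations discussed above: z-scoring within rater,
which makes a constant bias disappear without changing the correlation,
and z-scoring each variable by its observed SD before averaging into a
composite. The ratings and the zscore helper are invented for
illustration.

import numpy as np

def zscore(x):
    """Standardize to mean 0, SD 1 (sample SD, ddof=1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# (1) Two raters scoring the same five patients; rater B runs 3 points high.
rater_a = np.array([12, 18, 25, 30, 22], dtype=float)
rater_b = rater_a + 3.0                      # constant positive bias

print(np.corrcoef(rater_a, rater_b)[0, 1])            # 1.0 -- bias invisible to r
print(np.allclose(zscore(rater_a), zscore(rater_b)))  # True -- bias removed

# (2) A composite score across variables on different scales:
#     standardize each variable by its observed SD, then average.
symptom = np.array([12, 18, 25, 30, 22], dtype=float)  # e.g. a symptom rating
lab     = np.array([1.2, 2.5, 3.1, 4.0, 2.8])          # e.g. a lab value
composite = (zscore(symptom) + zscore(lab)) / 2.0
print(composite)
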
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html