On Wed, 27 Mar 2002 21:17:22 +0000 (UTC), "JP"
<Janepeutrell@removethis bit.btinternet.com> wrote:
> Thank you, this does help, although the data I have does not fit either of
> your examples. I have a single candidate's answer sheet to 12 questions (each
> question is scored 1, 2, 3, or 4) which has been marked by 15 different
> examiners. I wish to have a single number to assess overall inter-examiner
> agreement. I had thought that interclass correlation was the correct
> technique, but was told I should be using intraclass correlation instead,
> and have been unable to find a convincing explanation ever since.
> Ian Kestin

Oh! Well, you are right.
You certainly do not have the data for computing a correlation,
either the usual intraclass or an interclass one.
Without a *sample* to represent a *range* of traits,
you are limited, without a doubt, to describing
deviations rather than similarity.

You can describe how much the raters vary on a
question, say, as the standard deviation of their
responses. Or you can use their range.
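For instance, with made-up marks for the 15 examiners on one
question (the numbers here are purely hypothetical), the two
summaries above are a couple of lines in Python:

```python
import statistics

# Hypothetical data: 15 examiners' marks (each 1-4) on one question.
marks = [3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3]

sd = statistics.stdev(marks)       # sample SD of the raters' marks
spread = max(marks) - min(marks)   # range of the raters' marks

print(f"SD = {sd:.2f}, range = {spread}")
```

Repeating this for each of the 12 questions gives 12 descriptive
numbers, one per question, rather than a single agreement index.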

You could get a single number across the 12 questions,
computed with a correlation formula. It would not be
a Pearson r, though, if 'r' is a reference to something
with a known statistical distribution. That would be
something that falls into the class of 'profile analysis'.
It would be somewhat pointless or weird to compute it if
you didn't have a context and an a-priori reason for it.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
