[EMAIL PROTECTED] (Dale McLerran) wrote in message

> So why not compute an intraclass correlation (IC) instead? The IC
> can be computed quite easily using PROC MIXED. Also, IC does not
> require that every rater evaluate each subject. It does not require
> a consistent set of raters, though it can be computed with and
> without a consistent rater set...
Dale gives good reasons above to consider the intraclass correlation (ICC)
instead of weighted kappa. To these one can add that the ICC approach:

1. can predict the reliability of mean ratings based on two or more raters.
2. can distinguish between various designs and the related inferences--e.g.,
   are the raters in the study the entire population of raters, or merely a
   sample?

> ...is there any problem in using these IC values instead of a kappa?

No problems that I can see, except in the case of the one measure that is
purely nominal. For the ordered-categorical measures, if the categories are
not equally wide, they could be assigned a priori numeric levels to reflect
that--e.g., (1, 2.3, 4) instead of (1, 2, 3).

SAS has a macro, intracc.sas, to calculate the ICC. For more info, see:

http://ftp.sas.com/techsup/download/stat/intracce.html

Related links:

http://www.missouri.edu/~marc/icc_text.htm
http://luna.cas.usf.edu/~mbrannic/files/pmet/shrout1.htm

-------------------------------------------------------------------------------
John Uebersax, PhD                       (858) 597-5571
La Jolla, California                     (858) 625-0155 (fax)
email: [EMAIL PROTECTED]
Statistics: http://ourworld.compuserve.com/homepages/jsuebersax/agree.htm
Psychology: http://members.aol.com/spiritualpsych
-------------------------------------------------------------------------------
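P.S. As a rough sketch of the PROC MIXED approach Dale describes (the data
set and variable names here are hypothetical -- one row per subject-rater
pair, with the numeric rating in y), a two-way random-effects model gives
the variance components from which the ICC is formed:

```sas
/* Hypothetical long-format data: variables subject, rater, y */
proc mixed data=ratings covtest;
  class subject rater;
  model y = ;               /* intercept-only fixed part      */
  random subject rater;     /* two-way random effects         */
run;

/* From the covariance-parameter estimates printed above,
   ICC(2,1) = var(subject) /
              (var(subject) + var(rater) + var(residual))     */
```

For a one-way design, where each subject may be rated by a different set of
raters, drop rater from the RANDOM statement; the ICC then reduces to
var(subject) / (var(subject) + var(residual)). The intracc.sas macro
mentioned above computes the Shrout-Fleiss variants directly.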