I am in the process of determining inter- and intrarater reliabilities for a
70-category coding procedure with 3 different coders.

We are using Cohen's kappa to determine the reliability among the
coders (a more stringent standard?) - we have not finished yet.
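
For what it's worth, here is a minimal Python sketch of Cohen's kappa for two
coders rating the same items into nominal categories (the function and variable
names are my own, not from any particular package): it takes the observed
proportion of agreement and subtracts the agreement you would expect by chance
from each coder's marginal category frequencies.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two coders' category labels."""
        n = len(labels_a)
        # observed proportion of agreement
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # agreement expected by chance, from each coder's marginal frequencies
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a)
        return (p_o - p_e) / (1 - p_e)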

A simple interrater agreement ratio will probably give you a higher score
(number of agreements divided by the total of agreements plus disagreements),
since it makes no correction for chance agreement.
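
And the simple ratio for comparison - on the same made-up labels it comes out
higher than the kappa sketch above, because nothing is subtracted for chance
agreement:

    def percent_agreement(labels_a, labels_b):
        """Raw agreement ratio: agreements / (agreements + disagreements)."""
        return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

    # made-up ratings from two coders on the same 8 items
    coder1 = ["A", "B", "B", "C", "A", "C", "B", "A"]
    coder2 = ["A", "B", "C", "C", "A", "B", "B", "A"]

    print(percent_agreement(coder1, coder2))  # 0.75
    print(cohens_kappa(coder1, coder2))       # about 0.62 once chance agreement is removed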
 

In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Allen E Cornelius) wrote:

>     I have an interrater reliability dilemma.  We are examining a 3-item
>scale (each item scored 1 to 5) used to rate compliance behavior of
>patients.  Two separate raters have used the scale to rate patients'
>behavior, and we now want to calculate the interrater agreement for the
>scale.  Two problems:
-- 
Robert Kujda  Purdue University
Division of Art and Design - Department of Visual and Performing Arts
Curriculum and Instruction CA-1 West Lafayette, IN 47907
765 494-3058  [EMAIL PROTECTED]
have an Aesthetic Experience 
                                /\__/|
             (\      ____ \   ,   , \______ _ 
               \\_/ __     =  _ o= _____ o/ /o
