Sorry if this has come up before, but here goes.
Is there a way I can compare kappa values? The background is as
follows:
Four physicians have each coded 100 surgical notes.
Each physician has coded each surgical note using all four
different classifications (thus coding the same note in four
different ways).
The classifications have differing numbers of categories (one has
8, one has 10, one has 16, and so on).
I've calculated the degree of agreement within each classification
using generalized kappa. How can I compare these values? I'm not
an experienced statistician, so I'm rather lost here.
I've looked at Fleiss and Haas, but they don't seem to address
this issue.
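Not the original poster's code, but for concreteness, here is a minimal sketch of how the generalized (Fleiss') kappa described above can be computed, and one common way to compare kappas across classifications: bootstrap a confidence interval for each by resampling the notes, and see whether the intervals overlap. The function names and data layout (an n-notes by k-categories count table per classification) are hypothetical assumptions, not anything from the original message:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' (generalized) kappa.

    counts: (n_subjects x n_categories) array where counts[i, j] is the
    number of raters who assigned subject i to category j. Assumes the
    same number of raters for every subject.
    """
    counts = np.asarray(counts, dtype=float)
    n, k = counts.shape
    m = counts[0].sum()  # raters per subject
    # Proportion of all assignments falling in each category
    p_j = counts.sum(axis=0) / (n * m)
    # Observed agreement for each subject, then averaged
    P_i = (np.square(counts).sum(axis=1) - m) / (m * (m - 1))
    P_bar = P_i.mean()
    # Chance agreement
    P_e = np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

def bootstrap_kappa_ci(counts, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Fleiss' kappa, resampling subjects
    (here: surgical notes) with replacement."""
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    n = counts.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats.append(fleiss_kappa(counts[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

With one count table per classification, one could compute `bootstrap_kappa_ci` for each and compare the resulting intervals; note, though, that kappa's chance-agreement baseline depends on the number of categories and their prevalence, so differences between classifications with 8 vs. 16 categories should be interpreted cautiously.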
/Mats Carlsson
===========================================================================
This list is open to everyone. Occasionally, less thoughtful
people send inappropriate messages. Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.
For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===========================================================================