Here are some comments.

On Thu, 07 Dec 2000 13:51:05 GMT, Jean-Pierre Guay 
<[EMAIL PROTECTED]> wrote:

> Considering that the value of ICC (2,k) is influenced by the number of
> judges, would any of you know a correction that would allow me to
> compare results based on 2 different sets of judges (for example compare
> a set of analyses based on 4 judges with a set based on 7)?

I did not know that the ICC was "influenced by number of judges."  
Does that mean that some particular ICC is a biased estimator?

Does that really hold for the whole family, all six or eight different
coefficients that are most often called upon to estimate the
intraclass correlation?  

And - since the only bias I can readily imagine is one that exists
near zero - is that bias large enough to be noticeable when the
R-squared is large?
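
One sense in which ICC(2,k) does depend on the number of judges is the
well-known Spearman-Brown relation, which steps the single-rater
coefficient ICC(2,1) up to the average of k raters.  A minimal sketch
(the numeric values are hypothetical, not from the original question):

```python
def icc_avg_from_single(icc1: float, k: int) -> float:
    """Spearman-Brown: step a single-rater ICC up to the average of k raters."""
    return k * icc1 / (1 + (k - 1) * icc1)

def icc_single_from_avg(icck: float, k: int) -> float:
    """Invert Spearman-Brown: recover ICC(2,1) from an average-rater ICC(2,k)."""
    return icck / (k - (k - 1) * icck)

# The same underlying single-rater agreement (here 0.50, hypothetical)
# yields different average-rater ICCs for 4 vs. 7 judges:
print(icc_avg_from_single(0.50, 4))   # 0.8
print(icc_avg_from_single(0.50, 7))   # 0.875

# So one crude way to put a 4-judge ICC(2,k) and a 7-judge ICC(2,k)
# on the same footing is to convert both back to ICC(2,1):
print(icc_single_from_avg(0.80, 4))   # 0.5
```

This only addresses the mechanical k-dependence, of course, not the
question of whether a formal comparison of the two coefficients is a
good idea in the first place.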


In any case, it is usually a really bad idea to compare any two
correlation coefficients when you have any choice in the matter; 
the tests have to build in strong assumptions that are awkward to
check.

In the present instance, the most feasible test would require that
one sample was rated by all judges, all rating on exactly the same
scale, with the hypothesis being greater consistency for one set
("type", presumably) of judges.   -- In this hypothesized instance,
there would be 11 raters who might each be compared to their overall
mean, and I think a convincing test could be cobbled together without
reference to a technical ICC.
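
As a rough sketch of what such a cobbled-together comparison might
look like - purely hypothetical data and design, not the poster's
actual method - each of the 11 judges could be correlated with the
overall mean rating, and the per-judge correlations for the two
groups then compared:

```python
import random

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical setup: 4 judges of one type and 7 of the other all rate
# the same 20 subjects on the same scale.  Noise levels are assumed.
random.seed(0)
truth = [random.gauss(50, 10) for _ in range(20)]
judges = [[t + random.gauss(0, 8) for t in truth] for _ in range(4)] + \
         [[t + random.gauss(0, 4) for t in truth] for _ in range(7)]

# Each judge compared to the overall mean across all 11 raters.
overall_mean = [sum(col) / len(judges) for col in zip(*judges)]
r_per_judge = [pearson(j, overall_mean) for j in judges]
print([round(r, 2) for r in r_per_judge])
```

The two groups of per-judge correlations (the first 4 versus the last
7) could then be compared with any simple two-sample procedure; the
point is only that the comparison can be framed at the level of
individual raters, without invoking a technical ICC.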

Hope this helps.
-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
