On Sat, 19 Aug 2000 14:21:00 GMT, Mark Everingham
<[EMAIL PROTECTED]> wrote:
<snip ...>
> I have two classifier systems which take as input an image and produce
> as output a label for each pixel in the image, for example the input
> might be of an outdoor scene, and the labels sky/road/tree etc.
>
> I have a set of images with the correct labels, so I can test how
> accurately a classifier performs by calculating for example the mean
> number of pixels correctly classified per image or the mean number of
> sky pixels correctly classified etc.
>
> The problem is this: Given *two* different classifiers, I want to test
> if the accuracy achieved by each classifier differs *significantly*. One
> way I can think of doing this is:
< snip ...>
Look up McNemar's test in the chapter on 2x2 tables. This is
basically a sign test, and it needs very little in the way of
assumptions: you compare the *differences* in output.
If x1 is the number of pixels where A is right and B is wrong,
and x2 is the number where B is right and A is wrong, then
you test whether x1 = x2. The statistic (x1 - x2)^2 / (x1 + x2)
is (approximately) chi-squared with 1 d.f.
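
In case it helps, here is a minimal sketch of that computation in
Python. The names (pred_a, pred_b, truth) are my own, not anything
from your setup; it assumes the per-pixel labels sit in numpy arrays
of equal shape, and uses scipy only for the p-value.

import numpy as np
from scipy.stats import chi2

def mcnemar(pred_a, pred_b, truth):
    # Pixels each classifier labels correctly
    a_right = (pred_a == truth)
    b_right = (pred_b == truth)
    x1 = np.sum(a_right & ~b_right)  # A right, B wrong
    x2 = np.sum(b_right & ~a_right)  # B right, A wrong
    # (x1 - x2)^2 / (x1 + x2), approximately chi-squared, 1 d.f.
    # Note: this divides by zero if the classifiers never disagree.
    stat = (x1 - x2) ** 2 / (x1 + x2)
    return stat, chi2.sf(stat, 1)  # statistic and p-value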
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html