On 12 Jul 2002 06:14:49 -0700, [EMAIL PROTECTED] (JGR) wrote:

> Hi,
>
> I have the following problem:
> I need to select between test procedures T1 and T2. Previous
> observations show that T1 correctly identified problem P1 in 29 out of
> 30 cases and T2 correctly identified problem P1 in 5 out of 5 cases. I
> want to have a test procedure selection measure, which represents both
> accuracy of the procedure and frequency of previous observations.
Interesting setup for a problem. How much more information is needed in order to quantify a decision? I suppose a Bayesian might say it is obvious that you need to consider the 'prior' distributions. I think (a) you have to quantify the prior expectations, and (b) you also have to figure out how much weight to give them. Bayesian statisticians often do (a), but isn't (b) usually, substantially, ignored?

If the sales pitch (or scientific backing) for T1 had promised that it would be correct in all but 1 case in ten thousand, then T1 must be an awful disappointment; somewhere, its assumptions (may) have failed. If T1 and T2 had both promised the same excellence at the start, then T1 has now obviously failed and T2 has not. A back-of-the-envelope sketch of (a) and (b) follows below the sig.

--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
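To make (a) and (b) concrete, here is a minimal sketch, assuming a Beta-Binomial model that neither JGR nor the reply above actually specified: give each procedure's success rate a shared Beta(a, b) prior, where the ratio of the pseudo-counts a and b encodes the prior expectation and their total encodes how heavily that expectation is weighted against the 29/30 and 5/5 records. The function names and the prior values below are illustrative choices, not anything taken from the thread.

# Rough sketch, not a recommendation: Beta-Binomial posteriors for the
# success rates of T1 (29 of 30) and T2 (5 of 5) under a shared Beta(a, b)
# prior. The pseudo-counts a and b are hypothetical knobs: their ratio is
# the prior expectation (point a above), their total is its weight (point b).
import random

def posterior_mean(successes, trials, a, b):
    # Posterior mean of a binomial success rate under a Beta(a, b) prior.
    return (a + successes) / (a + b + trials)

def prob_t2_beats_t1(a, b, draws=20000):
    # Monte Carlo estimate of P(rate_T2 > rate_T1), sampling both posteriors.
    wins = 0
    for _ in range(draws):
        p1 = random.betavariate(a + 29, b + 1)   # T1: 29 successes, 1 failure
        p2 = random.betavariate(a + 5, b)        # T2:  5 successes, 0 failures
        wins += p2 > p1
    return wins / draws

for a, b in [(1, 1), (50, 1), (9999, 1)]:   # flat prior -> strong "sales pitch" prior
    m1 = posterior_mean(29, 30, a, b)
    m2 = posterior_mean(5, 5, a, b)
    print(f"Beta({a},{b}) prior: T1 mean {m1:.4f}, T2 mean {m2:.4f}, "
          f"P(T2>T1) {prob_t2_beats_t1(a, b):.3f}")

Under the flat Beta(1,1) prior the longer 29/30 record comes out ahead of the short 5/5 one; as the optimistic prior gets heavier, the extra successes barely matter, T1's single observed failure is roughly all that separates the two, and the choice of prior weight, point (b), is doing most of the work.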
