Dear List Members,

Could somebody please advise me on the following subject:

I need to perform a sample size calculation: how many measurements are
necessary to determine the cut-off point with a given precision when
categorising subjects as diseased or well.

I have two well-defined populations (i.e. I have a gold standard), but the
marker can only be measured with a given accuracy, i.e. the observed values are

x_i = x_0i + e_i

where x_i denotes the measured value, x_0i the 'true' value and e_i the random
measurement error. (The latter can be estimated reliably from previous
experiments, assuming normality, which is quite appealing.)
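
(For concreteness, here is a minimal simulation sketch of this measurement
model in Python; the means, SDs and error SD below are purely illustrative
assumptions, not our actual values.)

import numpy as np

rng = np.random.default_rng(1)

# Illustrative (made-up) parameters for the two gold-standard populations
n_well, n_dis = 100, 100
mu_well, mu_dis = 10.0, 20.0       # 'true' marker means
sd_well, sd_dis = 2.0, 2.0         # biological variation
sd_error = 1.0                     # measurement error SD (from previous experiments)

# 'True' values x_0i and measured values x_i = x_0i + e_i
x0_well = rng.normal(mu_well, sd_well, n_well)
x0_dis  = rng.normal(mu_dis, sd_dis, n_dis)
x_well  = x0_well + rng.normal(0.0, sd_error, n_well)
x_dis   = x0_dis  + rng.normal(0.0, sd_error, n_dis)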

I do know the old paper by Mantel and xxx where such calculations were
performed, but they did it
i) at a given sensitivity (specificity), and
ii) without measurement error.

I would like to define the cut-off as the value where the proportion of
correct diagnoses is maximal (something like the Youden index), i.e. I have to
use the observed sensitivity and specificity instead of a predefined value and,
of course, should somehow incorporate the measurement error.
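
(To make precise what I mean by the cut-off, a small sketch: it simply
maximises the observed sensitivity + specificity - 1 over the measured values.
Again purely illustrative, and it assumes diseased subjects tend to have the
higher marker values.)

import numpy as np

def youden_cutoff(x_well, x_dis):
    """Empirical cut-off maximising sensitivity + specificity - 1."""
    candidates = np.sort(np.concatenate([x_well, x_dis]))
    best_c, best_j = candidates[0], -np.inf
    for c in candidates:
        sens = np.mean(x_dis > c)     # observed sensitivity at cut-off c
        spec = np.mean(x_well <= c)   # observed specificity at cut-off c
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

(With the simulated x_well and x_dis from the sketch above, this should give a
cut-off near the midpoint of the two population means.)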

The overall performance of the test is very good (area under the ROC curve is
almost one), but the lab needs the cut-off point for future categorisation
with high precision (i.e. something like a tolerance interval).
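
(The sample size question as I see it, sketched as a simulation using the
youden_cutoff function above: how large must n be per group so that the spread
of the estimated cut-off is acceptably small? All distributional assumptions
here are again illustrative.)

import numpy as np

def cutoff_precision(n, n_sim=2000, mu_well=10.0, mu_dis=20.0,
                     sd=2.0, sd_error=1.0, seed=0):
    """Approximate 95% interval for the estimated cut-off with n per group,
    including measurement error, by repeated simulation."""
    rng = np.random.default_rng(seed)
    cuts = np.empty(n_sim)
    for s in range(n_sim):
        x_well = rng.normal(mu_well, sd, n) + rng.normal(0.0, sd_error, n)
        x_dis  = rng.normal(mu_dis,  sd, n) + rng.normal(0.0, sd_error, n)
        cuts[s], _ = youden_cutoff(x_well, x_dis)
    return np.percentile(cuts, [2.5, 97.5])

# e.g. increase n until the interval is as narrow as the lab requires
for n in (20, 50, 100, 200):
    lo, hi = cutoff_precision(n)
    print(n, round(hi - lo, 2))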

Any comments, references etc. are welcome.

Thanks a lot in advance
Robert


--------------------------------------------------
Focus Clinical Drug Development GmbH, Neuss



