In article <[EMAIL PROTECTED]>,
han <[EMAIL PROTECTED]> wrote:
>I am very sorry; I made a typo.  The type 1 error is 0.05.

>The original document is as below: 

>Freiman and colleagues were interested in assessing whether RCTs with
>negative results had sufficient statistical power to detect a 25% or
>a 50% relative difference between treatment interventions.  Their
>review indicated that most of the trials had low power to detect these
>effects: only 7% (5/71) had at least 80% power to detect a 25%
>relative change between treatment groups as statistically significant,
>and 31% (22/71) had at least 80% power to detect a 50% relative
>change (alpha = .05, one-tailed).
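As a rough illustration of what a statement like "80% power to
detect a 25% relative change (alpha = .05, one-tailed)" means, here
is a normal-approximation power calculation for a one-tailed
two-sample test of proportions.  The 30% control event rate and the
per-group sample sizes are invented for illustration; they are not
Freiman's data.

from scipy.stats import norm

def power_two_proportions(p_control, relative_change, n_per_group,
                          alpha=0.05):
    """Approximate one-tailed power to detect a relative change."""
    p_treat = p_control * (1.0 - relative_change)  # e.g. 25% reduction
    delta = abs(p_control - p_treat)
    p_bar = (p_control + p_treat) / 2.0
    # standard error under the null (pooled) and under the alternative
    se_null = (2.0 * p_bar * (1.0 - p_bar) / n_per_group) ** 0.5
    se_alt = (p_control * (1.0 - p_control) / n_per_group
              + p_treat * (1.0 - p_treat) / n_per_group) ** 0.5
    z_alpha = norm.ppf(1.0 - alpha)  # one-tailed critical value
    return norm.cdf((delta - z_alpha * se_null) / se_alt)

for n in (25, 100, 400):
    print(n,
          round(power_two_proportions(0.30, 0.25, n), 2),
          round(power_two_proportions(0.30, 0.50, n), 2))

With these made-up inputs, 100 patients per group give roughly 80%
power against a 50% relative reduction but only about a third of
that against a 25% reduction, which is the pattern Freiman described.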

>My personal comments are as below: 
>I think that the type 1 error can be regarded as the consumer's risk.
>Regulatory agencies (like the FDA) will not allow a type 1 error
>above 0.05.

The type 1 error can only be called the stupidity risk.
The null hypothesis is almost always false; why should we
even be concerned with the probability that it will be
incorrectly rejected?  With a large enough sample, the
rejection probability will exceed any nominal type 1 error
rate.  Statistical significance was a mistaken idea from the
beginning, more than two centuries ago.
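A minimal numerical sketch of that point, assuming (purely for
illustration) a tiny but nonzero true standardized difference of
0.01 between the groups:

from scipy.stats import norm

alpha = 0.05
effect = 0.01                  # assumed tiny true standardized difference
z_alpha = norm.ppf(1 - alpha)  # one-tailed critical value

# one-tailed two-sample z-test, equal group sizes, unit variance
for n in (100, 10_000, 1_000_000):
    rejection_prob = norm.cdf(effect * (n / 2) ** 0.5 - z_alpha)
    print(n, round(rejection_prob, 3))

The rejection probability starts near the nominal 5% and climbs
toward 1 as the per-group sample size grows, so "significance" ends
up reflecting the sample size rather than the importance of the
difference.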

>The type 2 error may be regarded as the sponsor's risk.  The
>smaller the type 2 error, the smaller the chance of missing an
>important therapy.

This risk falls more on the consumer than on the sponsor.  It is
the consumer who receives the therapy.

One needs to consider all costs and benefits, including the
side effects and the costs of treatment.  The treatment may
have very similar effects to the "standard".  Then the
utility difference has to be weighted by the importance of each
state of nature, the loss-prior combination, in the usual
decision-theoretic terms.
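A toy numerical sketch of that weighting, with entirely invented
states of nature, prior probabilities, and losses:

# loss-prior weighting: states, priors, and losses are all invented
priors = {"new clearly better":     0.2,
          "essentially equivalent": 0.7,
          "new worse":              0.1}

# loss[action][state]: cost (arbitrary units) of each action in each
# state, including side effects and the cost of treatment
loss = {
    "adopt new":     {"new clearly better":     0,
                      "essentially equivalent": 2,   # extra cost, no gain
                      "new worse":              10},
    "keep standard": {"new clearly better":     5,   # forgone benefit
                      "essentially equivalent": 0,
                      "new worse":              0},
}

for action, losses in loss.items():
    expected = sum(priors[s] * losses[s] for s in priors)
    print(action, expected)
# the action with the smaller expected loss is the Bayes choice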

>If the sponsors did not care about the budget and sample size, they
>could minimize the type 2 error and raise the power above 80%.
>Before deciding on such a large sample size, the sponsor should
>re-evaluate the adequacy of the margins and the assumed treatment
>difference in comparison with controls from prior pivotal trials.

The cost of experimentation is part of the costs.
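A minimal sketch of the sample-size side of that trade-off, again
with an invented 30% control event rate: the smaller the assumed
relative difference, the larger the trial needed for 80% power, and
hence the larger the cost of experimentation.

from scipy.stats import norm

def n_per_group(p_control, relative_change, power=0.80, alpha=0.05):
    """Approximate per-group n for a one-tailed two-proportion test."""
    p_treat = p_control * (1.0 - relative_change)
    delta = abs(p_control - p_treat)
    p_bar = (p_control + p_treat) / 2.0
    z_alpha = norm.ppf(1.0 - alpha)
    z_beta = norm.ppf(power)
    num = (z_alpha * (2.0 * p_bar * (1.0 - p_bar)) ** 0.5
           + z_beta * (p_control * (1.0 - p_control)
                       + p_treat * (1.0 - p_treat)) ** 0.5)
    return (num / delta) ** 2

print(round(n_per_group(0.30, 0.50)))  # roughly 100 per group
print(round(n_per_group(0.30, 0.25)))  # several hundred per group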

>Are there any other factors that play a role in missing an
>important therapy?


>As Freiman said, ----- 7% (5/71) had at least 80% power to detect a
>25% relative change between treatment groups and 31% (22/71) had at
>least 80% power to detect a 50% relative change ------.  Would you
>be kind enough to explain more?


-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558
