In article <[EMAIL PROTECTED]>,
jim clark  <[EMAIL PROTECTED]> wrote:
>Hi

>On 22 Mar 2003, Herman Rubin wrote:

>> In article <[EMAIL PROTECTED]>,
>> jim clark  <[EMAIL PROTECTED]> wrote:
>> >Hi

>> >On 21 Mar 2003, Dennis Roberts wrote:

>> >> i would like an example or two where ... one can make a cogent argument
>> >> that p ... in its own right ... helps us understand the SIZE of an effect
>> >> ... the IMPORTANCE of an effect ... the PRACTICAL benefit of an effect

>> >> maybe you could select one or two instances from an issue of the journal
>> >> ... and lay them out in a post?

>> >Am I missing something ... isn't it important to determine
>> >whether an effect has a low probability of occurring by chance?
>> >If an effect could have too readily occurred by chance, then its
>> >size would not seem to matter much and there is no reason to
>> >think that it has practical benefit in general.  No one is saying
>> >that p values are the be all and end all, but neither does that
>> >mean they have no value for their intended purpose (i.e.,
>> >identifying outcomes that are readily explained by random
>> >factors).

>> This is the type of brainwashing which is accomplished by
>> the classical approach.  The practical benefit only depends
>> on the size of the effect, and has nothing to do with the
>> chance that something that extreme would have occurred if
>> there was no effect at all.

>I would be very surprised if the "practical benefit" (say as
>indicated by effect size) was completely independent of p value,
>at least not among a population of studies that included studies
>with 0 benefit.  Practical and statistical significance are not
>identical, but that does not mean that they are independent.  
>Nor does a single hypothetical example, as below, address this
>question.

>> Here is an extreme version of a bad example; there is a
>> disease which is 50% lethal.  The old treatment has been
>> given to 1,000,000 people and 510,000 have survived. 
>> There is a new treatment which has been given to 3 people,
>> and all have survived.  You find you have the disease; 
>> which treatment will you take?

>> The first has a very small p-value; it is about 20
>> sigma out.  The second has a probability of 1/8 of
>> occurring by chance if the treatment does nothing.

>Note that I said "practical benefit in general."  So how much
>money should the health care system put into this new treatment
>based on this study of 3 people?  A second question is what you
>would recommend or do yourself if 2 of 3 people had survived?  
>That is still 67% vs. 51%, a large difference if all you are
>interested in is effect size.

>Your example (especially for 2 out of 3 successes, since 1/8
>approaches significance) nicely illustrates that one can obtain
>large effect sizes without achieving anything like acceptable
>levels of significance, presumably because of inadequate sample
>sizes.  But we should not put much confidence in conclusions from
>such studies because of the lack of significance, although we
>might be willing to gamble (i.e., that the treatment is effective
>in general) given sufficiently unfavourable circumstances.
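
For what it is worth, the arithmetic behind those figures is easy to
check.  A small Python sketch (using only the numbers already given
above, with the 50% baseline as the chance model):

    from math import comb, sqrt

    # Old treatment: 510,000 survivors out of 1,000,000, against a 50% baseline.
    n_old, x_old, p0 = 1_000_000, 510_000, 0.5
    z = (x_old / n_old - p0) / sqrt(p0 * (1 - p0) / n_old)
    print(f"old treatment: about {z:.0f} sigma")        # about 20 sigma

    # New treatment: chance of the observed outcome if it does nothing.
    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    print(f"3 of 3 survive by chance:  {binom_tail(3, 3, 0.5):.3f}")  # 0.125 = 1/8
    print(f"2 or more of 3 by chance:  {binom_tail(3, 2, 0.5):.3f}")  # 0.500

The 20-sigma figure and the 1/8 come out as stated; the 2-out-of-3
case is even further from any conventional significance level,
whatever the effect size looks like.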

It is true that one cannot put much confidence in
conclusions without "acceptable levels of significance",
but when there is not much information, there is not much
information.  One cannot get statistical blood out of a
statistical turnip.
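
To put a number on that: with 3 successes in 3 trials, the exact
one-sided 95% lower confidence bound on the survival rate is
0.05**(1/3), roughly 0.37, so the data do not even rule out the new
treatment being worse than the old one.  A one-line sketch of the
calculation:

    # Exact (Clopper-Pearson) lower bound when all n trials succeed:
    # solve p**n = alpha for p.
    alpha, n = 0.05, 3
    print(f"95% lower bound on the survival rate: {alpha ** (1 / n):.3f}")  # about 0.368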

However, it is also not the case that one can be confident
of the null in such cases, although that is the usual
attitude of those who believe in significance.  From
any type of decision approach, the significance level 
should decrease with increasing sample size.  The other
form of this is that it should increase with decreasing
sample size!  It is very easy to give models in which
one should not consider accepting the hypothesis without
a fair amount of data.
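
A rough illustration of the sample-size point: take a simple
two-decision setup, the 51% baseline against, say, a 55% alternative
(the 55% is only an illustrative choice), with equal weight on the
two kinds of error.  Under the normal approximation the minimizing
cutoff sits near the midpoint of the two rates, and the implied
significance level drops steadily as the sample size grows:

    from math import sqrt
    from statistics import NormalDist

    p0, p1 = 0.51, 0.55                   # baseline and illustrative alternative
    cutoff = (p0 + p1) / 2                # approximate equal-loss cutoff
    for n in (100, 1_000, 10_000, 100_000):
        se = sqrt(p0 * (1 - p0) / n)      # s.e. of the proportion under the baseline
        alpha = 1 - NormalDist(p0, se).cdf(cutoff)
        print(f"n = {n:>6}: implied significance level = {alpha:.6f}")

Run the other direction, the same setup says that with only a handful
of observations the implied level is far above any conventional one,
which is the point about small samples above.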
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558