In article <p04330102b6d43bd5b798@[139.80.121.126]>,
Will Hopkins <[EMAIL PROTECTED]> wrote:
>Responses to various folks.  And to everyone touchy about one-tailed 
>tests, let me make it quite clear that I am only promoting them as a 
>way of making a sensible statement about probability.  A two-tailed p 
>value has no real meaning, because no real effects are ever null.  A 
>one-tailed p value, for a normally distributed statistic, does have a 
>real meaning, as I pointed out.  But precision of 
>estimation--confidence limits--is paramount.  Hypothesis testing is 
>passe.

>Donald Burrill queried my assertion about one-tailed p values 
>representing the probability that the true value is opposite in sign 
>to what you observed.  Don restated what a one-tailed p represents, 
>as it is defined by hypothesis testers, but he did not show that my 
>assertion was false.  He did point out that I have to know the 
>sampling distribution of the statistic.  Yes, of course.  I assumed a 
>normal (or t) distribution.

>Here's one proof of my assertion, using arbitrary real values.  I 
>always find these confidence-limit machinations a bit tricky.  If 
>someone has a better way to prove this, please let me know.

>Suppose you observe a value of 5.3 for some normally distributed 
>outcome statistic X, and suppose the one-tailed p is 0.04.

>Therefore the sampling distribution is such that, when the true value 
>is 0, the observed values will be greater than 5.3 for 4% of the time.

>Therefore, when the true value is not 0 but something else, T say, 
>then X-T will be greater than 5.3 for 4% of the time.  (This is the 
>tricky bit.  Don't leap to deny it without a lot of thought.  It 
>follows, because the sampling distribution is normal.  It doesn't 
>follow for sampling distributions like the non-central t.)

>But if X-T > 5.3 for 4% of the time, then rearranging, T < X-5.3 for 
>4% of the time.  But our observed value is X = 5.3, so T < 0 for 4% of the 
>time.  That is, there is a 4% chance that the true value is less than 
>zero.  QED.

This is one of the standard fallacies.  The statement that
T < X-5.3 for 4% of the time is valid before X is observed,
but not after; the same holds for all of the other probability
statements above.  The statement is approximately true after the
observation only if T has a prior that is almost uniform over a
rather large range, so that the density of T can be treated as
constant in the calculation of the posterior distribution.
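
A small numerical sketch (not part of the original exchange) may make
the distinction concrete.  The standard deviation below is backed out
from Hopkins' own figures -- it is chosen so that P(X > 5.3 | T = 0) =
0.04 for a normal X -- and the two priors are purely illustrative.
With a prior that is essentially flat over a wide range, the posterior
probability that T < 0 given X = 5.3 does come out near the one-tailed
p of 0.04; with an informative prior it does not.

# Sketch: posterior P(T < 0 | X = 5.3) under two priors, on a grid.
# sigma is an assumed value chosen so the one-tailed p at X = 5.3 is 0.04.
import numpy as np
from scipy import stats

x_obs = 5.3
sigma = x_obs / stats.norm.ppf(0.96)        # about 3.03

t = np.linspace(-200.0, 200.0, 400001)      # grid over the true value T
likelihood = stats.norm.pdf(x_obs, loc=t, scale=sigma)

def p_below_zero(prior):
    """Posterior probability that T < 0, given X = x_obs and this prior."""
    post = prior * likelihood
    post = post / post.sum()                 # normalize on the grid
    return post[t < 0].sum()

flat_prior = np.ones_like(t)                                # ~uniform over a wide range
informative_prior = stats.norm.pdf(t, loc=10.0, scale=1.0)  # illustrative only

print(p_below_zero(flat_prior))         # ~0.04, matching the one-tailed p
print(p_below_zero(informative_prior))  # essentially 0, nothing like 0.04

The grid posterior is used only to keep the check self-contained; any
numerical integration over T would do as well.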

                        ................

>Herman Rubin wrote about my assertion:
>>This is certainly not the case, except under highly dubious
>>Bayesian assumptions.

>Herman, see above.  And the only Bayesian assumption is what you 
>might call the null Bayesian:  that there is no prior knowledge of 
>the true value.  But any Bayesian- vs frequentist-type arguments here 
>are academic.

The "null Bayesian" is an EXTREMELY strong assumption, and
it is even somewhat contradictory, and the uniform distribution
over the real line is a much odder beast than even most who
understand mathematics think it is.  A posterior cannot be
obtained from it by any legitimate mathematical operation; 
this is not hard to prove.  It is not at all surprising that
the attempted use of the "null Bayesian" assumption did not
foster the use of Bayesian procedures.  It MAY be, as indicated
above, a reasonable approximation, but only that.
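
For the record, the approximation referred to above can be written out
in one line.  Treat the prior density of T as a constant c over a wide
interval -- which is the approximation in question, not a legitimate
distribution over the whole line -- and the formal calculation gives

\[
  p(T \mid X = x) \;\propto\; c \exp\!\left(-\frac{(x - T)^2}{2\sigma^2}\right)
  \;\Longrightarrow\; T \mid X = x \sim N(x, \sigma^2),
  \qquad
  P(T < 0 \mid X = x) = \Phi\!\left(-\frac{x}{\sigma}\right),
\]

and \Phi(-x/\sigma) is precisely the one-tailed p at the observed x.
The identity holds only to the extent that the constant-density
treatment of the prior is acceptable.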
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

