On 20 Mar 2003 18:26:49 -0800, [EMAIL PROTECTED] (Karl L. Wuensch)
wrote:

> Were you to define p as the probability of getting data exactly as
> discrepant with the null as those you obtained, given the null, then,
> assuming you are dealing with a continuous variable, that probability is
> always going to be quite small, eh?  About as small as the probability of


[snip]
and 
> Why should I care about data more discrepant than what I've
> observed?  I haven't seen them.  Why should they affect the way I
> judge what I did observe?  :-)
> 

The above has a point, but it seems more than 
a little naive.

Likelihood, the height of the density curve, is not "probability"
in the sense in which we define probability. 
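
To make the distinction concrete, here is a minimal sketch in Python
(scipy.stats; the observed z of 1.7 is just a number I made up for
illustration, not anything from this thread).  The density height at
the observed value is a likelihood; the p-value integrates over
results at least as discrepant as the one observed.

======= hedged sketch, Python (illustrative numbers only)
from scipy.stats import norm

z_obs = 1.7   # hypothetical observed z statistic

# Likelihood: the height of the standard normal density at z_obs.
# This is not a probability; for a continuous variable the
# probability of observing z_obs *exactly* is zero.
density_height = norm.pdf(z_obs)        # about 0.094

# The usual two-sided p-value integrates over everything at least
# as discrepant with the null as z_obs.
p_two_sided = 2 * norm.sf(abs(z_obs))   # about 0.089

print(density_height, p_two_sided)
=========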

A.W.F. Edwards wrote a book about "Likelihood" (1972) 
which is still a classic.  Maybe I should look at my copy again?
Amazon.com  shows a 1992 edition:
======= from Amazon.com
Book Description
Dr Edwards' stimulating and provocative book advances the thesis that 
the appropriate axiomatic basis for inductive inference is not that of
probability, with its addition axiom, but rather likelihood - the 
concept introduced by Fisher as a measure of relative support amongst 
different hypotheses.  
...
=========
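
As a rough illustration of "relative support amongst different
hypotheses" (my own toy binomial numbers, not Edwards'): compute the
likelihood of the same data under two candidate parameter values and
take their ratio.

======= hedged sketch, Python (toy example, my own numbers)
from scipy.stats import binom

heads, tosses = 7, 10          # hypothetical data

L_half  = binom.pmf(heads, tosses, 0.5)   # likelihood under p = 0.5, ~0.117
L_seven = binom.pmf(heads, tosses, 0.7)   # likelihood under p = 0.7, ~0.267

# The likelihood ratio is Fisher's measure of relative support:
# here the data favor p = 0.7 over p = 0.5 by a factor of about 2.3.
print(L_seven / L_half)
=========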


-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html