Rich wrote:
>
> Can anyone explain to me why a confidence level for an experiment
> needs to be selected BEFORE running the experiment? Can't the results
> be analyzed after, and the highest confidence level be selected based
> on the data?
The confidence level (for intervals; as opposed to significance level,
for hypothesis tests) does NOT have to be selected before running the
experiment. It is perfectly legitimate to construct whatever confidence
interval you like at any time.
However, the confidence level is usually not selected by the
experimenter at all, but is set by convention. This makes it possible
for the reader to interpret one confidence interval in the context of
others.
In particular, you do not want to report an effect by using the widest
CI that just excludes the null effect size. Doing so is roughly
equivalent to reporting a point estimate and a p-value, with extra
chart-junk added to obfuscate things.
For significance levels, what you describe is also legitimate. Of
course, since you will *always* be on the edge of rejecting the null
hypothesis, the outcome of the test itself is of no interest! What is
of interest is the level you had to choose to get that result. The
significance level at which H0 is just rejected is the p-value. This
is a well-known way of reporting a hypothesis test, widely considered
to be more informative than fixed-level hypothesis testing.
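To make the duality concrete, here is a small numeric sketch (my
illustration, not part of the original post): for a z-test with known
sigma, the (1 - p) confidence interval has one endpoint landing exactly
on the null value, which is why the "biggest CI that just misses the
null" carries the same information as the p-value. The data values are
hypothetical.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical data: sample mean 2.1, null mean 2.0, sigma 1.0, n = 100
xbar, mu0, sigma, n = 2.1, 2.0, 1.0, 100
se = sigma / math.sqrt(n)
z = (xbar - mu0) / se                  # observed z statistic
p = 2.0 * (1.0 - phi(abs(z)))          # two-sided p-value

# The critical value for confidence level (1 - p) is exactly |z|,
# so the interval's lower endpoint falls exactly on mu0.
lower = xbar - abs(z) * se
upper = xbar + abs(z) * se
print(f"p-value = {p:.4f}, (1-p) CI = ({lower:.4f}, {upper:.4f})")
```

Shrinking the confidence level any further would exclude the null;
widening it any further would include it, so reporting this particular
interval is just the p-value in disguise.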
What is obviously not legitimate is claiming any significance for the
bare fact that you rejected the null hypothesis when you picked the
significance level precisely to make that happen.
Robert Dawson
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
. http://jse.stat.ncsu.edu/ .
=================================================================