On Wed, 29 Jan 2003 23:14:36 GMT, Jerry Dallal
<[EMAIL PROTECTED]> wrote:

> Rich Ulrich wrote:
> > 
> > On Thu, 16 Jan 2003 20:35:02 GMT, Jerry Dallal
> > <[EMAIL PROTECTED]> wrote:
> > 
> > > Dennis Roberts wrote:
> > >
> > > > Comments appreciated.
> > >
> > > I think that what you and Rich are struggling with is that there is
> > > a difference between an expected length and a given probability of
> > > not exceeding some length.  If it's not, it's still an issue.
> > 
> > After trying to figure what to post next, I have finally
> > concluded that nobody else knows how to figure that, either.
> > (a) I don't have a textbook reference on how to figure
> > the CI  of a standard deviation;
> 
> calculate the CI for a variance.  take the square roots of the
> endpoints.

 - Less accurate than matching the chi-squared.
 - I think a couple of texts avoid those further issues, though....
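
For the record, here is a minimal sketch of that recipe (my own
code, not from any of the texts; it assumes an i.i.d. normal
sample, and the names n, s2, alpha are just illustrative):

from scipy.stats import chi2

def sd_confidence_interval(n, s2, alpha=0.05):
    """Two-sided CI for sigma: chi-squared CI for the variance, then square roots."""
    df = n - 1
    lower_var = df * s2 / chi2.ppf(1 - alpha / 2, df)  # upper chi2 quantile -> lower limit
    upper_var = df * s2 / chi2.ppf(alpha / 2, df)      # lower chi2 quantile -> upper limit
    return lower_var ** 0.5, upper_var ** 0.5

# e.g., n = 20 and s = 3 (so s2 = 9) gives roughly (2.3, 4.4)
print(sd_confidence_interval(20, 9.0))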
 

> 
> > (b) I don't remember where and when I learned it; and
> > (c) most of you folks don't know what I was talking about,
> > because you have never walked through the basic problem.

> (c') couldn't follow what you were trying to say.  Thought it might
> be about the issues raised by Kupper & Hafner, American
> Statistician, 43 (1989), 101-105.  Guess not.

Thank you for that reference.  I have looked it up.

The article by K&H  is identified as  the "Teacher's 
corner",  so the  matter is presumed to be something that 
professionals should recognize without great trouble.
K&H  say that they are pointing to an "overlooked, but 
nevertheless important, result due to Guenther (1965)."
From what they say further, they think that some textbooks
(or, at least, some professors) are actively teaching the
widespread use of a couple of inadequate formulas.


What I posted is an easier and simpler subset of what
that paper seems to discuss.  They go on to discuss numerical
examples of power calculations that hypothesize differences
in means and require the non-central distributions;
whereas I stick with the variance.  They also refer to
'tolerance' intervals, which is a useful keyword for the
more complicated situations.
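
To show the sort of thing I mean by sticking with the variance,
here is a sketch in my own notation (not K&H's) of the "given
probability of not exceeding" calculation, using the fact that
(n-1)s^2/sigma^2 is chi-squared with n-1 df for a normal sample:

from scipy.stats import chi2

def prob_s_at_most(n, ratio):
    """P(s <= ratio * sigma) for a normal sample of size n."""
    df = n - 1
    return chi2.cdf(df * ratio ** 2, df)

def smallest_n(ratio, prob=0.90, n_max=10_000):
    """Smallest n for which s stays below ratio*sigma with the stated probability."""
    for n in range(2, n_max):
        if prob_s_at_most(n, ratio) >= prob:
            return n
    return None

# e.g., how big must n be before s exceeds 1.2*sigma only 10% of the time?
print(smallest_n(1.2))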

I think that my illustration is an improved treatment of 
one logical point, compared  with what  K&H  show.
The authors complain that folks plug  values for "sigma" 
(population SD) into a formula, and they say (I think)  
that the formulas don't work because of   <something 
about small-sample approximations>.  

Well, it doesn't work well for large samples, either.

 - My post pointed out, rather explicitly, that the plausible
range for the population value is far wider than a cheap
(small-N) estimate suggests, and you can't substitute the
latter for the former (see the small simulation below).
 - Statisticians, I think, know this, and would long have
avoided applying formulas #1 and #3.    (Or, would they?
Is there a better 'teaching' on tap?)
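
Here is a quick simulation of that point, in case the bare claim
is not convincing (sigma, N, and the number of replicates are
arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
sigma, n_small, reps = 10.0, 5, 10_000
s = np.array([rng.normal(0.0, sigma, n_small).std(ddof=1) for _ in range(reps)])
print("middle 95% of sample SDs when sigma = 10, N = 5:",
      np.percentile(s, [2.5, 97.5]).round(1))
# At N = 5 the sample SD scatters from around a third of sigma to
# roughly one and two-thirds times sigma, which is why it makes a
# poor stand-in for sigma in a sample-size formula.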

There is another subject that K&H touch on.
I will have to look at it again, but I think that K&H fail
to disentangle the dichotomous result from what happens
with the Normal variances.  For instance, a survey, or
survive-vs-die, has a variance that is fixed by the mean.
If you assume p = 50%, you can't possibly underestimate
the standard error for your subsequent means.  You can
also put a lid on the variance if you can put a lid on the
mean percent.  That is substantially different, IMHO, from
the question (or problem) of treating a sample-variance
estimate as a population variance at the wrong time.
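
A short numerical illustration of the dichotomous point (my own,
not from the paper): the variance of a 0/1 outcome is p(1-p),
which peaks at p = 0.5.

import numpy as np

n = 100
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    se = np.sqrt(p * (1 - p) / n)  # standard error of a sample proportion
    print(f"p = {p:.1f}   SE of the sample proportion at n = {n}: {se:.3f}")
# The SE tops out at 0.050 at p = 0.5; any other p gives something
# smaller, so the 50% assumption can only overstate, never
# understate, the standard error.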

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html