Hi

I do not know what "camp" I fall in, nor do I know whether I should care 
particularly.  And Mike P's excellent summary of different positions suggests 
it isn't even clear how many camps there are or at least whether they are 
distinct.

What I do know is that if you select a sample of N observations with mean M and 
standard deviation S out of a population with mean MU and standard deviation 
SIGMA, then:

1.  M will fall within MU +/- z(alpha/2)*SIGMA/sqrt(N) with probability = 1 - 
alpha (hypothesis testing), and

2.  Equivalently, MU will fall within M +/- z(alpha/2)*SIGMA/sqrt(N) with 
probability = 1 - alpha (confidence intervals).

I know this (really only one thing, not two, as they are logically equivalent / 
synonymous) because mathematicians can demonstrate it is true and simulations 
(randomization tests) demonstrate that it is true.
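
For anyone who wants to see the simulation side of this claim, here is a
minimal sketch in Python (numpy only; the particular MU, SIGMA, N, and alpha
are arbitrary values made up for illustration):

import numpy as np

rng = np.random.default_rng(1)
MU, SIGMA, N, alpha = 100.0, 15.0, 25, 0.05
z = 1.96                     # z(alpha/2) for alpha = .05
reps = 100_000

# draw many samples of size N from the population
samples = rng.normal(MU, SIGMA, size=(reps, N))
M = samples.mean(axis=1)
half = z * SIGMA / np.sqrt(N)

# statements 1 and 2 above describe the same event:
# |M - MU| <= z(alpha/2) * SIGMA / sqrt(N)
coverage = np.mean(np.abs(M - MU) <= half)
print(coverage)              # comes out close to 1 - alpha = .95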

Furthermore, I know that if S is used instead of SIGMA, then S will sometimes 
be smaller than SIGMA and sometimes larger than SIGMA, adding further 
variability from sample to sample.  This means that I must use t(alpha/2) 
instead of z(alpha/2) for the above to hold true.  This, too, can be 
demonstrated by mathematicians and by simulation (randomization).
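
Again purely as a sketch (same made-up population as above, but with a smaller
N so the effect is visible; scipy is used only for the critical values),
keeping z(alpha/2) while using S produces under-coverage, and switching to
t(alpha/2) with N - 1 df restores it:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
MU, SIGMA, N, alpha = 100.0, 15.0, 10, 0.05
reps = 100_000

samples = rng.normal(MU, SIGMA, size=(reps, N))
M = samples.mean(axis=1)
S = samples.std(axis=1, ddof=1)          # sometimes < SIGMA, sometimes > SIGMA

z_crit = stats.norm.ppf(1 - alpha / 2)
t_crit = stats.t.ppf(1 - alpha / 2, df=N - 1)

cover_z = np.mean(np.abs(M - MU) <= z_crit * S / np.sqrt(N))
cover_t = np.mean(np.abs(M - MU) <= t_crit * S / np.sqrt(N))
print(cover_z, cover_t)                  # cover_z falls short of .95; cover_t stays near .95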

As for the meaning of "confidence", I always assumed it was synonymous with 
certainty, probability, and equivalent terms.

I await my diagnosis!

Take care
Jim


James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[email protected]

>>> Michael Palij <[email protected]> 21-Apr-12 9:02 AM >>>
Just out of curiosity, I'd like to ask the folks who have contributed to
this thread to identify which of the three major statistical frameworks
they are using when making statements about the interpretation
of statistics and parameters.  The three commonly used frameworks
are (1) Fisherian, (2) Neyman-Pearson, and (3) Bayesian.  I ask this
because Gerd Gigerenzer is famous for pointing out that psychological
statistics as presented in most sources is a "mish-mash" of Fisherian
and Neyman-Pearson frameworks.  To help clarify things, consider the
following article by Christensen (available on www.Jstor.org): 

Testing Fisher, Neyman, Pearson, and Bayes
Author(s): Ronald Christensen
Source: The American Statistician, Vol. 59, No. 2 (May, 2005), pp. 121-126
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/27643644

With respect to the issue of confidence intervals (CIs), the three
approaches suggest different interpretations of what a CI means,
and one might read someone's writing about CIs and be reminded of
the movie "The Princess Bride", where Inigo Montoya says:
|You keep using that word.  I do not think it means what you think it means.
see:
http://www.youtube.com/watch?v=YIP6EwqMEoE 

The Fisherian approach to confidence intervals is described
by Christensen as follows:

|...although Fisher never gave up on his idea of fiducial inference,
|one can use Fisherian testing to arrive at "confidence regions"
|that do not involve either fiducial inference or repeated sampling.
|A (1 - alpha) confidence region can be defined simply as a
|collection of parameter values that would not be rejected by a
|Fisherian alpha level test, that is, a collection of parameter values
|that are consistent with the data as judged by an alpha level test.
|This definition involves no long run frequency interpretation of
|"confidence." It makes no reference to what proportion of
|hypothetical confidence regions would include the true parameter.
|It does, however, require one to be willing to perform an infinite
|number of tests without worrying about their frequency interpretation.
(p125)
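
To make that construction concrete, here is a minimal sketch of my own (not
Christensen's), assuming a normal model with known sigma and a two-sided
z test: the "confidence region" is literally the set of candidate values of
mu that the alpha-level test fails to reject.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
SIGMA, N, alpha = 15.0, 25, 0.05
data = rng.normal(100.0, SIGMA, size=N)      # one observed sample
M = data.mean()

# scan a grid of candidate values for mu and keep those that an
# alpha-level two-sided z test would NOT reject given these data
grid = np.linspace(M - 20.0, M + 20.0, 4001)
z = (M - grid) / (SIGMA / np.sqrt(N))
p = 2 * stats.norm.sf(np.abs(z))
region = grid[p > alpha]

# the endpoints match the familiar M -/+ 1.96*SIGMA/sqrt(N)
print(region.min(), region.max())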

Christensen has this to say about Neyman-Pearson CIs:

|  The NP approach to finding confidence regions is also to
|find parameter values that would not be rejected by an alpha level test.
|However, just as NP theory interprets the size alpha of a test as
|the long run frequency of rejecting an incorrect null hypothesis,
|NP theory interprets the confidence 1 - alpha as the long run
|probability of these regions including the true parameter.
|The rub is that you only have one of the regions, not a long
|run of them, and you are trying to say something about this
|parameter based on these data. In practice, the long run
|frequency of a somehow gets turned into something called
|"confidence" that this parameter is within this particular region.
|
|Although I admit that the term "confidence," as commonly
|used, feels good, I have no idea what "confidence" really means
|as applied to the region at hand. Hubbard and Bayarri (2003)
|made a case, implicitly, that an NP concept of confidence
|would have no meaning as applied to the region at hand, that
|it only applies to a long run of similar intervals.
(p125)

To expand on the last point, one has to go Bayesian:
|Students, almost invariably, interpret confidence as posterior
|probability. For example, if we were to flip a coin many times,
|about half of the time we would get heads. If I flip a coin and
|look at it but do not tell you the result, you may feel
|comfortable saying that the chance of heads is still .5 even
|though I know whether it is heads or tails. Somehow the
|probability of what is going to happen in the future is turning
|into confidence about what has already happened but is
|unobserved. Since I do not understand how this transition from
|probability to confidence is made (unless one is a Bayesian
|in which case confidence actually is probability), I do not
|understand "confidence."
(p125)

Of course, the motivation for Fisher and Neyman-Pearson to
develop their statistical frameworks, instead of working within the
Bayesian framework that existed long before they came on the
scene, was their rejection of the "objective priors" position and
the subsequent analyses.  (The 20th-century development of
"subjective priors" gets around some of those problems, and
Christensen apparently is using this framework to define
a Bayesian conception of confidence.)

So, what do people mean when they say they are using
"confidence intervals"?

On a final point, let me quote the esteemed mathematical statistician
Erich Lehmann:

|And yet this does not tell the whole story.  For Neyman, probability
|was long-run frequency.  Hence, once the numerical values of the
|confidence interval were known, the probability statements about the parameter
|still possible were one (1) if the interval covered the true value,
| zero (0) if it did not.  However, though Neyman repeatedly
|stressed this point, many users did not accept it.  For them,
|the predata probability of coverage became their postdata degree
|of certainty that the (now fixed and numerically known) interval
|contained the unknown parameter. This was exactly Fisher's
|interpretation expressed at the beginning of his 1959 paper (quoted
|in Sect. 6.3 above).  Thus, while Neyman's confidence intervals
|became the accepted solution of the estimation problem, in
|practice they were often interpreted in a way that was much closer
|to Fisher's view than Neyman's. (p89)
From:
Lehmann, Erich L. (2011).  Fisher, Neyman, and the Creation of
Classical Statistics.  New York: Springer.

One might also want to take a look at the following paper by Lehmann
where he makes some interesting points:

The Fisher, Neyman-Pearson Theories of Testing Hypotheses: One
Theory or Two?
E. L. Lehmann
Journal of the American Statistical Association, Vol. 88, No. 424
(Dec., 1993), pp. 1242-1249
Published by: American Statistical Association
Article Stable URL: http://www.jstor.org/stable/2291263 

-Mike Palij
New York University
[email protected] 

P.S. Busy, busy, busy. ;-)

-------  Original Message -------
On Date: Fri, 20 Apr 2012 10:30:44 -0500, Douglas Peterson wrote:
> The size of the confidence interval is determined by the range it specifies 
> (i.e., a 95% CI is calculated using Z = 1.645 or 1.96, one- or two-tailed).  The 
> chart should indicate whether they depict a 95% CI, a 99% CI, or whatever it is.
>
> There are a couple of common problems that I believe should be taught to 
> students, particularly graduate students but also advanced undergraduate 
> methods students.
> 1) The confusion between interpreting CIs and standard error bars.  What 
> constitutes a difference between two means with 95% CIs?  Many believe that 
> the CIs must not overlap, which is not actually the case but does result in a 
> very conservative approach, and when comparing multiple means is probably not a 
> bad idea.  I was taught, but now question, the idea that so long as the mean 
> of each group is outside the CI of the "other" group there is a difference 
> at the .05 level (assuming 95% CIs).  The idea is that the CI around mean A 
> represents the region containing the actual mean of the population with 95% 
> certainty.  Thus a mean for group B that falls outside the CI is only likely 
> to represent a sample from population A 5% of the time (ignoring 
> directionality for the time being).  BUT... you must also consider the CI 
> around group B, because while the mean for B might not be from population A, 
> you should also check that the mean from A is not from population B, and thus 
> you must look at BOTH confidence intervals.  This is only a heuristic and not an 
> absolute; I always tell my students that to draw an inference that will be 
> included in the discussion they must perform an inferential test (the CIs are 
> for the readers, not for the researchers).  Standard error bars, however, 
> require no overlap of the ranges centered on each mean.  See Belia et al. 
> (2005) for an interesting study of published authors and their understanding 
> of CIs and standard error bars.
>
> Belia, S., Fidler, F., Williams, J., & Cumming, G.  (2005).  Researchers 
> Misunderstand Confidence Intervals and Standard Error Bars.  Psychological 
> Methods, 10, 389-396.  APA  DOI: 10.1037/1082-989X.10.4.389
>
> 2) Some programs automatically compute CIs based on the entire sample and not 
> for each individual mean (Excel uses only the deviation of the means).  The 
> CI for each mean must be computed specifically for that value.  CIs can thus 
> vary widely from one mean to another, and if all of them look to be the same 
> size around 5 different means there is probably something going on (unless 
> you have a pretty good-sized sample - of course, at very large sample sizes 
> the CIs are pretty small).
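>
> One way to check the heuristics in point 1 empirically is a small simulation 
> sketch (not anything from Belia et al., just an illustration): draw two 
> samples from the same made-up normal population, so any declared "difference" 
> is a false alarm, and compare the false-alarm rates of the "no overlap" rule 
> and the "each mean outside the other CI" rule with the nominal .05.
>
> import numpy as np
> from scipy import stats
>
> rng = np.random.default_rng(4)
> N, reps = 30, 50_000
> crit = stats.t.ppf(0.975, df=N - 1)
>
> a = rng.normal(50, 10, size=(reps, N))
> b = rng.normal(50, 10, size=(reps, N))           # same population: no true difference
> ma, mb = a.mean(axis=1), b.mean(axis=1)
> ha = crit * a.std(axis=1, ddof=1) / np.sqrt(N)   # 95% CI half-widths
> hb = crit * b.std(axis=1, ddof=1) / np.sqrt(N)
>
> no_overlap = np.minimum(ma + ha, mb + hb) < np.maximum(ma - ha, mb - hb)
> each_mean_outside = np.abs(ma - mb) > np.maximum(ha, hb)
> # compare both empirical false-alarm rates with the nominal .05
> print(no_overlap.mean(), each_mean_outside.mean())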
>
>
>
> Doug
>
>
>
>
> Doug Peterson, PhD
> Associate Professor of Psychology
> The University of South Dakota
> Vermillion SD 57069
> 605.677.5223
> ________________________________________
> From: Jim Clark [[email protected]] 
> Sent: Thursday, April 19, 2012 9:12 PM
> To: Teaching in the Psychological Sciences (TIPS)
> Subject: RE: [tips] confidence intervals
>
> Hi
>
> But isn't there a p involved in the CI (or rather 1 - p)?  I'm not sure how 
> one interprets a CI without some notion of p or its inverse.  For example, 
> why do we choose z = 1.645 or 1.96 or 2.333 or whatever to construct the CI?
>
> Take care
> Jim
>
>
> James M. Clark
> Professor of Psychology
> 204-786-9757
> 204-774-4134 Fax
> [email protected] 
>
>>>> "Wuensch, Karl L" <[email protected]> 19-Apr-12 9:04 PM >>>
> I have always introduced estimation, point and interval, prior to Statistical 
> Hypothesis Inference Testing.  After introducing hypothesis testing, I note 
> that once one has a confidence interval, one can generally decide whether or 
> not a hypothesis fits well with the observed data, no pee necessary.
> Cheers,
> From: Marte Fallshore [mailto:[email protected]] 
> Sent: Tuesday, April 17, 2012 4:55 PM
> To: Teaching in the Psychological Sciences (TIPS)
> Subject: [tips] confidence intervals
>
> I was at the Rocky Mountain Psychological Association meeting last weekend, 
> and there was a talk on confidence intervals. It got me thinking about 
> teaching about confidence intervals before I get to hypothesis testing and 
> then integrating it with each hypothesis test we do.
>
> Has anyone out there done that? How did it go? Have you found a book that 
> maybe does something like that? Thanks,
>
> Marte
>
>
> ************************************************
> Marte Fallshore
> Department of Psychology
> Central Washington Univ.
> 400 E University Way
> Ellensburg, WA 98926-7575
>
> 509/963-3670
> 509/963-2307 (fax)
> Room 462, Psychology Building
>
> Everyone is entitled to their own opinions, but they are not entitled to 
> their own facts. ~Daniel Patrick Moynihan
>
> When I give food to the poor, they call me a saint.
> When I ask why the poor have no food, they call me a communist.
>        ~Dom Hélder Câmara
>
> I teach for free; they pay me to grade. (anon)
> ************************************************
>
>