On Wed, 21 Jun 2000, Dale Berger wrote:
Yet, p=0 is a special case where an outcome is impossible. A
reasonable confidence interval for p should not include zero if the
outcome has been observed in a sample. Not so?
and Donald Burrill replied:
I am unable to reconcile this assertion with the fact that the only
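Dale's point can be checked numerically. A minimal sketch (the function name and the exact Clopper-Pearson-style construction are my assumptions, not necessarily the argument in the truncated reply): once at least one success has been observed, the exact one-sided lower confidence bound for pi is strictly positive, so the interval excludes zero.

```python
# Sketch: exact one-sided lower confidence bound for a binomial
# proportion when exactly K = 1 success is seen in n trials.
# The bound pi_L solves P(X >= 1 | pi_L) = alpha, i.e.
# 1 - (1 - pi_L)^n = alpha, giving the closed form below.
def lower_bound_one_success(n, alpha=0.05):
    return 1.0 - (1.0 - alpha) ** (1.0 / n)

pi_L = lower_bound_one_success(1250)
print(pi_L)  # tiny, but strictly greater than zero
```

So with one observed success in 1250 trials the 95% lower bound is on the order of 4e-5: small, but never zero, consistent with Dale's claim.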
In article nHG35.10954$[EMAIL PROTECTED],
"Shawn Wilson" [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote in message
Have you successfully completed even
one course in statistics?
Six: three at the graduate level, in the specific field of econometrics, and three undergrad (including one
Rich Ulrich wrote:
These are not quite equivalent options, since the first one really
stinks -- if you are considering drawing conclusions about causation,
you need *random assignment*, and two groups defined by performance are the
furthest thing from random.
Let's see: the simple notion of
Stephen,
I would also characterize the syllabus as too ambitious -- by far. Your
students are probably scared of statistics, and overwhelming them will only
make it worse. For example, unless you see a special need for it, I'd omit
time series from a first course!
You might want to look at
At 04:31 PM 6/22/00 +, Gene Gallagher wrote:
This pattern was described in an obituary in the NY Times about two or three
years ago. The statistician's obit noted that he'd found a flaw in the
Israeli air force's training program. Apparently, the Israeli air force
was punishing the worst performers
regression to the mean is not necessarily appropriate when looking at
pretest scores ... and then gain or improvement ...
if we had parallel tests ... one for pre and one for post ... when nothing
happens in between ... then maybe so ...
please see a short summary of this scenario ... applied
Look up the topic "regression to the mean." This means that when values are
measured several times, extremes tend to be at a more typical value when
they are revisited.
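A small simulation illustrates the point (the setup and all numbers are mine, not from the thread): give each "pilot" a stable true skill plus independent noise on two occasions, select the worst performers on occasion one, and their average on occasion two moves back toward the overall mean with no intervention at all.

```python
import random

random.seed(42)
N = 10_000
# Stable true skill plus independent measurement noise on each occasion.
true_skill = [random.gauss(100, 10) for _ in range(N)]
test1 = [s + random.gauss(0, 10) for s in true_skill]
test2 = [s + random.gauss(0, 10) for s in true_skill]

# Pick the worst 10% on the first occasion (the "punished" group).
order = sorted(range(N), key=lambda i: test1[i])
worst = order[: N // 10]

mean1 = sum(test1[i] for i in worst) / len(worst)
mean2 = sum(test2[i] for i in worst) / len(worst)
print(round(mean1, 1), round(mean2, 1))  # second mean is closer to 100
```

Because test1 and test2 share only the true-skill component, the group selected for being extreme on test1 is partly extreme by luck, and the luck does not repeat -- which is exactly why "punishment worked" is the wrong reading of the air-force data.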
Here's what I got for the confidence interval:
Let n = sample size, K = number of successes, p = sample proportion (= K/n),
and pi = the true proportion.
If n = 1250 and K = 1 (p = 1/1250), we can be 95% sure that pi < about 0.41%
(small-sample one-sided 95% confidence interval using the binomial
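The bound can be reproduced by inverting the exact binomial CDF numerically. A stdlib-only sketch (function names are mine, and the result depends on the exact construction, so it may differ a little from the figure quoted above):

```python
# Exact one-sided upper confidence bound for a binomial proportion:
# the pi_U with P(X <= K | n, pi_U) = alpha, found by bisection,
# using that the CDF is decreasing in pi.
from math import comb

def binom_cdf(k, n, pi):
    return sum(comb(n, i) * pi**i * (1 - pi) ** (n - i) for i in range(k + 1))

def upper_bound(k, n, alpha=0.05):
    lo, hi = k / n, 1.0
    for _ in range(60):              # bisection; 60 halvings is ample
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > alpha:
            lo = mid                 # CDF still too big: push pi upward
        else:
            hi = mid
    return (lo + hi) / 2

print(upper_bound(1, 1250))  # roughly 0.0038, i.e. about 0.4%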
I took a sample to determine if users preferred one of 3
different designs (A, B, and C). Of the 740 people
sampled, 300 replied "no preference", 160 preferred
prototype A, 130 B, and 150 C.
Is there a significant difference between the prototypes? If
so, which ones? Which test do I use? How do
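One common way to attack this (a sketch of my own, not necessarily the test recommended later in the thread): set aside the "no preference" group and run a chi-square goodness-of-fit test of the 440 expressed preferences against equal thirds. With 2 degrees of freedom the chi-square survival function has the closed form exp(-x/2), so no tables or external libraries are needed.

```python
from math import exp

observed = [160, 130, 150]          # preferences for A, B, C
n = sum(observed)                   # 440 respondents with a preference
expected = n / 3                    # equal shares under the null

chi2 = sum((o - expected) ** 2 / expected for o in observed)
p = exp(-chi2 / 2)                  # chi-square sf with df = 2
print(round(chi2, 2), round(p, 3))  # about 3.18 and 0.204: not significant
```

At conventional levels there is no evidence that A, B, and C differ among people who expressed a preference; whether "no preference" itself is interesting is a separate question.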
ANOVA is said to be robust against assumption violations when the sample
size is large. However, when the sample size is huge, it tends to
overpower the test, and thus the null may be falsely rejected. Which is the
lesser evil? Your input will be greatly appreciated.
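The "overpower" worry can be made concrete with a two-sample z-test sketch (my own numbers, and a z-test rather than ANOVA, though the phenomenon is the same): a fixed, trivially small standardized mean difference d that is invisible at n = 100 per group becomes wildly "significant" at n = 1,000,000 per group, because the statistic grows like sqrt(n).

```python
from math import sqrt, erfc

def two_sided_p(d, n_per_group):
    # z statistic for standardized mean difference d with two equal groups
    z = d * sqrt(n_per_group / 2)
    return erfc(abs(z) / sqrt(2))   # two-sided normal p-value

d = 0.02                            # a negligible effect size
p_small = two_sided_p(d, 100)
p_huge = two_sided_p(d, 1_000_000)
print(p_small, p_huge)              # far from significant vs. astronomically so
```

With huge n the rejection is not a Type I error in the strict sense -- the null of *exactly* zero difference really is false -- but the effect is of no practical importance, which is why many people report effect sizes alongside p-values in that regime.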