Fortunately, many common statistical tests are quite robust. This helps
when "we almost never meet the assumptions of the model (e.g., random
sampling, normal populations, homogeneity of variance)."
Regards,
Hank

-----------------------------------------------------------------------
Hank Goldstein,                 |   HOME:   (563) 556-2115
Department of Psychology        |   FAX:    (563) 588-6789
Clarke College                  |   EMAIL:  [EMAIL PROTECTED]
Dubuque, IA  52001              |   HOME:   1835 Cannon St.
Office: (563) 588-8111          |           Dubuque, IA  52003-7904
-----------------------------------------------------------------------
"There is no cure for birth and death save to enjoy the interval." -
George Santayana
"The most wasted of all days is one without laughter." - e.e. cummings
-----------------------------------------------------------------------
>>> [EMAIL PROTECTED] 11/13/02 11:15 AM >>>
Hello--
This discussion has reinforced my belief that the classical statistical
model is relied on way too much for psychological data analysis!  For one
thing, we almost never meet the assumptions of the model (e.g., random
sampling, normal populations, homogeneity of variance).  And, just as
importantly, it is too damn confusing!  (It's no wonder students dislike
statistics.)

Regardless, a couple of quick comments (and please correct me where you
think necessary):

>Martin J. Bourgeois wrote that:
"an observed difference between means is more likely to be replicated
when the p is .001 than when the p is .1. You can certainly calculate
the probability of replicating a result with a given p value, and
results with smaller p's are more likely to be replicated (yes, it has
been supported by data)"

Let's not forget that p values are GREATLY influenced by sample size.
Given data sets with any means and variances, I can give you ANY p-value
you want, simply by adding more subjects.  So, is "likelihood of
replication" in the context of holding sample size constant?
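The sample-size point is easy to demonstrate numerically. Here is a minimal sketch (not from the original post): a two-sided one-sample z-test computed by hand, with a made-up effect of 0.2 SD held fixed while n grows.

```python
import math

def z_test_p(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of H0: difference = 0."""
    z = mean_diff * math.sqrt(n) / sd
    # erfc(|z| / sqrt(2)) is the two-tailed normal p-value for statistic z.
    return math.erfc(abs(z) / math.sqrt(2))

# Same observed difference (0.2 SD), ever-larger samples:
for n in (20, 200, 2000):
    print(n, z_test_p(0.2, 1.0, n))  # p falls from ~.37 to ~.005 to near zero
```

The identical difference is "nonsignificant" at n = 20 and overwhelmingly "significant" at n = 2000, which is the sense in which any p-value can be bought with enough subjects.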

Also, I'm not sure what is meant by an "observed difference".  Is this
the magnitude of difference (e.g., effect size), or just that there is a
statistically significant difference?  P-values are affected by the
difference between means, sample size, and variance.  So, by definition,
larger "observed differences" result in smaller p-values, holding the
other factors constant of course.  But, is this difference more likely
to be "replicated" than a smaller difference?  Given equally good
methods of random assignment to groups (or, in the rare case, random
sampling), we should be equally likely to replicate the real state of
the world, whatever it is.  Or, am I missing something here?
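This intuition can be checked with a toy simulation (mine, not from either post). Under the assumption of a single fixed true effect and identical study designs, a replication's chance of reaching p < .05 is just the design's power, regardless of the original study's p; Bourgeois's empirical pattern arises when true effects vary across studies. All numbers below (n = 25, true effect 0.3 SD) are made up for illustration.

```python
import math
import random

random.seed(1)

def study_p(n=25, mu=0.3, sd=1.0):
    """Two-sided p-value from one simulated study (one-sample z-test)."""
    xbar = sum(random.gauss(mu, sd) for _ in range(n)) / n
    z = xbar * math.sqrt(n) / sd
    return math.erfc(abs(z) / math.sqrt(2))

# Pairs of independent "original" and "replication" studies.
pairs = [(study_p(), study_p()) for _ in range(20000)]

# Replication rate (p2 < .05) for strongly vs. weakly significant originals:
strong = [p2 for p1, p2 in pairs if p1 < 0.01]
weak = [p2 for p1, p2 in pairs if p1 > 0.05]
rate_strong = sum(p < 0.05 for p in strong) / len(strong)
rate_weak = sum(p < 0.05 for p in weak) / len(weak)
# Both rates land near the design's power (~.32 here), not near 1 vs. 0.
```

Because the two studies are independent draws from the same world, conditioning on the first p changes nothing: the replication rate tracks power, exactly as "equally likely to replicate the real state of the world" suggests.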

>Mike Scoles wrote:
"A p-value is only meaningful if the null hypothesis is true."

This is absolutely correct, but too often forgotten!  In fact, many of
our stats books actually teach this incorrectly.  A p-value indicates
the probability of obtaining your data (or data more extreme), ASSUMING
THAT THE NULL IS TRUE.  In my opinion, this is the most important
concept one needs to fully comprehend if one is to properly use
techniques from the classical statistical model.  A must-read article on
this topic is:

Pollard, P., & Richardson, J. T. E. (1987). On the probability of making
Type I errors. Psychological Bulletin, 102, 159-163.
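The distinction matters because P(data | null) is not P(null | data). A back-of-envelope sketch of that gap (every number below is a hypothetical assumption, not an estimate from any literature):

```python
# All numbers are assumptions chosen for illustration only.
prior_h0 = 0.5   # assumed share of tested hypotheses where the null is true
alpha = 0.05     # significance threshold
power = 0.5      # assumed chance of p < alpha when the null is false

# Bayes' rule: probability the null is true GIVEN a significant result.
p_sig = prior_h0 * alpha + (1 - prior_h0) * power
p_h0_given_sig = prior_h0 * alpha / p_sig
print(round(p_h0_given_sig, 3))  # 0.091 -- not .05
```

Even with these generous assumptions, the chance that a significant result reflects a true null is about 9%, nearly double the nominal alpha, which is the kind of confusion the Pollard and Richardson article unpacks.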

Interesting discussion!

Mike Tagler
Department of Psychology
Kansas State University



---
You are currently subscribed to tips as: [EMAIL PROTECTED]
To unsubscribe send a blank email to
[EMAIL PROTECTED]

