All you statisticians out there correct me if I'm wrong or misreading the
question, but...
Have you tried adding up all the deviation scores? They sum to zero. In
fact, this is part of how I explain the standard deviation in an intuitive
way. Thankfully, one stats professor explained it clearly to me this way
when I was an undergrad, after several others had utterly confused me. (By
the way, the following works quite well with students who take the
non-mathematical "physics for poets" style of course, e.g., those with math
anxiety. For the other students, the mathematical approach is presented
well in my intro text by Kalat.)
Like the mean, the SD is a summary score (descriptive stat). It is a
summary, or average, of the distances of the scores from the mean. The mean
summarizes the central tendency while the SD summarizes the spread of the
curve. [Here I draw several contrasting curves where the means are the same,
but the SDs are quite different, e.g., wide vs. narrow.] You take the
difference from the mean for each score, and then we would LIKE to average
them. [Here I draw a curve with a mean of 12 and an SD of 4, drawing lines
at +1 SD and -1 SD and explaining that these are the difference scores of
+4 and -4 for two students whose test scores were 16 and 8,
respectively.] Then I ask: what number for
the average do you suppose that we'd get? Almost always at least one person
realizes that you'd get zero. Then I say, very true! So, we use a TRICK!
We square each difference score to eliminate the negative sign and THEN find
the average. Finally, we "undo" our trick by taking the square root.
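For anyone who wants to hand students something concrete, the steps above can be sketched in a few lines of Python. (The scores here are made up to match the mean-of-12, SD-of-4 example, and the population formula, dividing by N, is assumed, as most intro texts do at this stage.)

```python
import math

# Made-up scores chosen so the mean is 12 and the SD is 4,
# matching the example drawn on the board.
scores = [8, 16, 8, 16]
mean = sum(scores) / len(scores)          # 12.0

deviations = [x - mean for x in scores]   # the +4s and -4s
print(sum(deviations))                    # 0.0 -- the raw deviations cancel out

# The "trick": square to remove the signs, average, then undo with a square root.
variance = sum(d ** 2 for d in deviations) / len(deviations)
sd = math.sqrt(variance)
print(sd)                                 # 4.0
```

The zero printed first is exactly the dead end the students discover; the squaring-then-rooting shows why the final answer lands back on the original scale of the scores.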
Keep in mind that this explanation is for intro psych. I expect no math
greater than the above; moreover, this is merely a piece in my explanation
to them about the interrelationship between effect size, sample size, and
S.D. This is given during the second chapter about research methods and
ties in quite nicely with "cause and effect." More importantly, the full
spiel immediately helps them to understand how in the world
psychologists can test probabilities in domains never before examined.
Rather than seeming too difficult, the entire job can be presented in a way
that's intuitive, if the math is left out (except for the SD as above). My
feeling is that we should start early with the idea of cause and effect,
including effect size, if we are ever to generate psychologists who feel as
comfortable with estimates of effect sizes as they are with p values.
I'm hoping that I didn't misread the question! Hope this is useful.
Christian Hart, Ph.D.
Assistant Professor of Psychology
Department of Behavioral Studies
Santa Monica College
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 22, 2000 12:20 PM
To: [EMAIL PROTECTED]
Subject: standard deviation
Greetings to all stats historians out there
A student asked today, upon starting discussion of the standard deviation,
why we do not simply use the average (mean) absolute value of the deviation
scores, rather than taking the square root of the mean of the squared
deviations. Many
introductory stats texts provide no explicit rationale or history of the
standard deviation formula. The mean absolute value of the deviation scores
would seem to be a reasonable descriptive measure of the variability of the
scores.
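For what it's worth, the student's proposal (the mean absolute deviation) and the usual standard deviation can be compared directly on a small made-up data set; this Python sketch assumes the population denominator N.

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data set
mean = sum(scores) / len(scores)   # 5.0

# The student's proposal: average the absolute deviations.
mad = sum(abs(x - mean) for x in scores) / len(scores)

# The standard deviation: square, average, then take the square root.
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))

print(mad, sd)  # 1.5 2.0
```

Both are legitimate measures of spread; the SD comes out larger here because squaring weights the big deviations (like the score of 9) more heavily than the absolute value does.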
TIA for any help you can provide.
Linda Tollefsrud
University of Wisconsin - Barron County
1800 College Drive
Rice Lake, WI 54868-2497
[EMAIL PROTECTED]
(715) 234-8176 ext. 5417