On Sun, 07 Sep 2008 13:17:10 -0700, Annette Taylor wrote:
>(1) I talk with my intro students about the misconception that 
>mentally ill people generally have a history of violence. 

One should probably start by asking what one means by

(a) "mentally ill"
and
(b) "violence"

If mentally ill is being used as a synonym for "crazy" then
it is a meaningless term.  If there is some specific diagnosis,
such as one from the DSM, then one should use it.  Perhaps
when one thinks of mental illness, one imagines a person 
having a psychotic episode in which they lash out at
random.  However, given that many serial killers have
antisocial personality disorder and high degrees of psychopathy,
one might call them mentally ill, but their everyday presentation
is usually one of normal functioning, even charm and
gregariousness.  Who is more mentally ill:  the person with the
psychotic episode or the person with antisocial personality?
Consider:  was Tony Soprano mentally ill or just "neurotic"?

As for violence, it should be remembered that everyone is
capable of physically hurting and killing other people, and
under certain circumstances, such as wars or "police actions",
violent behavior is not only expected but desired.  One
part of becoming a member of the armed forces is to learn
how to kill others efficiently (a point that Kurt Vonnegut was
fond of reminding others of, a point that Stanley Kubrick
emphasized in "Full Metal Jacket", a point that Ken Burns
makes in his documentary series "The War" when soldiers,
once they got over their initial reluctance to kill the "enemy",
developed pride in becoming efficient killers).  So, what
exactly does one mean when one is referring to violence?
Socially acceptable forms of violence?  Socially prohibited
forms of violence -- and if prohibited, prohibited by whom?

>And the research evidence seems to support this. But in 
>thinking about where the misconception comes from, would 
>it not be correct to say that most people with a history of 
>violence have had a mental illness? 

Presumably the armed forces screen out people with mental
illness, so only "normal" people get taught how to be efficient killers.
Or are we only talking about "amateur" practitioners of violence?

>In other words, could one be violent or have unmotivated 
>violence and not be mentally ill?

After selecting appropriate definitions for the terms, one will
likely conclude that it is not only possible but necessary.
That's why companies like "Blackwater Worldwide" are used
for security by the U.S. in Iraq and other places; see:
http://en.wikipedia.org/wiki/Blackwater_USA

Last I checked, I didn't hear anyone calling mercenaries mentally
ill.  They might be crazy though.

>A more technical set of questions
>
>(1) Is it proper to talk about independent and dependent variables 
>in a correlational study? 

No.

>And to what extent? Isn't it *more* correct to call the variables 
>predictor and criterion variables? What is the current status of this 
>language?

The terms "predictor" and "criterion" are somewhat old-fashioned
and imply the use of multiple correlation analysis.  It may be useful
to make some distinctions:

(a) the term "independent variable" has been used in experimental
design as referring to a variable selected/manipulated by a researcher.
However, people in the social sciences (e.g., economics, sociology)
have used the term when they wanted to identify a "causal variable",
that is, a variable that causes changes in an outcome measure.  So,
the relative strength of the dollar might be seen as a causal factor in
reducing bond prices or the prices of commodities but it wouldn't
be an independent variable in the experimenter's sense.  This usage,
I believe, has diminished/disappeared because of the next point.

(b) In the area of "Structural Equation Modeling" (SEM), researchers
often deal with correlational datasets where some variables are
identified as "causal" and others are identified as "outcome". Causal
variables are called "exogenous" variables and outcome variables
are called "endogenous" variables.  There are many sources for SEM,
one of which is the website provided by Ed Rigdon who maintains
the SEM-L mailing list; see:
http://www2.gsu.edu/~mkteer/sem2.html

Although many psychologists have been taught to think about causality
only in terms of experimental designs, it is useful to keep in mind
that there are many situations where experimentation is not practical
but one can make systematic observations that may reveal causal
relationships.  Astronomy, of course, is based on this.
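To make the exogenous/endogenous distinction concrete, here is a small
sketch (illustrative only, with simulated data and ordinary least-squares
path estimates -- not a full SEM fit): x is exogenous, and m and y are
endogenous in the chain x -> m -> y.

```python
import random

random.seed(0)
n = 1000

# Simulate a simple causal chain: x (exogenous) -> m -> y (endogenous).
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 1) for xi in x]
y = [0.5 * mi + random.gauss(0, 1) for mi in m]

def ols_slope(pred, out):
    """Least-squares slope of out regressed on pred."""
    mp = sum(pred) / len(pred)
    mo = sum(out) / len(out)
    sxy = sum((p - mp) * (o - mo) for p, o in zip(pred, out))
    sxx = sum((p - mp) ** 2 for p in pred)
    return sxy / sxx

# The estimated path coefficients recover the generating values
# (roughly 0.6 and 0.5), even though the data are purely observational.
print(ols_slope(x, m))
print(ols_slope(m, y))
```

The point is only that "causal" status comes from the model one specifies,
not from whether the researcher manipulated anything.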

>(2) I have learned that a rule of thumb for evaluating the effect size 
>of a significant correlation is to square r and this is a crude indicator 
>of how much of the variability in the criterion variable comes from 
>the predictor variable. 

Things have become much more complicated, and today many
people doing meta-analysis would use "r" instead of "r squared".
Rosenthal and DiMatteo (Annual Review of Psychology 2001)
make the following statements:
|There are two main families of effect sizes, the r family 
|and the d family.
|
|The r family of product moment correlations includes Pearson r 
|when both variables are continuous, phi when both variables 
|are dichotomous, point biserial r when one variable is continuous 
|and one is dichotomous, and rho when both variables are in 
|ranked form, as well as Zr, the Fisher transformation of r.
|
|This family also includes the various squared indices of r and 
|related quantities, such as r2, omega squared, epsilon squared, 
|and eta squared. Squared indices are problematic, however, 
|because they lose their directionality (although this can be retrieved 
|through careful analysis of the findings), and the practical magnitude 
|of these indices is often misinterpreted. In an example regarding the 
|latter problem, it may be concluded that one percent of the variance 
|in a dependent variable owing to the independent variable is too 
|little to matter. However, if the independent variable is a very 
|inexpensive and safe intervention, and the dependent variable 
|involves saving lives [as was the case in research on prevention 
|of heart attacks with low-dose aspirin (Rosenthal & Rosnow 1991)], 
|the percentage of variance explained may be very small, but its 
|implications might be quite substantial.
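A small worked example may help here (made-up toy numbers, just to show
the mechanics): Pearson r is easy to compute directly, point-biserial r
is simply Pearson r with the dichotomous variable coded 0/1, and squaring
r does indeed throw away the sign, as Rosenthal and DiMatteo note.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Two continuous variables with a negative relationship (toy data).
hours_tv = [1, 2, 3, 4, 5, 6]
exam     = [90, 85, 80, 70, 65, 55]
r = pearson_r(hours_tv, exam)

# Squaring gives "variance explained" but discards the direction:
print(r)        # negative
print(r ** 2)   # positive -- the sign is lost

# Point-biserial r is just Pearson r with one variable coded 0/1.
group = [0, 0, 0, 1, 1, 1]
score = [4, 5, 6, 7, 8, 9]
print(pearson_r(group, score))
```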

>I'd like to hear if this is too crude to be useable. Is there another, 
>readily calculable effect size? 

Folks like Rosenthal would say just use "r".

>I am very bothered by studies that make a big deal of a significant 
>correlation of .2 or .3.

This is going to sound *SO* wrong but I'll say it anyway:

It's not the size of the "r" that's important, it's what you do with it.

Rosenthal points out (see aspirin example above) that a small
effect can still have significant practical implications but it is 
more important to realize that the real "significance" of a result
is the role it plays in a theoretical explanation.  If a theory predicts
that there should be a zero correlation between two or more 
variables, then finding a significant correlation, regardless of its
size, is important because it falsifies the theory.  If a result allows
one to reject a theory, it doesn't matter how large or small the
critical finding is.  Even small results can be important.
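The aspirin example can be worked out numerically.  The counts below are
the commonly cited figures from the 1988 Physicians' Health Study report
(quoted from memory, so treat them as illustrative); phi, the Pearson r
for a 2x2 table, comes out near .03, yet the raw risk comparison is
anything but trivial.

```python
import math

# Rows: (heart attack, no heart attack) -- commonly cited 1988 counts.
aspirin = (104, 10933)
placebo = (189, 10845)

a, b = aspirin
c, d = placebo

# Phi coefficient: Pearson r computed on a 2x2 contingency table.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(abs(phi))     # about .03 -- "only" ~0.1% of variance explained...
print(phi ** 2)

# ...yet the placebo group's heart-attack risk is nearly double:
risk_aspirin = a / (a + b)
risk_placebo = c / (c + d)
print(risk_placebo / risk_aspirin)
```

So an r-family effect size that looks negligible by the "percent of
variance" yardstick can correspond to a large difference in outcomes
that matter.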

-Mike Palij
New York University
[EMAIL PROTECTED]




---
To make changes to your subscription contact:

Bill Southerly ([EMAIL PROTECTED])
