On 24 Dec 2003 15:06:00 -0800, [EMAIL PROTECTED] (Chih-Mao Hsieh) wrote:
[ concerning Herman Rubin's note, on valuing linearity 
over  'normality'  for factor analysis.]
> If what is meant by linearity is basically left truncation, then I have understood 
> it...
>  

No.

Looking back at my previous response, I see that
you had raised an issue about zeros.

I think that Herman was saying --
All these variables are expected to be, by their 
measurement as well as their nature, generally 
"linear"  in predicting each other.  

That is often difficult to achieve with "counts",
where there is a conflict between (a) the Poisson sort
of variability -- the variance of a count grows with
its mean, so one might expect square roots of counts
to have the better scaling property -- and (b) the
unit-per-unit increase that can be inherent in the
quantities being measured.
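
Here is a quick check of that square-root point, as a
minimal sketch (Python with numpy is my own choice of
tool, and the two means are made up for the
illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Poisson counts at two different means: the raw
    # variance tracks the mean, so the two sets of
    # counts are not on a comparable scale.
    low  = rng.poisson(lam=4,  size=100_000)
    high = rng.poisson(lam=64, size=100_000)
    print(low.var(), high.var())    # near 4 and near 64

    # After the square root, both variances settle
    # close to 1/4 (closer as the mean grows) -- the
    # "better scaling property" mentioned above.
    print(np.sqrt(low).var(), np.sqrt(high).var())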

What comes to my mind is the similar difficulty that I see
in constructing certain sorts of models of utility, in
economics:  In terms of "importance", and of magnitudes
of change over time, it is desirable to take the logarithm
of dollar amounts.  Doubling the dollars does not double
the satisfaction.  However, what you can *purchase*
with 'dollars' is apt to increase in that dumb linear
fashion, or even faster.
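
A small worked version of that tension (the dollar
amounts and the fixed price below are invented for
the example):

    import numpy as np

    dollars = np.array([1_000, 2_000, 4_000, 8_000])

    # On the log scale, every doubling is the same
    # step: doubling the dollars does not double the
    # "satisfaction".
    print(np.diff(np.log(dollars)))  # constant, log(2) ~= 0.693

    # But what the dollars *purchase* stays linear:
    # twice the dollars buys twice the units, at a
    # fixed price per unit.
    price_per_unit = 10.0
    print(dollars / price_per_unit)  # 100, 200, 400, 800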

By the way, back to the original post:  IF the result of
a test for 'non-normality' is 'significant' only by virtue
of the power of many thousands of observations --
that is, only because the sample is large enough to
detect a departure too small to matter -- that would
be even further reason to discount the concern about
normality.
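
To see that effect concretely, here is a hedged
sketch (Python with scipy; the nearly-normal t
distribution is my own stand-in for 'trivially
non-normal' data, not anything from the original
post):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # A t distribution with 20 df is only trivially
    # non-normal (slightly heavy tails).
    x_small = rng.standard_t(df=20, size=100)
    x_large = rng.standard_t(df=20, size=100_000)

    # D'Agostino-Pearson omnibus test of normality:
    # the same shape passes at n = 100 yet is declared
    # 'significant' at n = 100,000 -- pure sample size.
    print(stats.normaltest(x_small).pvalue)  # typically > 0.05
    print(stats.normaltest(x_large).pvalue)  # typically tiny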

[ snip, rest]
-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
"Taxes are the price we pay for civilization." 