David Heiser wrote:
>
> CR I guess is now history. I followed his CR stuff from the beginning on
> SEMNET. It is too bad he never adapted or changed or really listened or
> learned or moved on from the dialogs.
>
I still have some things in mind, in particular its sensitivity to certain
distributional properties. If you factorize a correlation matrix
using the CR calculus, you can identify a rotational position
where the distribution of the factor scores approximates the kind of
distribution that CR deals with.
BC's basic examples always had two uniform factors. One was assumed
to be measured exactly (in X1), and the other only through a composite of both (Y1).
   model                     measured    estimated as residual by regression
   f1              --->      X1          X2 <--- f2                (residual of Y1 <-- X1)
   b1*f1 + b2*f2   --->      Y1          Y2 <--- b2*f1 - b1*f2     (residual of X1 <-- Y1)
The CR method is sensitive to the joint distributions, which differ between
the pairs (X1,X2) and (Y1,Y2), as the simulation sketch after this list illustrates:
* the scatterplot of X1,X2 is rectangular (the distribution in the x- as well as in
the y-direction is uniform),
* the scatterplot of Y1,Y2 is a diamond (the distribution in the x- as well as in
the y-direction is triangular).
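As an illustration of that difference (my own numpy sketch, not BC's CR code; the
loadings b1 = 0.6, b2 = 0.8 are assumed example values), one can simulate the setup
and compare the excess kurtosis of the margins: a uniform margin sits near -1.2, a
composite of two uniforms is clearly closer to 0.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    f1 = rng.uniform(-1, 1, n)          # first uniform factor, measured exactly as X1
    f2 = rng.uniform(-1, 1, n)          # second uniform factor
    b1, b2 = 0.6, 0.8                   # assumed loadings with b1**2 + b2**2 = 1

    X1 = f1
    Y1 = b1 * f1 + b2 * f2              # composite of both factors

    def residual(y, x):
        """Residual of the simple regression y <- x."""
        slope = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]
        return (y - y.mean()) - slope * (x - x.mean())

    X2 = residual(Y1, X1)               # ~ b2*f2               -> uniform margin
    Y2 = residual(X1, Y1)               # ~ b2*(b2*f1 - b1*f2)  -> triangular-ish margin

    def excess_kurtosis(v):
        z = (v - v.mean()) / v.std()
        return (z ** 4).mean() - 3.0

    for name, v in [("X1", X1), ("X2", X2), ("Y1", Y1), ("Y2", Y2)]:
        print(name, round(excess_kurtosis(v), 3))
    # X1, X2 come out near -1.2 (uniform); Y1, Y2 are clearly closer to 0
    # (about -0.65 with these loadings): the (X1,X2) cloud is rectangular,
    # the (Y1,Y2) cloud is a diamond.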
If you factor the X1,X2,Y1,Y2 correlation matrix and also include the CR-conformal
variables CR_x1, CR_x2, ..., you can rotate uniquely to the position where the scores
of the located factors have the "most uniform" distribution ("most uniform" in the
sense of the criteria defined by the CR calculus).
That is the rational kernel of the effect that made BC claim that CR could detect
a causal direction.
So - going a step further in CR - even if you measure both the X- and the Y-variables as
composites of f1 and f2,
X1 = a1*f1 + a2*f2
Y1 = b1*f1 + b2*f2
you can localize f1 and f2 in such a way that they have the "most uniform"
distribution (*1: see the sketchy argument below).
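For illustration only (this is not the CR calculus itself): a numpy sketch that treats
"most uniform" as "lowest excess kurtosis" (a uniform margin has excess kurtosis of
about -1.2) and grid-searches the rotation of the whitened X1,Y1 scores for that
position. The loadings a1, a2, b1, b2 and the kurtosis criterion are my own assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000
    f = rng.uniform(-1, 1, (2, n))                 # two independent uniform factors
    A = np.array([[0.9, 0.4],                      # X1 = 0.9*f1 + 0.4*f2 (assumed)
                  [0.3, 0.95]])                    # Y1 = 0.3*f1 + 0.95*f2 (assumed)
    X = A @ f

    # whiten: afterwards the factors differ from the coordinates only by a rotation
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / n
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = W @ Xc

    def mean_excess_kurtosis(M):
        Ms = (M - M.mean(axis=1, keepdims=True)) / M.std(axis=1, keepdims=True)
        return ((Ms ** 4).mean(axis=1) - 3.0).mean()

    def rotate(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]]) @ Z

    # grid search for the "most uniform" (lowest mean excess kurtosis) position
    angles = np.linspace(0, np.pi / 2, 181)
    best = min(angles, key=lambda t: mean_excess_kurtosis(rotate(t)))
    G = rotate(best)

    print("recovered-vs-true factor correlations:")
    print(np.round(np.corrcoef(np.vstack([G, f]))[:2, 2:], 3))
    # each true factor should correlate near +-1 with one recovered component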
But - that is only useful if you have reason to assume that your population
factors are both uniform and uncorrelated, that your measured items are relatively
error-free, and that not too many factors are involved. If I recall it correctly from my
fiddling with this last year, with composites of 4 or more factors the
exploitable differences between the distributional properties become too small
to give good results (but maybe I don't recall that correctly at the moment).
Gottfried Helms
----------------------
*1)
If you include CR-conformal squared values of the measured items and factorize them
(full components), you get something like the following factor loadings:
X1² = 1*f1²    + 0         + 0
X2² = 0        + 1*f2²     + 0
Y1² = b1²*f1²  + b2²*f2²   + 2*b1*b2*f1*f2
Y2² = b2²*f1²  + b1²*f2²   - 2*b1*b2*f1*f2
This is nearly a quartimax or varimax position.
Now if the X1,X2 variables, like Y1,Y2, are also composites of f1 and f2,
then a varimax would find the position where the factors are best separated.
Such a configuration can occur when your measured X1 already stems from a combination
of two factors, and your measured Y1 as well.
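A quick numerical check of that loading pattern (again my own numpy sketch, with the
assumed example values b1 = 0.6, b2 = 0.8): regress the squared items on f1², f2²
and f1*f2 and compare the coefficients with the rows above.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000
    f1 = rng.uniform(-1, 1, n)
    f2 = rng.uniform(-1, 1, n)
    b1, b2 = 0.6, 0.8                               # assumed example loadings

    items = {
        "X1": f1,
        "X2": f2,
        "Y1": b1 * f1 + b2 * f2,
        "Y2": b2 * f1 - b1 * f2,
    }
    D = np.column_stack([f1**2, f2**2, f1 * f2])    # "squared factor" design matrix

    for name, v in items.items():
        coef, *_ = np.linalg.lstsq(D, v**2, rcond=None)
        print(name + "^2:", np.round(coef, 3))
    # expected rows:  X1^2: [1, 0, 0]        X2^2: [0, 1, 0]
    #                 Y1^2: [b1^2, b2^2,  2*b1*b2] = [0.36, 0.64,  0.96]
    #                 Y2^2: [b2^2, b1^2, -2*b1*b2] = [0.64, 0.36, -0.96]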
FWIW ...