Gus,

I am glad that we have both determined that the confidence bands are
narrower towards the extremes of the predictor when the predictor is y and
the predicted variable is x1. Conversely, when the predictor is x1 and the
predicted variable is y, the confidence band is wider at the extremes of
the predictor. I believe this asymmetry in confidence bands allows us to
infer causation.

You say that you have created subsamples of data in which this does not
occur, but you do not make it clear how you created the subsamples. You say
that, since CR only works with uniformly distributed causes, it is invalid.
But I have argued for some time that causes should be sampled uniformly.
Nunnally and others agree with me. The whole set of posts concerning the
absurdity of normally distributed cell sizes in an ANOVA supports my
thesis. So you are wrong to use normality as a means of invalidating CR.

You must be more explicit about how you created the subsamples if I am to
know what you did.
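
The error pattern Bill describes can be checked with a short simulation.
This is only a sketch, assuming uniform x1 and x2 on [-1, 1] as in the
thread's setup; the helper name is illustrative, not anyone's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = x1 + x2  # y is the effect of the two uniform causes

def abs_resid_vs_abs_pred(pred, target):
    # Least-squares fit target = b0 + b1 * pred, then correlate the
    # absolute residual with the absolute value of the predictor.
    b1, b0 = np.polyfit(pred, target, 1)
    resid = target - (b0 + b1 * pred)
    return np.corrcoef(np.abs(pred), np.abs(resid))[0, 1]

# Predicting the cause x1 from the effect y: errors shrink towards the
# extremes of y, so this correlation comes out clearly negative.
print(abs_resid_vs_abs_pred(y, x1))

# Predicting the effect y from the cause x1: the residual is essentially
# x2, independent of x1, so this correlation is near zero.
print(abs_resid_vs_abs_pred(x1, y))
```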

Best,

Bill



"Gus Gassmann" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> [EMAIL PROTECTED] wrote:
>
> > Gus,
> >
> > Have you tried predicting x1 from y and checking the residuals
> > (errors)? Be sure to use uniform x1 and x2 when generating
> > y = x1 + x2. Then generate the residual by using y as the predictor of
> > x1. The residual will equal x2 (when the predictor is the effect).
> > Then look at the relationship between the extremity of the predictor
> > (y) and the absolute value of the residual (error). I think you will
> > find the error actually DECREASES as we move from the mean towards the
> > extremes of the predictor (y). Thus flipping the regression prediction
> > variables around will lead to very different error patterns.
>
> That last statement is correct. In the regression y = b0 + b1 x1 + error,
> the error has essentially the same distribution as x2: uniform (on [-1,1]
> in my test), while the error in the regression x1 = b0 + b1 y + error
> behaves very differently, essentially filling out a diamond with vertices
> at (-2,0), (0,-1), (2,0), (0,1) if plotted against y.
>
> > The reduction in errors when predicting x1 from y is the basis of
> > corresponding correlations. Predicting from the effect decreases
> > errors in the predictor's extremes. Predicting from the cause
> > increases errors in the extremes of the predictor. The asymmetry
> > allows us to detect which is the cause and which is the effect.
>
> But this is plain wrong. First off, the effect disappears when you use
> normal distributions in place of the uniform. Moreover, you can give the
> appearance of any "cause" you care to show, because you bias the results
> in a certain direction. I performed the following experiment:
>
> I generated a large sample (1,000,000) of uniform x1 and x2 and computed
> y = x1 + x2. The residuals and correlations behave as you predicted,
> leading you to the conclusion that y is caused by x1 (and x2).
>
> Then I selected a subsample from this in such a way that (x2, y) are
> uniformly distributed. (This takes some doing, but it is possible.) With
> this data set you'd come to the conclusion that the cause is y!
>
>
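
Gus does not say how he constructed his subsample, but one construction
that works under the uniform setup above is possible (a sketch; this is
not necessarily his method). The joint density of (x2, y) is constant on
the parallelogram |x2| <= 1, |y - x2| <= 1, so keeping only the points
with |x2| <= 0.5 and |y| <= 0.5 leaves (x2, y) uniform, and in fact
independent, on a square. The same residual diagnostic then points the
other way:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = x1 + x2

# Restrict to a square on which the joint density of (x2, y) is
# constant, so that (x2, y) become uniform and independent.
keep = (np.abs(x2) <= 0.5) & (np.abs(y) <= 0.5)
x1s, ys = x1[keep], y[keep]

# In the subsample, x1 = y - x2 with y and x2 independent uniforms, so
# x1 now plays the role of the "effect". Regress y on x1 and correlate
# |residual| with |x1|: the correlation is clearly negative, which by
# the CR criterion would make y look like a cause of x1.
b1, b0 = np.polyfit(x1s, ys, 1)
resid = ys - (b0 + b1 * x1s)
c = np.corrcoef(np.abs(x1s), np.abs(resid))[0, 1]
print(c)
```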



=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
