[EMAIL PROTECTED] writes:
>Yet r is estimated by averaging the estimates of r
>across multiple imputations. In general, these estimates will not agree:
>if r>0, then the estimate of R^2 will be less than the squared estimate of
>r. If the estimator of r is unbiased, then the proposed estimate of R^2
>must be biased.

Let me say that the problem is similar to trying to estimate the unit
variance of a variable from several studies.  Should you average variances
or standard deviations?
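To make that analogy concrete, here is a quick sketch in Python/NumPy (the sample sizes, seed, and number of studies are arbitrary choices for illustration). It shows that the two pooling rules, averaging variances versus averaging standard deviations, generally give different answers, since sqrt(mean variance) >= mean of SDs by Jensen's inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
# Five hypothetical study samples drawn from a unit-variance population.
samples = [rng.normal(0, 1, size=200) for _ in range(5)]

variances = np.array([s.var(ddof=1) for s in samples])
sds = np.sqrt(variances)

# Two candidate pooled estimates of the population SD:
sd_via_variances = np.sqrt(variances.mean())  # average variances, then take root
sd_via_sds = sds.mean()                       # average the SDs directly

print(sd_via_variances, sd_via_sds)
# By Jensen's inequality the first can never be smaller than the second.
assert sd_via_variances >= sd_via_sds
```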

In your case, both r and R-square have the property of being unaffected if
every unit is repeated the same number of times (though the confidence
intervals will be affected).  Thus if you had ten identical data sets and
concatenated them, your correlations would be the same as for any one of
them.  Now suppose the ten were not identical but differed in the imputed
values.  Intuitively, I would think the r and R-square that result from
concatenating the ten data sets would be a consistent (though perhaps not
unbiased) estimate.
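A numerical sketch of the concatenation idea, in Python/NumPy (the data, the number of imputations, and the imputation-noise level are all hypothetical, chosen only to illustrate the point):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Correlation is invariant to repeating every unit the same number of times:
# ten concatenated identical copies give the same r as one copy.
r_one = np.corrcoef(x, y)[0, 1]
r_ten = np.corrcoef(np.tile(x, 10), np.tile(y, 10))[0, 1]
assert np.isclose(r_one, r_ten)

# Now mimic ten imputed data sets: same x, but y perturbed by
# (hypothetical) imputation noise in each copy.
datasets = [(x, y + rng.normal(scale=0.3, size=n)) for _ in range(10)]

# Rule 1: average the per-data-set correlations.
r_each = [np.corrcoef(xi, yi)[0, 1] for xi, yi in datasets]

# Rule 2: concatenate the ten data sets and compute a single r.
x_cat = np.concatenate([xi for xi, _ in datasets])
y_cat = np.concatenate([yi for _, yi in datasets])
r_cat = np.corrcoef(x_cat, y_cat)[0, 1]

print(np.mean(r_each), r_cat)  # the two pooling rules need not agree
```

The replication check at the top confirms the invariance claim above; the second half shows the two pooling rules as distinct point estimates one could compare.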

I am not sure whether you should convert both variables to z scores within
each data set first.  But the approach of combining data sets to obtain a
single point estimate seems reasonable.


