> Hi, I have a question on meta-analysis with 'metafor'.
> I would like to calculate the standardized mean difference (SMD), as
> Hedges' g, in pre-post design studies.
> I have data on baseline (sample size, mean and SD in both the
> experimental
> and the control group) and at end of treatment (same as before).
> The 'metafor' site reports a calculation based on Morris (2008).
> However, I would like to calculate the SMD as in Comprehensive
> Meta-analysis according to Borenstein:
> 
> d = (mean.pre - mean.post) / SD_within
> 
> SD_within = SD.diff / sqrt(2*(1-r))

Note that this assumes that the SDs are the same at baseline and at the end of 
the treatment. Also, it is not how d values for pre-post designs are typically 
computed. There are several articles that describe various approaches, in 
particular:

Becker, B. J. (1988). Synthesizing standardized mean-change measures. British 
Journal of Mathematical and Statistical Psychology, 41(2), 257-278.

Gibbons, R. D., Hedeker, D. R., & Davis, J. M. (1993). Estimation of effect 
size from a series of experiments involving paired comparisons. Journal of 
Educational Statistics, 18(3), 271-279.

Morris, S. B. (2000). Distribution of the standardized mean change effect size 
for meta-analysis on repeated measures. British Journal of Mathematical and 
Statistical Psychology, 53(1), 17-29.

Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in 
meta-analysis with repeated measures and independent-groups designs. 
Psychological Methods, 7(1), 105-125.

Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control 
group designs. Organizational Research Methods, 11(2), 364-386.

The two approaches that have been most thoroughly studied and described are:

d = (mean.pre - mean.post) / SD.diff

(standardization by the change score SD) and

d = (mean.pre - mean.post) / SD.pre

(standardization by the pre-test SD; one could also use the post-test SD).
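As a minimal sketch, the two standardizations can be computed directly from the summary statistics; all numbers below are hypothetical, for illustration only:

```r
# Hypothetical single-study summary values (made up for illustration)
m.pre   <- 20;  m.post  <- 15   # means at baseline and end of treatment
sd.pre  <- 6;   sd.post <- 7    # corresponding SDs
r       <- 0.5                  # pre-post correlation

# SD of the change scores, reconstructed from the pre/post SDs and r
sd.diff <- sqrt(sd.pre^2 + sd.post^2 - 2*r*sd.pre*sd.post)

d.change <- (m.pre - m.post) / sd.diff  # change-score standardization
d.raw    <- (m.pre - m.post) / sd.pre   # raw-score (pre-test SD) standardization
```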

The method described in the book is a bit of a juxtaposition, where SD.diff is 
'corrected' by 1/sqrt(2*(1-r)), which is identical to SD.pre (or SD.post) when 
SD.pre = SD.post. But that's never exactly the case. Plus I am not aware of any 
proper derivations of the large-sample distribution of d computed in this 
manner.
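The point about SD.pre = SD.post is easy to verify numerically. A small sketch (hypothetical values):

```r
r <- 0.5

# Case 1: equal SDs -- the 'corrected' SD.diff reduces to sd.pre
sd.pre <- 6; sd.post <- 6
sd.diff   <- sqrt(sd.pre^2 + sd.post^2 - 2*r*sd.pre*sd.post)
sd.within <- sd.diff / sqrt(2*(1-r))    # = 6 = sd.pre

# Case 2: unequal SDs -- sd.within matches neither sd.pre nor sd.post
sd.post   <- 7
sd.diff   <- sqrt(sd.pre^2 + sd.post^2 - 2*r*sd.pre*sd.post)
sd.within <- sd.diff / sqrt(2*(1-r))    # about 6.56, between 6 and 7
```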

> r = correlation between pairs of observations (often it is not reported,
> and the suggestion is to use r = 0.70)

Suggested where? I hope this is not a general "if you don't know the 
correlation, just use .70" suggestion, because that would be nonsense. If you 
don't know r for the sample, then you could try to make a reasonable guess that 
is informed by the characteristic or attribute that is being measured (some 
things are much more stable than other things) and the time lag between baseline 
and the follow-up measurement. Also, if some treatment happens between baseline 
and follow-up -- and some people are more likely to respond to the treatment 
than others -- then this is likely to reduce the correlation to some extent, 
depending on how much variability there is in treatment responses. These are at 
least some of the considerations that should go into making a proper guess 
about r.

> The variance of d (Vd) is calculated as (1/n + d^2/(2n)) * 2*(1-r), where
> n = number of pairs

As mentioned above, I am not aware of any derivation of that equation. One can 
show that 2(1-r)/n + d^2/2n is an estimate of the asymptotic sampling variance 
of d when d is computed as (mean.pre - mean.post) / SD.pre (or with SD.post). 
So, when d is computed in the manner above, it is a bit like (mean.pre - 
mean.post) / SD.pre -- except for the way that SD.pre is actually estimated. 
So, if anything, the equation should look more like the one above and not the 
one in the book. I have actually communicated with Michael and Larry (Hedges) 
about this and Michael indicated that changes may need to be made to CMA.
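For reference, that asymptotic variance (for d standardized by SD.pre) is straightforward to compute; the values below are hypothetical:

```r
# Hypothetical values: n pairs, pre-post correlation r, observed d
n <- 30; r <- 0.5; d <- 0.8

# Asymptotic sampling variance of d = (mean.pre - mean.post) / SD.pre
v.d <- 2*(1-r)/n + d^2/(2*n)
```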

> To derive Hedges' g from d, the correction 'J' is used:
> 
> J = 1 - 3/(4*df - 1), where df = degrees of freedom, which in two
> independent groups is n1+n2-2
> 
> Essentially, J = 1 - 3/(4*(n1+n2-2) - 1)
> 
> Ultimately, g = J x d, and variance of g (Vg) = J^2 x Vd
> 
> I had some hints from Wolfgang Viechtbauer, but I'm stuck here
> (essentially because of my poor programming abilities).
> I was stuck on applying Wolfgang's hint to my dataset.
> Probably I'm doing something wrong. However, what I get is not what I
> found with Comprehensive Meta-Analysis.
> In CMA I've found g = -0.49 (95%CI: -0.64 to -0.33).

You won't get the same thing, as CMA does what is described in the book, but 
that's not what metafor does (due to the reasons described above).
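For what it's worth, the small-sample correction quoted above is simple to apply by hand; the numbers below are hypothetical, and for a single-group pre-post comparison the correction is usually applied with df = n - 1 (number of pairs minus one) rather than n1+n2-2:

```r
# Hypothetical d and variance from a pre-post study with n pairs
d   <- 0.76; v.d <- 0.044; n <- 30
df  <- n - 1                 # df for a single-group pre-post comparison
J   <- 1 - 3/(4*df - 1)      # small-sample correction factor
g   <- J * d                 # Hedges' g
v.g <- J^2 * v.d             # variance of g
```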

> Moreover, I do not know how to apply the J correction for calculating the
> Hedges'g.
> My request is: can anyone check the code?
> Can anyone help me add the J correction?
> What should I multiply by J?
> Should I use the final yi and vi as measures of d and Variance of d?

Why don't you just use what escalc() gives you?
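A minimal sketch with made-up pre/post summary data: escalc() returns the bias-corrected standardized mean change (yi) and its sampling variance (vi) directly, so no separate J step is needed.

```r
library(metafor)

# Made-up pre/post summary data for three studies (for illustration only)
dat <- data.frame(m.pre  = c(20, 18, 25), m.post  = c(15, 14, 22),
                  sd.pre = c( 6,  5,  8), sd.post = c( 7,  6,  8),
                  ni = c(30, 25, 40), ri = c(0.5, 0.6, 0.55))

# Standardized mean change with raw-score (pre-test SD) standardization;
# use measure="SMCC" for change-score standardization instead
dat <- escalc(measure="SMCR", m1i=m.pre, m2i=m.post, sd1i=sd.pre,
              ni=ni, ri=ri, data=dat)
```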

> Thank you in advance,
> Antonello Preti

[code snipped]

Best,
Wolfgang

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
