Re: [R] significance test for difference of two correlations

2007-07-26 Thread Viechtbauer Wolfgang (STAT)
Let r_1 be the correlation between the two variables for the first group with 
n_1 subjects and let r_2 be the correlation for the second group with n_2 
subjects. Then a simple way to test H0: rho_1 = rho_2 is to convert r_1 and r_2 
via Fisher's variance stabilizing transformation ( z = 1/2 * ln[ (1+r)/(1-r)] ) 
and then calculate:

(z_1 - z_2) / sqrt( 1/(n_1 - 3) + 1/(n_2 - 3) )

which is (approximately) N(0,1) under H0. So, using alpha = .05, you can reject 
H0 if the absolute value of the test statistic above is larger than 1.96.
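
For concreteness, here is a minimal sketch in R (the correlations and sample 
sizes are made up for illustration):

r1 <- 0.50; n1 <- 100
r2 <- 0.30; n2 <- 120
z1 <- atanh(r1)   # Fisher's z; atanh(r) equals 1/2 * ln[(1+r)/(1-r)]
z2 <- atanh(r2)
zstat <- (z1 - z2) / sqrt(1/(n1 - 3) + 1/(n2 - 3))
zstat                     # compare against +-1.96
2 * pnorm(-abs(zstat))    # or compute the two-sided p-value directly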

-- 
Wolfgang Viechtbauer
 Department of Methodology and Statistics
 University of Maastricht, The Netherlands
 http://www.wvbauer.com/



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Timo Stolz
Sent: Thursday, July 26, 2007 16:13
To: r-help@stat.math.ethz.ch
Subject: [R] significance test for difference of two correlations

 Dear R users,
 
 how can I test whether two correlations differ significantly? (I
 want to show that variables are correlated differently, depending
 on the group a person is in.)
 
 Greetings from Freiburg im Breisgau (Germany),
 Timo Stolz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Regarding Bivariate normal distribution.

2007-07-25 Thread Viechtbauer Wolfgang (STAT)
No, x and y are not unique. In fact, there are infinitely many pairs (x, y) 
that are roots of the equation P[X < x, Y < y] = 0.05.
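
For illustration, a minimal sketch (assuming the mvtnorm package and, 
arbitrarily, a standard bivariate normal with correlation 0.5): for any fixed 
x there is a y solving the equation, so the pair is not unique.

library(mvtnorm)
S <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
## for a given x, find the y with P[X < x, Y < y] = 0.05
f <- function(y, x)
  pmvnorm(lower = c(-Inf, -Inf), upper = c(x, y), mean = c(0, 0), sigma = S) - 0.05
for (x in c(-1, 0, 1)) {
  y <- uniroot(f, interval = c(-10, 10), x = x)$root
  cat("x =", x, " y =", round(y, 3), "\n")
}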

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Arun Kumar Saha
Sent: Wednesday, July 25, 2007 08:58
To: r-help@stat.math.ethz.ch
Subject: [R] Regarding Bivariate normal distribution.


Dear all R gurus,

My question is related to statistics rather than directly to R. Suppose
(X,Y) has a bivariate normal distribution. I want to find two values of X and Y, 
say x and y respectively, such that:

P[X < x, Y < y] = 0.05

My questions are :

1. Can x and y be uniquely found?
2. If so, how can I find them using R?

Your help will be highly appreciated.

Thanks and regards,

__
R-help@stat.math.ethz.ch mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



Re: [R] converting proc mixed to lme for a random effectsmeta-analysis

2007-06-19 Thread Viechtbauer Wolfgang (STAT)
That was going to be my suggestion =)

By the way, lme does not give you the right results because the residual 
variance is not constrained to 1 (and it is not possible to do so).

Best,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bernd Weiss
Sent: Tuesday, June 19, 2007 14:37
To: Lucia Costanzo
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] converting proc mixed to lme for a random effectsmeta-analysis


On 19 Jun 2007 at 8:13, Lucia Costanzo wrote:

Date sent: Tue, 19 Jun 2007 08:13:30 -0400
From: Lucia Costanzo [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Subject: [R] converting proc mixed to lme for a random effects meta-analysis

 I would like to convert the following SAS code for a random-effects 
 meta-analysis model for use in R, but I am running into difficulties.
 The results are not similar: R should be reporting 0.017 for the 
 between-study variance component, 0.478 for the estimated parameter, 
 and 0.130 for the standard error of the estimated parameter. I think 
 it is the weighting causing problems. Would anyone have any 
 suggestions or tips?
 
 Thank you,
 Lucia
 
 *** R CODE ***
 studynum <- c(1, 2, 3, 4, 5)
 y <- c(0.284, 0.224, 0.360, 0.785, 0.492)
 w <- c(14.63, 17.02, 9.08, 33.03, 5.63)
 genData2 <- data.frame(cbind(studynum, y, w, v))
 
 re.teo <- lme(y ~ 1, data=genData2, random = ~1, method="ML",
 weights=varFixed(~w))
 
 


What about using MiMa (http://www.wvbauer.com/downloads.html)?

studynum <- c(1, 2, 3, 4, 5)
y <- c(0.284, 0.224, 0.360, 0.785, 0.492)
w <- c(14.63, 17.02, 9.08, 33.03, 5.63)
## without cbind(...)
genData2 <- data.frame(studynum, y, w)
mima(genData2$y, 1/genData2$w, mods = c(), method = "ML")


Some output:

- Estimate of (Residual) Heterogeneity: 0.0173

- estimate SE   zval  pval   CI_L   CI_U
intrcpt   0.4779 0.1304 3.6657 2e-04 0.2224 0.7334

Looks like what you are looking for...

HTH,

Bernd

__
R-help@stat.math.ethz.ch mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



[R] Boostrap p-value in regression [indirectly related to R]

2007-05-21 Thread Viechtbauer Wolfgang (STAT)
Hello All,

Despite my preference for reporting confidence intervals, I need to
obtain a p-value for a hypothesis test in the context of regression
using bootstrapping. I have read John Fox's chapter on bootstrapping
regression models and have consulted Efron & Tibshirani's An
Introduction to the Bootstrap but I just wanted to ask the experts here
for some feedback to make sure that I am not doing something wrong.

Let's take a simplified example where the model includes one independent
variable and the idea is to test H0: beta1 = 0 versus Ha: beta1 != 0.



### generate some sample data

n  <- 50
xi <- runif(n, min=1, max=5)
yi <- 0 + 0.2 * xi + rnorm(n, mean=0, sd=1)

### fit simple regression model

mod <- lm(yi ~ xi)
summary(mod)
b1  <- coef(mod)[2]
t1  <- coef(mod)[2] / coef(summary(mod))[2,2]

### 1000 bootstrap replications using (X,Y)-pair resampling

t1.star <- rep(NA,1000)

for (i in 1:1000) {

  ids    <- sample(1:n, replace=TRUE)
  newyi  <- yi[ids]
  newxi  <- xi[ids]
  mod    <- lm(newyi ~ newxi)
  t1.star[i] <- ( coef(mod)[2] - b1) / coef(summary(mod))[2,2]

}

### get bootstrap p-value

hist(t1.star, nclass=40)
abline(v=t1, lwd=3)
abline(v=-1*t1, lwd=3)
2 * mean( t1.star > abs(t1) )



As suggested in the chapter on bootstrapping regression models by John
Fox, the bootstrap p-value is 2 times the proportion of bootstrap
t-values (with b1 subtracted so that we get the distribution under H0)
larger than the absolute value of the actual t-value observed in the
data. 

Doesn't this assume that the bootstrap sampling distribution is
symmetric? And if so, would it not be more reasonable to
calculate:

mean( abs(t1.star) > abs(t1) )

or in words: the proportion of bootstrap t-values that are more extreme on
either side of the bootstrap distribution than the actual t-value
observed?

Any suggestions or comments would be appreciated!

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] coefficients regression

2007-05-21 Thread Viechtbauer Wolfgang (STAT)
Try:

regression <- lm(biomass ~ poly(temperature, degree=2, raw=TRUE))

See the help page for poly for what raw=TRUE does.
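
A quick demonstration with made-up data (the variable names mirror the 
question); both fits describe the same curve, but only raw=TRUE reproduces 
the coefficients SPSS reports:

set.seed(42)
temperature <- runif(30, 5, 25)
biomass <- 2 + 0.5*temperature - 0.01*temperature^2 + rnorm(30)
coef(lm(biomass ~ poly(temperature, 2)))              # orthogonal polynomials
coef(lm(biomass ~ poly(temperature, 2, raw = TRUE)))  # raw coefficients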

Best,

-- 
Wolfgang Viechtbauer
 Department of Methodology and Statistics
 University of Maastricht, The Netherlands
 http://www.wvbauer.com/



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Monday, May 21, 2007 10:37
To: r-help@stat.math.ethz.ch
Subject: [R] coefficients regression

 Hi,
 I would like to calculate a polynomial regression with R, but I don't
 get the same coefficients as when using SPSS. Is there a way to
 transform the coefficients?  
 
 I use:
 regression <- lm(biomass ~ poly(temperature, 2))
 
 Thank you,
 Romana Limberger

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] meta-regression, MiMa function, and R-squared

2007-03-12 Thread Viechtbauer Wolfgang (STAT)
Dear All,

I am actually in the process of turning the mima function (with additional 
functions for predict, resid, and so on) into a full package.

Making the syntax of the function more like that for lm would indeed be useful. 
However, for that I would have to familiarize myself more with the internals of 
R to understand how exactly I can make use of the formula syntax.

As for calculating (something like) R^2, there are essentially two approaches I 
may suggest. I assume you have a vector of effect size estimates y, the 
corresponding vector of estimated sampling variances v, and you have one or 
more moderator variables x1 through xp.

1) Fit the model containing x1 through xp with the mima function and let tau2 
denote the estimate of residual heterogeneity from that model. Create a new 
variable w <- 1/(v + tau2). Note that the mima function does nothing else but 
fit the model with weighted least squares using those weights. So, you could 
actually use lm(y ~ x1 + ... + xp, weights=w) and you should get the exact 
same parameter estimates. Therefore, summary(lm(y ~ x1 + ... + xp, 
weights=w)) will give you R^2. Note that this is the coefficient of 
determination for the transformed data, whose meaning may not be entirely 
intuitive. See:

Willett, J. B., & Singer, J. D. (1988). Another cautionary note about R^2: Its 
use in weighted least-squares regression analysis. American Statistician, 
42(3), 236-238.

for a nice discussion of this.
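
As a sketch of approach 1 (assuming y, v, and moderators x1 and x2 are in 
the workspace, and using a made-up value for the residual heterogeneity 
estimate reported by mima):

tau2 <- 0.02                           # residual heterogeneity from mima (made up here)
w    <- 1/(v + tau2)                   # the weights used internally by mima
fit  <- lm(y ~ x1 + x2, weights = w)   # same parameter estimates as mima
summary(fit)$r.squared                 # R^2 for the transformed data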

2) Another approach that is used in the meta-analytic context is this. First 
estimate the total amount of heterogeneity by using a model without moderators 
(i.e., a random-effects model). Let that estimate be denoted by tau2.tot. 
Next, fit the model with moderators. Let the estimate of residual heterogeneity 
be denoted by tau2.res. Then (tau2.tot - tau2.res)/tau2.tot is an estimate 
of the proportion of the total amount of heterogeneity that is accounted for by 
the moderators included in the model. This is an intuitive measure that has an 
R^2 flavor to it, but I would not directly call it R^2.
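
Again as a sketch, with made-up heterogeneity estimates:

tau2.tot <- 0.05   # from the model without moderators (random-effects model)
tau2.res <- 0.02   # from the model with the moderators included
(tau2.tot - tau2.res) / tau2.tot   # proportion of heterogeneity accounted for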

Hope this helps,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 



-Original Message-
From: Christian Gold [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 12, 2007 10:59
To: r-help@stat.math.ethz.ch; [EMAIL PROTECTED]
Subject: meta-regression, MiMa function, and R-squared


Dear Wolfgang Viechtbauer and list members:

I have discovered your MiMa function for fitting meta-analytic 
mixed-effects models through an earlier discussion on this list. I think 
it is extremely useful and fills an important gap. In particular, since 
it is programmed so transparently, it is easy to adapt it for one's own 
needs. (For example, I have found it easy to identify and adapt the few 
lines I had to change to make the function fit models without intercept 
- impossible with one of the commercial packages for meta-analysis). I agree 
with Emmanuel Charpentier's suggestion that your function would 
be even more useful if it were more like lm or glm (some time in the 
future perhaps). For now, one question: How do I calculate the correct 
R-squared for models fitted with MiMa?

Thanks

Christian Gold
University of Bergen
www.uib.no/people/cgo022

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] meta-regression, MiMa function, and R-squared

2007-03-12 Thread Viechtbauer Wolfgang (STAT)
Yes, there is indeed a slight difference. The models fitted by lm() using the 
weights option (and this is the same in essentially all other software) assume 
that the weights are known only up to a proportionality constant. The 
parameter estimates will be exactly the same, but the standard errors of the 
estimates will differ by exactly that constant. If you divide the standard 
errors that you get from lm() with the weights option by the residual standard 
error, then you get exactly the same standard errors as those given by the 
mima() function. Fortunately, that multiplicative constant has no bearing on 
the value of R^2. You can see this by using lm(y ~ x1 + ... + xp, 
weights=w*10). The value of R^2 is unchanged.
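
In code, the rescaling looks like this (a sketch; y, v, tau2, and a moderator 
x1 are assumed to be in the workspace):

w   <- 1/(v + tau2)
fit <- lm(y ~ x1, weights = w)
se.lm <- summary(fit)$coefficients[, "Std. Error"]
se.lm / summary(fit)$sigma   # divided by the residual standard error: matches mima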

Best,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 



-Original Message-
From: Christian Gold [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 12, 2007 13:35
To: Viechtbauer Wolfgang (STAT)
Cc: r-help@stat.math.ethz.ch
Subject: Re: meta-regression, MiMa function, and R-squared


Dear Wolfgang

Thanks for your prompt and clear response concerning the R^2. You write:

 Note that the mima function does nothing else but fit the model with
weighted least squares using those weights. So, you could actually use lm(y ~ 
x1 + ... + xp, weights=w) and you should get the exact same parameter 
estimates.  Therefore, summary(lm(y ~ x1 + ... + xp, weights=w)) will give 
you R^2.

Is this really true? I thought that in weighted regression the /relative/ 
weights are assumed known whereas in meta-regression the /actual/ weights are 
assumed known (Higgins & Thompson, 2004, Controlling the risk of spurious 
findings from meta-regression, Statistics in Medicine, 23, p. 1665). Also, I 
did calculate my regression problem with lm using inverse variance weights 
before I discovered your function, and have compared the results now. The 
regression coefficient was the same, but the confidence interval was wider with 
mima. Furthermore, the CI with mima depended on the absolute size of the 
weights (as I assume it should do), whereas with lm it did not. Can you explain?

Thanks

Christian

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Mixed effects multinomial regression and meta-analysis

2007-03-06 Thread Viechtbauer Wolfgang (STAT)
Here is my suggestion. 

Let P_i denote the true proportion in the ith study and p_i the corresponding 
observed proportion based on a sample of size n_i. Then we know that p_i is an 
unbiased estimate of P_i and if n_i is sufficiently large, we know that p_i is 
approximately normally distributed as long as P_i is not too close to 0 or 1. 
Moreover, we can estimate the sampling variance of p_i with p_i(1-p_i)/n_i. 
Alternatively, we can use the logit transformation, given by ln[p_i/(1-p_i)], 
whose distribution is approximately normal and whose sampling variance is 
closely approximated by 1/( n_i p_i (1-p_i) ). 

So, let 

y_i = p_i with the corresponding sampling variance v_i = p_i(1-p_i)/n_i

or let

y_i = ln[p_i/(1-p_i)] with the corresponding sampling variance 
v_i = 1/( n_i p_i (1-p_i) ).

With y_i and v_i, you can use standard meta-analytic methodology (if the 
observed proportions are close to 0 or 1, I would use the logit transformed 
proportions). You can fit the random-effects model, if you want to assume that 
the variability among the P_i values is entirely random (and normally 
distributed) and you are interested in making inferences about the expected 
value of P_i. Or you can try to account for the heterogeneity among the P_i 
values by examining the influence of moderators. 
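
As a minimal sketch, using the smoker data from the question quoted below:

ni <- c(86, 93, 136, 82)           # sample sizes
xi <- c(83, 90, 129, 70)           # events
pobs <- xi/ni                      # observed proportions
yi <- log(pobs/(1 - pobs))         # logit-transformed proportions
vi <- 1/(ni * pobs * (1 - pobs))   # approximate sampling variances
cbind(yi, vi)                      # ready for standard meta-analytic methods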


You might find a function that I have written useful for this purpose. See:

http://www.wvbauer.com/downloads.html

Alternatively, you could fit a logistic regression model with a random 
intercept to these data (i.e., a generalized linear mixed-effects model). In 
other words, knowing p_i and n_i for each study, you actually have access to 
the raw data (consisting of 0's and 1's). This approach is essentially an 
individual patient data meta-analysis. Such a model may or may not contain 
any moderators. You can find a discussion of this approach, for example, in: 

Whitehead (2002). Meta-analysis of controlled clinical trials. Wiley. 

Hope this helps,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Inman, Brant A. 
M.D.
Sent: Tuesday, March 06, 2007 00:56
To: r-help@stat.math.ethz.ch
Cc: Weigand, Stephen D.
Subject: [R] Mixed effects multinomial regression and meta-analysis



R Experts:

I am conducting a meta-analysis where the effect measures to be pooled are 
simple proportions. For example, consider these data from Fleiss/Levin/Paik's 
Statistical methods for rates and proportions (2003, p. 189) on smokers:

Study      N    Event   P(Event)
    1     86      83      0.965
    2     93      90      0.968
    3    136     129      0.949
    4     82      70      0.854
Total    397     372

A test of heterogeneity for a table like this could simply be Pearson's 
chi-square test.
--

smoke.data <- matrix(c(83,90,129,70,3,3,7,12), ncol=2, byrow=FALSE)
chisq.test(smoke.data, correct=TRUE)

 X-squared = 12.6004, df = 3, p-value = 0.005585

--

Now this test implies that the data are heterogeneous and that pooling might be 
inappropriate. This type of analysis could be considered a fixed effects 
analysis because it assumes that the 4 studies are all coming from one 
underlying population.  But what if I wanted to do a mixed effects (fixed + 
random) analysis of data like this, possibly adjusting for an important 
covariate or two (assuming I had more studies, of course)...how would I go 
about doing it? One thought that I had would be to use a mixed effects 
multinomial logistic regression model, such as that reported by Hedeker (Stat 
Med 2003, 22: 1433), though I don't know if (or where) it is implemented in R.  
I am certain there are also other ways...

So, my questions to the R experts are:

1) What method would you use to estimate or account for the between study 
variance in a dataset like the one above that would also allow you to adjust 
for a variable that might explain the heterogeneity?

2) Is it implemented in R?


Brant Inman
Mayo Clinic

__
R-help@stat.math.ethz.ch mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



[R] Distance between x-axis values and title

2006-12-18 Thread Viechtbauer Wolfgang (STAT)
Dear All,

I looked at help(par), but could not figure out which setting controls the 
distance between the x-axis values and the x-axis title. Any pointer would be 
appreciated!

Thanks in advance,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Distance between x-axis values and title

2006-12-18 Thread Viechtbauer Wolfgang (STAT)
Thanks to all who responded so quickly! Yes, I totally overlooked par(mgp). 
Exactly what I was looking for.

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 


 -Original Message-
 From: John Kane [mailto:[EMAIL PROTECTED]
 Sent: Monday, December 18, 2006 18:45
 To: Viechtbauer Wolfgang (STAT); r-help@stat.math.ethz.ch
 Subject: Re: [R] Distance between x-axis values and title
 
 
 --- Viechtbauer Wolfgang (STAT)
 [EMAIL PROTECTED] wrote:
 
  Dear All,
 
  I looked at help(par), but could not figure out
  which setting controls the distance between the
  x-axis values and the x-axis title. Any pointer
  would be appreciated!
 
  Thanks in advance,
 
  ?mgp probably
 Is this what you want?
 
 catb <- c(1, 2, 3, 4, 5, 6)
 dogb <- c(2, 4, 6, 8, 10, 12)
 plot(catb, dogb, mgp=c(3,1,0))
 # vs
 plot(catb, dogb, mgp=c(2,1,0))

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Meta-regression with lmer() ? If so, how ?

2006-11-10 Thread Viechtbauer Wolfgang (STAT)
I guess I'll chip in, since I wrote that function (which is going to be
updated thoroughly in the near future -- I will probably expand it to an
entire package).

  Have a look at MiMa at Wolfgang Viechtbauer's page. Is that what
  you are looking for?
 
  http://www.wvbauer.com/downloads.html
 
 As far as I can tell, mima does what I mean to do, but there are some
 limits:
 
 - mima works on effects, and therefore has an unusual form in R models

The dependent variable to be used with the mima function can be any
measure for which we have a known sampling variance (or approximately
so) and that is (approximately) normally distributed. So, the dependent
variable could be log odds ratios, log risk ratios, standardized mean
differences, and so on. Are you looking for the option to input the
results from each study arm individually? (e.g., the log odds for a
control and a treatment group). You could also use mima then (with an
appropriately coded moderator). However, it would then make more sense
to assume a common (but random) intercept for all the arms from a single
study. At this point, the function isn't set up that way, but I think I
could rewrite it to do that.
 
 - as far as I can tell, mima allows one to assess the effect of variables
 *nesting* studies, but not of variables *crossed* in each study;
 therefore, you cannot directly test the effect of such variables;

I am not sure if I understand this point. I think this may relate to the
fact that (if I understand it correctly) you want to input the results
from each arm separately.

 - as far as I can tell, the variables of interest (moderators, in mima
 parlance) can be either two-level factors, booleans, or numeric
 variables, i.e. variables having a single regression coefficient: mima
 builds an estimator for the regression coefficient of each variable and
 its variance, and tests by a Z-test. This is not applicable to n-valued
 factors (n > 2) or ordered factors, which could be tested by
 {variance|deviance} analysis.

You can also test for blocks of moderators with the mima function. Let's
say you have two dummy variables that are coded to indicate differences
between three groups (e.g., low, medium, and high quality studies). Now
you want to test whether quality makes a difference at all (as opposed to
testing the two dummy variables individually). Use the out="yes" option
and then do the following:

1) from $b, take the (2x1) subset of the parameter estimates
corresponding to the two dummy variables; denote this vector with b.sub
2) from $vb, take the (2x2) subset from the variance-covariance matrix
corresponding to the two dummy variables (i.e., their variances and the
covariance); denote this matrix with vb.sub
3) then t(b.sub) %*% solve(vb.sub) %*% b.sub is approximately chi-square
distributed under H0 with 2 degrees of freedom.
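
In code, the three steps look like this (a sketch; dummy1 and dummy2 are 
hypothetical moderator names, and it assumes the intercept is the first 
coefficient, so the two dummies sit in positions 2 and 3 of $b and $vb -- 
adjust the indices to your model):

res    <- mima(y, v, mods = cbind(dummy1, dummy2), method = "ML", out = "yes")
b.sub  <- res$b[2:3]                             # step 1
vb.sub <- res$vb[2:3, 2:3]                       # step 2
Q      <- t(b.sub) %*% solve(vb.sub) %*% b.sub   # step 3
pchisq(as.numeric(Q), df = 2, lower.tail = FALSE)  # approximate chi-square test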

I am also going to add to the function the option to output the log
likelihood value. Then likelihood ratio tests are a snap to do with full
versus reduced models. But for now, the above should work.

Feel free to get in touch with me via e-mail.

Best,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Testing the equality of correlations

2006-09-28 Thread Viechtbauer Wolfgang (STAT)
It's more complicated than that, since Phi(X1,X2), Phi(X1,X3), and Phi(X1,X4) 
are dependent. Take a look at:

Olkin, I., & Finn, J. D. (1990). Testing correlated correlations. Psychological 
Bulletin, 108(2), 330-333.

and

Meng, X., Rosenthal, R., & Rubin, D. B. (1992). Comparing correlated 
correlation coefficients. Psychological Bulletin, 111(1), 172-175.

You will probably have to implement these tests yourself. 

Best,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:r-help-
 [EMAIL PROTECTED] On Behalf Of Paul Hewson
 Sent: Wednesday, September 27, 2006 17:40
 To: Marc Bernard; r-help@stat.math.ethz.ch
 Subject: Re: [R] Testing the equality of correlations
 
 Off the top of my head (i.e. this could all be horribly wrong), I think
 Anderson gave an asymptotic version for such a test, whereby under the
 null hypothesis, the difference between Fisher's z for each sample, z1 -
 z2, is normal with zero mean.

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Marc Bernard
 Sent: 27 September 2006 14:42
 To: r-help@stat.math.ethz.ch
 Subject: [R] Testing the equality of correlations
 
 Dear All,
 
 I wonder if there is any implemented statistical test in R to test
 the equality between many correlations. As an example, let X1, X2, X3,
 X4 be four random variables. Let Phi(X1,X2), Phi(X1,X3), and Phi(X1,X4)
 be the corresponding correlations.
 How to test Phi(X1,X2) = Phi(X1,X3) = Phi(X1,X4)?
 
   Many thanks in advance,
 
   Bernard

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combination of Bias and MSE ?

2006-04-05 Thread Viechtbauer Wolfgang (STAT)
The MSE of an estimator X for a parameter theta is defined as E[(X - theta)^2], 
which is equal to Var[X] + (Bias[X])^2, so in that sense, the MSE already 
takes the bias of X into account.
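
A small simulation makes this concrete (a made-up example: the estimator is 
the sample mean of 20 normals plus a deliberate bias of 0.1):

set.seed(1)
theta <- 0
X <- replicate(10000, mean(rnorm(20, mean = theta)) + 0.1)
mean((X - theta)^2)            # simulated MSE
var(X) + (mean(X) - theta)^2   # Var[X] + (Bias[X])^2; equal up to simulation error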

Hope this helps,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/ 


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:r-help-
 [EMAIL PROTECTED] On Behalf Of Amir Safari
 Sent: Wednesday, April 05, 2006 5:20 PM
 To: R-help@stat.math.ethz.ch
 Subject: [R] Combination of Bias and MSE ?
 
 
 
 Dear R Users,
  My question is general and not necessarily related to R.
  Suppose we face a situation in which the MSE (mean squared error) shows
  desired results but the bias shows undesired ones, or the reverse. How
  can we evaluate the results? And suppose both MSE and bias are important
  to us.
  The exact question is whether there is any combined measure of the two
  metrics above.
  Thank you so much for any reply.
   Amir Safari
 
 
 
 
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] meta / lme

2006-03-13 Thread Viechtbauer Wolfgang (STAT)
Hello Stephen,

As far as I know, the meta package will not allow you to include moderator 
variables in the model. However, I have written a script for R/S-Plus that will 
allow you to fit such models (essentially, these are mixed-effects models with 
a random intercept). You can find the script here: 
http://www.wvbauer.com/downloads.html

Specifically, if you scroll down a bit, you will find the mima function with 
a tutorial that explains how it can be used. I hope you find this useful.

Best wishes,

-- 
Wolfgang Viechtbauer 
 Department of Methodology and Statistics 
 University of Maastricht, The Netherlands 
 http://www.wvbauer.com/



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:r-help-
 [EMAIL PROTECTED] On Behalf Of Stephen
 Sent: Sunday, March 12, 2006 11:55 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] meta / lme
 
 Hi
 
 
 
 I'm conducting a meta-analysis using the meta package.
 
 
 
 Here's a bit of code that works fine -
 
 tmp <- metacont(samplesize.2, pctdropout.2, sddropout.2,
 samplesize.1, pctdropout.1, sddropout.1,
  data=Dataset, sm="WMD")
 
 
 
 I would now like to control for a couple of variables (continuous and
 categorical) that aren't in the equation.
 
 
 
 Is meta inappropriate for these purposes? If so, based on the above
 code, how would I add variables to the equation?
 
 
 
 Perhaps I should use lme weighting on sample size?
 
 
 
 Thoughts appreciated
 
 
 
 Thanks S.
 
 
 
 PS
 
 > version
 _
 platform i386-pc-mingw32
 arch     i386
 os       mingw32
 system   i386, mingw32
 status
 major    2
 minor    2.1
 year     2005
 month    12
 day      20
 svn rev  36812
 language R

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Doubly Non-Central F-Distribution

2005-11-01 Thread Viechtbauer Wolfgang (STAT)
Hello All,

Has anyone written a function for the distribution function of a
*doubly* non-central F-distribution? I looked through the archives, but
didn't find anything. Thanks!
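
For what it's worth, here is a minimal sketch based on the standard double 
Poisson mixture representation (truncating both Poisson series; a sketch, not 
a vetted implementation): writing F'' = (X1/df1)/(X2/df2) with independent 
noncentral chi-squares X1 and X2 and conditioning on the Poisson mixing 
variables reduces the CDF to a weighted sum of incomplete beta functions.

pdnf <- function(q, df1, df2, ncp1, ncp2, maxit = 200) {
  u  <- (df1 * q) / (df1 * q + df2)   # cutoff on the beta scale
  j  <- 0:maxit
  k  <- 0:maxit
  wj <- dpois(j, ncp1/2)              # Poisson weights for the numerator
  wk <- dpois(k, ncp2/2)              # Poisson weights for the denominator
  B  <- outer(j, k, function(jj, kk) pbeta(u, df1/2 + jj, df2/2 + kk))
  sum(outer(wj, wk) * B)
}
## sanity check: with both noncentralities 0 this should match pf()
pdnf(1.5, 4, 10, 0, 0) - pf(1.5, 4, 10)   # essentially zero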

Wolfgang

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html