Let r_1 be the correlation between the two variables for the first group with
n_1 subjects and let r_2 be the correlation for the second group with n_2
subjects. Then a simple way to test H0: rho_1 = rho_2 is to convert r_1 and r_2
via Fisher's variance stabilizing transformation (z_i = 1/2 * log((1 + r_i) /
(1 - r_i))); under H0, the difference z_1 - z_2 is approximately normal with
mean 0 and variance 1/(n_1 - 3) + 1/(n_2 - 3).
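A minimal R sketch of this test; the correlations and sample sizes below are made up for illustration:

```r
# Two-sample test of H0: rho_1 = rho_2 via Fisher's z transformation.
# Illustrative values, not from the original post:
r1 <- 0.50; n1 <- 40
r2 <- 0.30; n2 <- 50
z1 <- 0.5 * log((1 + r1) / (1 - r1))    # same as atanh(r1)
z2 <- 0.5 * log((1 + r2) / (1 - r2))
se    <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
zstat <- (z1 - z2) / se
pval  <- 2 * pnorm(-abs(zstat))         # two-sided p-value
```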
No, x and y are not unique. In fact, there are infinitely many (x, y) pairs
that are roots of the equation P[X <= x, Y <= y] = 0.05.
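A concrete sketch of why the solution set is a curve rather than a single point, assuming (purely for illustration) that X and Y are independent standard normals, which may not match the original poster's setup:

```r
# Under independence, P[X <= x, Y <= y] = P[X <= x] * P[Y <= y], so any
# factorization 0.05 = a * (0.05 / a) gives a valid (x, y) pair:
a <- c(0.10, 0.25, 0.50)
x <- qnorm(a)
y <- qnorm(0.05 / a)
pnorm(x) * pnorm(y)    # each product equals 0.05
```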
--
Wolfgang Viechtbauer
Department of Methodology and Statistics
University of Maastricht, The Netherlands
http://www.wvbauer.com/
That was going to be my suggestion =)
By the way, lme does not give you the right results because the residual
variance is not constrained to 1 (and it is not possible to constrain it).
Best,
Hello All,
Despite my preference for reporting confidence intervals, I need to
obtain a p-value for a hypothesis test in the context of regression
using bootstrapping. I have read John Fox's chapter on bootstrapping
regression models and have consulted Efron & Tibshirani's An
Introduction to the Bootstrap.
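One common sketch (not an answer from this thread, and with made-up data) inverts the percentile interval: the two-sided p-value for H0: beta = 0 is twice the smaller tail proportion of bootstrap slopes on either side of zero:

```r
# Case-resampling bootstrap p-value for a regression slope (illustrative data).
set.seed(1)
n   <- 100
dat <- data.frame(x = rnorm(n))
dat$y <- 1 + 0.3 * dat$x + rnorm(n)
B <- 2000
boot.b <- replicate(B, {
  i <- sample(n, replace = TRUE)                 # resample cases
  coef(lm(y ~ x, data = dat[i, ]))["x"]
})
# Two-sided p-value by inverting the percentile interval:
pval <- 2 * min(mean(boot.b <= 0), mean(boot.b >= 0))
```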
Try:
regression <- lm(biomass ~ poly(temperature, degree = 2, raw = TRUE))
See the help page for poly for what raw = TRUE does.
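A quick check with made-up data shows that the raw and orthogonal parameterizations fit the same model, differing only in the coefficients:

```r
# Illustrative data (not from the original post):
set.seed(42)
temperature <- runif(30, 5, 25)
biomass <- 2 + 0.5 * temperature - 0.01 * temperature^2 + rnorm(30, sd = 0.5)
fit.raw  <- lm(biomass ~ poly(temperature, degree = 2, raw = TRUE))
fit.orth <- lm(biomass ~ poly(temperature, degree = 2))
all.equal(fitted(fit.raw), fitted(fit.orth))   # TRUE: identical fitted values
```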
Best,
Dear All,
I am actually in the process of turning the mima function (with additional
functions for predict, resid, and so on) into a full package.
Making the syntax of the function more like that for lm would indeed be useful.
However, for that I would have to familiarize myself more with the
From: Christian Gold [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 12, 2007 13:35
To: Viechtbauer Wolfgang (STAT)
Cc: r-help@stat.math.ethz.ch
Subject: Re: meta-regression, MiMa function, and R-squared
Dear Wolfgang
Thanks for your prompt and clear response concerning the R^2. You write:
Note
Here is my suggestion.
Let P_i denote the true proportion in the ith study and p_i the corresponding
observed proportion based on a sample of size n_i. Then we know that p_i is an
unbiased estimate of P_i and if n_i is sufficiently large, we know that p_i is
approximately normally distributed with mean P_i and variance
P_i * (1 - P_i) / n_i.
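In R, this normal approximation gives a simple Wald-type interval (the counts here are invented for illustration):

```r
# p_i ~ approx N(P_i, P_i * (1 - P_i) / n_i) for sufficiently large n_i.
x_i <- 35; n_i <- 120                       # illustrative counts
p_i <- x_i / n_i
v_i <- p_i * (1 - p_i) / n_i                # estimated sampling variance
ci  <- p_i + c(-1, 1) * qnorm(0.975) * sqrt(v_i)   # Wald 95% CI
```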
Dear All,
I looked at help(par), but could not figure out which setting controls the
distance between the x-axis values and the x-axis title. Any pointer would be
appreciated!
Thanks in advance,
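The reply is cut off in this archive; for reference, the setting in question is the mgp graphical parameter (its first element is the margin line of the axis title, the second that of the axis labels):

```r
pdf(NULL)                    # null device so the example runs non-interactively
par(mgp = c(2, 0.7, 0))      # defaults are c(3, 1, 0): title, labels, axis line
plot(1:10, xlab = "Index")   # the x-axis title now sits closer to the axis
got <- par("mgp")
dev.off()
```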
[mailto:[EMAIL PROTECTED]
Sent: Monday, December 18, 2006 18:45
To: Viechtbauer Wolfgang (STAT); r-help@stat.math.ethz.ch
Subject: Re: [R] Distance between x-axis values and title
--- Viechtbauer Wolfgang (STAT)
[EMAIL PROTECTED] wrote:
Dear All,
I looked at help(par), but could
I guess I'll chip in, since I wrote that function (which is going to be
updated thoroughly in the near future -- I will probably expand it to an
entire package).
Have a look at MiMa at Wolfgang Viechtbauer's page. Is that what
you are looking for?
http://www.wvbauer.com/downloads.html
It's more complicated than that, since Phi(X1,X2), Phi(X1,X3), and Phi(X1,X4)
are dependent. Take a look at:
Olkin, I., & Finn, J. D. (1990). Testing correlated correlations. Psychological
Bulletin, 108(2), 330-333.
and
Meng, X.-L., Rosenthal, R., & Rubin, D. B. (1992). Comparing correlated
correlation coefficients. Psychological Bulletin, 111(1), 172-175.
The MSE of an estimator X for a parameter theta is defined as E[(X - theta)^2],
which equals Var[X] + (Bias[X])^2, so in that sense, the MSE already takes the
bias of X into account.
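A quick simulation (with an arbitrarily biased estimator) confirms the decomposition:

```r
# Check MSE = Var + Bias^2 by simulation for a deliberately biased estimator:
set.seed(123)
theta <- 2
est <- replicate(1e5, mean(rnorm(10, mean = theta)) + 0.1)  # bias = 0.1
mse    <- mean((est - theta)^2)
decomp <- var(est) + (mean(est) - theta)^2
c(mse = mse, decomp = decomp)   # agree up to simulation error (about 0.11)
```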
Hope this helps,
Hello Stephen,
As far as I know, the meta package will not allow you to include moderator
variables in the model. However, I have written a script for R/S-Plus that will
allow you to fit such models (essentially, these are mixed-effects models with
a random intercept). You can find the script
Hello All,
Has anyone written a function for the distribution function of a
*doubly* non-central F-distribution? I looked through the archives, but
didn't find anything. Thanks!
Wolfgang
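No such function appears in this thread; here is a hedged sketch of the CDF using the standard Poisson-mixture expansion of the denominator noncentrality (pdnf is a made-up name; pf() already handles the numerator noncentrality):

```r
# CDF of the doubly noncentral F(df1, df2; ncp1, ncp2): conditioning on the
# Poisson index of the denominator noncentral chi-square reduces each term
# to a singly noncentral F, which pf() can evaluate.
pdnf <- function(q, df1, df2, ncp1, ncp2, jmax = 200) {
  j <- 0:jmax
  w <- dpois(j, ncp2 / 2)                                  # mixing weights
  sum(w * pf(q * (df2 + 2 * j) / df2, df1, df2 + 2 * j, ncp = ncp1))
}
# Sanity check: with ncp2 = 0 it reduces to the singly noncentral F:
pdnf(1.5, 4, 10, 2, 0)   # equals pf(1.5, 4, 10, ncp = 2)
```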
__
R-help@stat.math.ethz.ch mailing list