C is an R function for setting contrasts in a factor. Hence the funky
error message.
?C
Use choose() for your C(N,k)
?choose
choose(200,2)
19900
choose(200,100)
9.054851e+58
N=200; k=100; m=50; p=.6; q=.95
choose(N,k)*p^k*(1-p)^(N-k)*choose(k,m)*q^m*(1-q)^(k-m)
6.554505e-41
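The product above is just two binomial probabilities multiplied together, so dbinom() gives the same number more directly (and is a little safer numerically than building it from choose()):
dbinom(k, N, p) * dbinom(m, k, q)
6.554505e-41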
-Original
R and SPSS are using different but equivalent statistics. R is using
the rank sum of group1 adjusted for the mean rank. SPSS is using the
rank sum of group2 adjusted for the mean rank.
Example.
G1=group1
G2=group2[-length(group2)] #get rid of the NA
n1=length(G1) #n1=28
n2=length(G2) #n2=27
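Continuing with G1, G2, n1, and n2 from the snippet above (so this sketch assumes the original group1/group2 data), the two statistics can be reconciled directly:
r <- rank(c(G1, G2))                                 # ranks in the pooled sample
W.R    <- sum(r[seq_len(n1)])  - n1 * (n1 + 1) / 2   # what wilcox.test(G1, G2) reports as W
W.SPSS <- sum(r[-seq_len(n1)]) - n2 * (n2 + 1) / 2   # the group-2 analogue reported by SPSS
c(W.R + W.SPSS, n1 * n2)                             # the two statistics always sum to n1*n2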
From the website:
BOA is an R/S-PLUS program for carrying out convergence diagnostics and
statistical and graphical analysis of Monte Carlo sampling output. It can be
used as an output processor for the BUGS software or for any other program
which produces sampling output.
See
Suddenly (e.g. yesterday) all my functions that have na.rm= as a
parameter (e.g., mean(), sd(), range(), etc.) have been reporting
warnings with na.rm=T. The message is: Warning message: the condition
has length > 1 and only the first element will be used in: if (na.rm) x
<- x[!is.na(x)]. This has
Brilliant! Yesterday, I created a table called T. Dumb. Removing it
solves the problem. Thanks.
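A minimal reconstruction of the trap, as a sketch:
T <- table(c("a", "b", "b"))     # an object named T, as in the post; it masks TRUE
length(T)                        # 2 -- so a test like if (na.rm) ... now sees a
                                 # length-2 condition (a warning in R 2.x, an error
                                 # in current R) instead of a single logical
rm(T)                            # removing the object restores T to TRUE
mean(c(1, NA, 3), na.rm = TRUE)  # spelling out TRUE avoids the problem altogether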
-Original Message-
From: Duncan Murdoch [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 14, 2007 10:01 AM
To: Lucke, Joseph F
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Problems
As a newbie to RODBC (Windows XP), I find that the commands aren't
working quite as expected.
After
library(RODBC)
I had planned to use the two-step process
myConn = odbcConnectExcel("Dates.xls")
sqlQuery(myConn, "SELECT ID, ADM_DATE, ADM_TIME FROM A") # A is the Excel
spreadsheet name
X =
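A corrected sketch of that two-step process, assuming the workbook Dates.xls is in the working directory and the data sit on a worksheet named A (the Excel ODBC driver exposes a worksheet as [A$]); note library() is lower-case and the file name and query must be quoted strings:
library(RODBC)
myConn <- odbcConnectExcel("Dates.xls")
X <- sqlQuery(myConn, "SELECT ID, ADM_DATE, ADM_TIME FROM [A$]")
odbcClose(myConn)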
Most standard tests, such as t-tests and ANOVA, are fairly resistant to
non-normality for significance testing. It's the sample means that have
to be normal, not the data. The CLT kicks in fairly quickly. Testing
for normality prior to choosing a test statistic is generally not a good
idea.
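A quick simulation sketch of that claim: two-sample t-tests on strongly skewed (exponential) data still reject at roughly the nominal 5% rate under the null.
set.seed(1)
pvals <- replicate(5000, t.test(rexp(30), rexp(30))$p.value)
mean(pvals < 0.05)   # close to 0.05 despite the non-normal data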
One issue is whether you want your estimators to be based on central
moments (covariances) or on non-central moments. Removing the intercept
changes the statistics from central to non-central moments. The
adjusted R^2, by which I think you mean Fisher's adjusted R^2, is based on
central moments.
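A small sketch of the central versus non-central distinction, using simulated data in which the predictor is pure noise for y:
set.seed(1)
x <- runif(50, 1, 10)
y <- 10 + rnorm(50)                 # y does not depend on x at all
summary(lm(y ~ x))$r.squared        # near 0: R^2 uses the centered SS, sum((y - mean(y))^2)
summary(lm(y ~ x - 1))$r.squared    # large: with no intercept, R^2 uses the raw SS, sum(y^2)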
The log likelihood is
const + n/2* [ log(det(Sigma^-1)) - trace(Sigma^-1*S) ] where Sigma is the
population covariance matrix
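A direct transcription of that log likelihood (dropping the constant), as a sketch; S is the sample covariance matrix (divisor n) and Sigma the population (model-implied) covariance matrix:
loglik <- function(Sigma, S, n) {
  SigInv <- solve(Sigma)
  (n / 2) * (log(det(SigInv)) - sum(diag(SigInv %*% S)))
}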
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Spencer Graves
Sent: Friday, May 04, 2007 9:20 PM
To: R-devel mailing list
Might there be a (semi-)automated procedure to create a minimal,
personal package, for my eyes only, that I can load with a
library(MyStuff) command? This would be preferable to having to
source() the files. Is there already such a procedure?
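One semi-automated route, as a sketch: package.skeleton() writes out a minimal source package from workspace objects (here a hypothetical function myfun), which can then be built and installed once and loaded with library(MyStuff) like any other package.
myfun <- function(x) x^2
package.skeleton(name = "MyStuff", list = "myfun")
# edit DESCRIPTION and the Rd stubs as needed, then from a shell:
#   R CMD build MyStuff
#   R CMD INSTALL MyStuff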
Joe
-Original Message-
From: [EMAIL
Sean
Both Bill V and Peter D are right regarding traditional
repeated-measures ANOVA that assumes equality of the variance-covariance
matrices across groups and compound symmetry of the covariance matrix.
However, there is a multivariate approach to repeated measures that does
not require the
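A sketch of that multivariate (MANOVA-type) approach, assuming the car package and hypothetical wide-format data dat with one row per subject, the repeated measures in columns t1, t2, t3, and a between-subjects factor group:
library(car)
mod   <- lm(cbind(t1, t2, t3) ~ group, data = dat)   # multivariate response
idata <- data.frame(time = factor(1:3))              # the within-subject design
Anova(mod, idata = idata, idesign = ~ time, type = "III")
# summary() of the result also gives the univariate tests with
# Greenhouse-Geisser and Huynh-Feldt corrections for comparison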
A re-interpretation of Zorn's lemma?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim Lemon
Sent: Thursday, April 12, 2007 5:14 AM
To: [EMAIL PROTECTED]
Subject: Re: [R] Reasons to Use R
Charilaos Skiadas wrote:
A new fortune candidate perhaps?
You can (and I have) fit survival data with logistic regression. Agresti (1990,
pp 189--196) has an introductory discussion.
The issue is whether the occurrence of the event is of interest or whether the
time-to-event is of interest. If the study lasts 180 days (as in my case)
logistic
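With occurrence of the event as the outcome, a sketch with hypothetical data dat in which event180 is 1 if the event occurred within the 180-day window:
fit <- glm(event180 ~ treatment + age, family = binomial, data = dat)
summary(fit)
# if the time to the event were of interest instead, something like
# survival::coxph(Surv(time, event) ~ treatment + age, data = dat) would be the usual route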
What is the standard name for a Bernoulli-like sequence (independent,
binary random variables) but with possibly different probabilities of
success on each trial, i.e. Pr( X_i = 1) = theta_i
Joseph F. Lucke, PhD
Biostatistician
Center for Clinical Research and Evidence-based Medicine
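For the sequence asked about above (independent Bernoulli trials with trial-specific success probabilities), a minimal simulation sketch; rbinom() accepts a vector of probabilities, one per trial:
theta <- c(0.2, 0.5, 0.9, 0.7)                     # hypothetical theta_i
x <- rbinom(length(theta), size = 1, prob = theta) # one draw of the sequence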
Statistical computing perhaps is not so much a single topic as a family
of related topics (a la Wittgenstein) that share a lot in common but
perhaps very little is common to all. For example,
1. Statistical computing in contrast to statistical theory.
6. Statistical computing as a supplement to
Does anyone have code for the 3F2 hypergeometric function? I am looking
for code similar to the 2F1 hypergeometric function implemented as
hyperg_2F1 in the GSL package. TIA. ---Joe
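Lacking a packaged 3F2, a naive truncated-series sketch of 3F2(a1,a2,a3; b1,b2; z) = sum_k (a1)_k (a2)_k (a3)_k / [ (b1)_k (b2)_k k! ] z^k, reasonable only for positive parameters and 0 < z < 1, and no substitute for GSL-quality code:
hyperg_3F2 <- function(a, b, z, kmax = 500) {
  k <- 0:kmax
  poch <- function(p, k) lgamma(p + k) - lgamma(p)   # log Pochhammer symbol (p)_k
  logterm <- poch(a[1], k) + poch(a[2], k) + poch(a[3], k) -
             poch(b[1], k) - poch(b[2], k) - lgamma(k + 1) + k * log(z)
  sum(exp(logterm))
}
hyperg_3F2(c(1, 1, 1), c(2, 2), 0.5)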
Continuing off topic:
1. The range of alpha is -infinity < alpha <= 1.
2. Alpha is NOT reliability
3. There are trivial examples of alpha < 1 with reliability approaching
1.
4. There are trivial examples of alpha = 0 with reliability approaching
1.
5. Alpha cannot assess dimensionality.
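A small numerical illustration of point 2, as a sketch: a congeneric model X_i = lambda_i*T + e_i with Var(T) = 1, uncorrelated errors, and deliberately unequal (hypothetical) loadings, for which alpha understates the reliability of the unit-weighted sum.
lambda <- c(0.9, 0.7, 0.3, 0.1)
errvar <- rep(1, 4)
Sigma  <- lambda %*% t(lambda) + diag(errvar)            # population item covariance matrix
k      <- length(lambda)
alpha  <- k / (k - 1) * (1 - sum(diag(Sigma)) / sum(Sigma))
rel    <- sum(lambda)^2 / (sum(lambda)^2 + sum(errvar))  # reliability of the sum score
c(alpha = alpha, reliability = rel)                      # about 0.43 versus 0.50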
Lucke, Joseph F
Somehow, I've managed to have my .Rdata files become `disassociated'
from the R program. I am running Windows XP Pro. I have re-installed R
2.4 in an attempt to have it re-associate itself with .Rdata files, but
to no avail. .Rdata files are now associated with a file compression
program or
the changes, and now double (left) click on the
.Rdata file to see if it opens in R's GUI.
Cheers,
Francisco
Has anyone developed a function to generate a forest plot, the one used
a lot in meta-analysis?
Joseph F. Lucke, PhD
Biostatistician
Center for Clinical Research and Evidence-based Medicine
Department of Pediatrics
School of Medicine
University of Texas Health Science Center at Houston
Voice:
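For the forest-plot question above, one readily available option, as a sketch, assuming the metafor package and its bundled BCG vaccine meta-analysis data:
library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)
forest(res)   # study-level estimates, confidence intervals, and the pooled effect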
Jenny
The following example works:
real.d <- rep(NA,30)
real.b <- rep(NA,30)
b1=runif(1); b2=runif(1); t1=runif(1); t2=runif(1)
if (length(real.d)<=30 && length(real.b)<=30 &&
b1*b2*t1*t2>0){bool=TRUE}
bool
[1] TRUE
But this one doesn't:
real.d <- rep(NA,30)
real.b <- rep(NA,30)
b1=runif(1);
James
The main reason for the adjusted R^2 (Fisher) is that it is less biased than
the ordinary R^2. The ordinary R^2 has a positive bias that is a function of
the true Rho^2, the number of predictors p, and the sample size n. The maximum
bias occurs at Rho^2 = 0, where the expected R^2 is
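A quick simulation sketch of both statistics when the true Rho^2 is 0 (the predictors are pure noise): the ordinary R^2 averages about p/(n-1), while Fisher's adjusted R^2, 1 - (1 - R^2)(n - 1)/(n - p - 1), averages about 0.
set.seed(1)
n <- 30; p <- 5
sim <- replicate(2000, {
  s <- summary(lm(rnorm(n) ~ matrix(rnorm(n * p), n, p)))
  c(s$r.squared, s$adj.r.squared)
})
rowMeans(sim)   # roughly c(p/(n-1), 0), i.e. about c(0.17, 0)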
Assume Type 1 SS and no interaction.
Under Model 1, your sums of squares (SS) is partitioned SS(M), SS(L|M),
SS(E1|L,M). In Model 2 it is SS(L), SS(M|L), SS(E2|L,M). The total SS
in Models 1 and 2 are equal, and SS(E1|L,M) = SS(E2|L,M). [ If the
design had been orthogonal then also SS(M)=
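A sketch with a deliberately unbalanced (non-orthogonal) layout and hypothetical factors L and M: the sequential (Type 1) sums of squares depend on the order of entry, but the residual SS is the same under both orders.
set.seed(1)
d <- expand.grid(L = factor(1:2), M = factor(1:3))
d <- d[rep(1:6, c(5, 9, 4, 8, 6, 7)), ]        # unequal cell sizes
d$y <- rnorm(nrow(d))
anova(lm(y ~ M + L, data = d))   # Model 1: SS(M), then SS(L|M)
anova(lm(y ~ L + M, data = d))   # Model 2: SS(L), then SS(M|L)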
I just did this last night for a class. It's very simplistic and could
be improved, but it did the job. First I did the normal. Of course means
of increasingly large samples from a normal stay normal. This set up the
students. Then I did means from an exponential. For n=1 you get the
exponential
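A sketch of that classroom demo: histograms of sample means from an exponential for n = 1, 5, 30, moving from the exponential shape toward the normal.
set.seed(1)
op <- par(mfrow = c(1, 3))
for (n in c(1, 5, 30)) {
  xbar <- replicate(2000, mean(rexp(n)))
  hist(xbar, breaks = 30, main = paste("n =", n), xlab = "sample mean")
}
par(op)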
Jens
I'm not sure what you intend by predefined assumptions.
1. If you merely want to conduct an exploratory rather than confirmatory
analysis for the relevant paths, there are ways within SEM to do this. (In
this case you could use John Fox's SEM package).
2. If you do not wish to assume
One might begin by considering _conditional_ p-values as elaborated by
Hubbard and Bayarri and especially Sellke, Bayarri, and Berger.
@article{Hubbard2003,
  author  = {Hubbard, R. and Bayarri, M. J.},
  title   = {Confusion over measures of evidence ($p$'s) versus errors ($\alpha$'s) in classical statistical testing},
  journal = {The American Statistician},
  year    = {2003},
  volume  = {57}
}