> Sent: Wednesday, 27 February 2019 22:53
> To: Meyners, Michael
> Cc: r-help
> Subject: Re: [R] Randomization Test
>
> Dear Kind List,
>
I am still battling with this. I have, however, made some progress with the
suggestions of Michael and others. At least, I h
Ogbos,
You do not seem to have received a reply over the list yet, which might be
because this seems rather a stats than an R question. Nor did your
attachment (figure) get through - see the posting guide.
I'm not familiar with epoch analysis, so I'm not sure what exactly you are doing /
Juan,
Your question might be borderline for this list, as it ultimately seems to be
a stats question in R disguise.
Anyway, the short answer is that you *expect* to get a different p value from a
permutation test unless you are able to do all possible permutations and
therefore use
Apologies if I missed any earlier replies - did you check
multcompLetters in package {multcompView}?
It allows you to get connecting-letters reports (if that's what you are after;
I didn't check what exactly agricolae provides here). You may have to add some
manual steps to combine this with any
Did you search the internet? At first attempt,
vegdist {vegan}(worked well for me in the past) and
dist.binary {ade4}
seem to offer what you need.
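If you'd rather avoid extra packages, base R's dist() also offers an asymmetric binary (Jaccard-type) distance; a minimal sketch on a made-up presence/absence matrix:

```r
# Two 0/1 profiles; method = "binary" counts discordant columns
# among those where at least one entry is 1
m <- matrix(c(1, 0, 1, 1,
              0, 1, 1, 0), nrow = 2, byrow = TRUE)
dist(m, method = "binary")  # 3 of the 4 non-double-zero columns differ: 0.75
```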
HTH, Michael
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of sreenath
Sent: Wednesday, 15.
...@virgilio.it
Cc: Meyners, Michael; r-help@r-project.org
Subject: Re: [R] R: RE: Bonferroni post hoc test in R for repeated measure
ANOVA with mixed within and between subjects design
Dear Angelo,
The Bonferroni p-value is just the ordinary p-value times the number of
tests, so, since R
Untested, but if anything, your best bet is likely that something like
summary(glht(lme_H2H, linfct = mcp(Emotion = "Tukey")),
test = adjusted("bonferroni"))
should work (despite the question why you'd want to use Bonferroni rather than
Tukey
For a reference, see the book on the topic by the package
solution found, so all fine :-)
Cheers, Michael
-Original Message-
From: Martin Maechler [mailto:maech...@stat.math.ethz.ch]
Sent: Monday, 8 June 2015 16:43
To: Meyners, Michael
Cc: r-help@r-project.org
Subject: Re: [R] mismatch between match and unique causing ecdf (well
-Original Message-
From: Meyners, Michael
Sent: Monday, 8 June 2015 12:02
To: 'r-help@r-project.org'
Subject: mismatch between match and unique causing ecdf (well,
approxfun) to fail
All,
I encountered the following issue with ecdf which was originally on a vector of
length 10,000, but I have been able to reduce it to a minimal reproducible
example (just to avoid questions why I'd want to do this for a vector of length
2...):
test2 = structure(list(X817 =
Not sure about JMP 11, but remember that JMP 10 did not run with R version >=
3.0.0
It depends a bit on the changes that come with new R versions; with JMP 10,
several versions of the 2.x series were compatible even though JMP officially
only supported earlier versions. I had hoped that with
You don't need a constraint (rbinom won't give x > n), but you need to make sure
you are using the n you want to use: try
x <- cbind(x, rbinom(300, n[i], p[i]))  # mind the [i] after the n
at the respective line. Furthermore, you need to remove one transformation of x
to make sure you divide by the
Jochen,
a) this is an English-speaking mailing list; other languages are not encouraged,
nor will they typically generate many replies...
b) your code is fine, so this is not an R-issue; you are rather stuck with some
of the stats background -- you might want to see a friendly local
Impossible to say w/o a reproducible example, but to start with let me suggest
looking at the exact= (both functions) and correct= (wilcox.test) arguments.
Experience shows that some change of the default settings allows you to
reproduce results from other software (and the help pages will
All,
I realize from the archive that the sort argument in merge has been subject to
discussion before, though I couldn't find an explanation for this behavior. I
tried to simplify this to (kind of) minimal code from a real example to the
following (and I have no doubts that there are smart
-Original Message-
From: Rui Barradas [mailto:ruipbarra...@sapo.pt]
Sent: Tuesday, 4 September 2012 14:01
To: Meyners, Michael
Cc: r-help
Subject: Re: [R] unexpected (?) behavior of sort=TRUE in merge function
Hello,
Inline.
On 04-09-2012 12:24, Meyners, Michael wrote:
All,
I realize from
the time to reply.
Cheers, Michael
-Original Message-
From: Rui Barradas [mailto:ruipbarra...@sapo.pt]
Sent: Tuesday, 4 September 2012 16:58
To: Meyners, Michael
Cc: r-help
Subject: Re: [R] unexpected (?) behavior of sort=TRUE in merge function
Hello,
You're right I had missed
Kel,
in addition, and depending on how you define similarity, you might want to
look into the RV coefficient as a measure of it (it is actually related to a
correlation, so similarity would rather mean similar information though not
necessarily small Euclidean distance); coeffRV in FactoMineR
No, the authors are correct: the individuals (i.e. the 17 individuals) you have
need to be independent (i.e. no correlation between them, let alone any
individual running through your temporal experiment more than once, as
indicated in the citation), while the *observations* are of course
The devil is in the details (and in the arguments in Lukasz' code). The defaults
for the two functions are different: wilcox.test uses an exact test (which is
not available in kruskal.test afaik) for your data, and uses the continuity
correction if the normal approximation is requested (neither
Dan,
It depends on what you want to achieve. I suspect you just want to remove
missing values before summing; if so, consider
sapply(x, sum, na.rm=TRUE)
instead. To make your code run, try
sapply(x, function(x) sum(!is.na(x)))
However, this would just count the number of non-missing
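A small sketch of the difference between the two calls (toy list, names made up):

```r
x <- list(a = c(1, 2, NA), b = c(4, NA, NA))
sapply(x, sum, na.rm = TRUE)           # sums ignoring NAs: a = 3, b = 4
sapply(x, function(v) sum(!is.na(v)))  # counts non-missing values: a = 2, b = 1
```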
I’m not aware of any, but if you really want this, it should be possible to
modify the code of any of the functions you propose and delete the part doing
the translation. I’m not sure that this is a good idea, though; either your
matrices are *truly* centered, then it doesn’t make a difference,
Not sure what you want to test here with two matrices, but reading the manual
helps here as well:
y: a vector; ignored if x is a matrix.
x and y are matrices in your example, so it comes as no surprise that you get
different results. On top of that, your manual calculation is not correct
it works here). Note that
something like
chisq.test(as.vector(x), as.vector(y))
will give a different test, i.e. based on a contingency table of x cross y).
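To see what that call tabulates, compare with table() on the flattened matrices (a sketch with made-up 2x2 matrices):

```r
x <- matrix(c(1, 2, 2, 1), nrow = 2)
y <- matrix(c(1, 1, 2, 2), nrow = 2)
table(as.vector(x), as.vector(y))  # the x-by-y contingency table chisq.test would analyze
```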
M.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-project.org] On Behalf Of Meyners, Michael
-project.org
Cc: Meyners, Michael
Subject: RE: [R] Pearson chi-square test
Dear Michael,
Thanks very much for your answers!
The purpose of my analysis is to test whether the contingency table x is
different from the contingency table y.
Or, to put it differently, whether there is a significant difference
Vijayan,
I cannot find an error in your code, but I had a look at the code of
triangle.test -- unless I'm missing something, it contains a bug. If you study
the way in which the matrix pref is updated, you find that the vector
preference is compared to 1, 2 and 3 instead of X, Y and Z as it
)
}
** end updated code for triangle.test
-Original Message-
From: Francois Husson
Sent: Wednesday, June 08, 2011 14:43
To: Meyners, Michael
Subject: Re: [R] library(SenoMineR)- Triangle Test Query
Dear Vijayan, dear Michael,
Indeed there was an error in the function
I suspect you need to give more information/background on the data (though this
is not primarily an R-related question; you might want to try other resources
instead). Unless I'm missing something here, I cannot think of ANY reasonable
test: A permutation (using permtest or anything else) would
John,
Why would you want to fit the model without intercept if you seemingly need it?
Anyway, I assume that the intercept from your first model just moves into the
random effects -- you have intercepts there for worker and day, so any of these
(or both) will absorb it. No surprise that the
I assume you installed WinEdt 6. See
https://stat.ethz.ch/pipermail/r-help/2010-May/238540.html
and related messages. RWinEdt does not work (yet) with WinEdt 6, so you'll have
to downgrade back to WinEdt 5.x (or use another editor for the time being).
HTH, Michael
-Original Message-
Strange, the following works reproducibly on my machine (Windows 2000
Pro):
options(scipen = 50, digits = 5)
x = c(1e7, 2e7)
?barplot
barplot(x)
while I also get scientific notation with your code. After I called ?barplot
once, I'm incapable of getting the scientific notation again, though...
But I
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Uwe Dippel
Sent: Friday, 29 January 2010 11:57
To: r-help@r-project.org
Subject: [R] Explanation w.r.t. rbind, please!
This is what I tried:
num.vec <-
?p.adjust
Apply that to a vector containing all p values you get from
wilcox.exact. Alternatively, multiply each p value by the number of
comparisons you perform, see any textbook for that. You might want to
consider a less conservative correction, though.
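The two routes above are equivalent; a sketch with hypothetical p values:

```r
pvals <- c(0.01, 0.04, 0.30)
p.adjust(pvals, method = "bonferroni")  # Bonferroni-adjusted p values
pmin(1, pvals * length(pvals))          # manual equivalent: multiply, cap at 1
```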
HTH, Michael
-Original Message-
(i) you EITHER correct the p value (by multiplying by 8 in your case) OR
you use the Bonferroni-threshold of 0.05/8, not both. If you correct the
p values, your threshold remains 0.05. If you use 0.05/8, you use the
original p values.
(ii) Yes, if the p value is 0.15, then the corrected one for 8
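The either/or logic in (i) can be sketched with a hypothetical p value from 8 comparisons:

```r
p <- 0.004; k <- 8
(p * k) < 0.05  # route (i): corrected p value against 0.05
p < (0.05 / k)  # route (ii): raw p value against the Bonferroni threshold
# both routes give the same decision; never apply both corrections at once
```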
Etienne,
I don't see the point in avoiding some 'special' packages. If you are
willing to change your mind in this regard, try something like
library()
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Etienne Stockhausen
Sorry, wrong button. Below a hopefully more helpful solution...
Etienne,
I don't see the point in avoiding some 'special' packages. If you are
willing to change your mind in this regard, try one of the following
solutions that work for me:
library(combinat)
apply(mat, 2, function(x)
Well, it seems this is an assignment, so you should REALLY work it out on your
own and not ask the list. Foolish me...
M.
-Original Message-
From: einohr2...@web.de [mailto:einohr2...@web.de]
Sent: Wednesday, 20 January 2010 19:32
To: Meyners,Michael,LAUSANNE,AppliedMathematics; r-help@r
Are you sure you called
library(Quantreg)
before calling any function?
M.
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Gough Lauren
Sent: Thursday, 7 January 2010 13:44
To: r-help@r-project.org
Subject: [R] Quantreg -
(asdf)
with asdf replaced by the correct package name, and mind the spelling!
Michael
-Original Message-
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Thursday, 7 January 2010 14:22
To: Meyners,Michael,LAUSANNE,AppliedMathematics
Cc: Gough Lauren; r-help@r-project.org
Moreno,
I leave the discussion on the mixed models to others (you might also consider
the SIG group on mixed models for this), but here are a few hints to
make your code more accessible:
* The . in updating a formula is substituted with the respective old
formula (depending on the side), but is
Or, more generally (if you need to include more than just one variable from
TestData), something like
by(TestData, LEAID, function(x) median(x$RATIO))
Agreed, this is less appealing for the given example than Ista's code, but
might help to better understand by and to generalize its use to other
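A self-contained toy version of that pattern (data frame and column names made up; note the grouping variable is referenced via the data frame here):

```r
TestData <- data.frame(LEAID = c("a", "a", "b", "b"),
                       RATIO = c(1, 3, 5, 9))
by(TestData, TestData$LEAID, function(x) median(x$RATIO))  # medians per LEAID: 2 and 7
```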
Daniel,
quick guess: There has been a major change in the package some time ago,
with quite a few functions vanishing and others added. So I guess your
recycled code was based on an older version of multcomp. My current
version (1.1-0) does not have csimtest anymore either. I guess you want
to
Julia, see
http://www.r-project.org/ -> Documentation -> Manuals (-> An Introduction to R)
(or use: http://cran.r-project.org/manuals.html)
for a starting point. In addition, you might want to check the annotated list
of books to see which one might best fit your needs:
FAQ 7.31:
http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Peter Tillmann
Sent: Wednesday, 4 November 2009 13:50
To:
You don't tell us which function you use, but fixing the zlim argument
in image or related functions should do the trick.
M.
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Marion Dumas
Sent: Tuesday, 3 November 2009 14:38
Or you open a new graphics window / device as part of the loop, see, e.g.,
?windows. Alternatively, you may write the content of the graphics device to a
file for each iteration, see, e.g., ?savePlot (but you'd want to make sure that
you have a different filename in each iteration, otherwise,
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Christoph Heuck
Sent: Thursday, 15 October 2009 17:51
To: r-help@r-project.org
Subject: [R] calculating p-values by row for data frames
Hello R-users,
I am looking for
It's always worthwhile to look at the articles by Pitman (and maybe the
textbook by Fisher, if you have access to it); Welch is a nice paper,
too, but might be pretty technical to learn about the area. I don't
know any of the textbooks except Edgington (which is in its 4th edition
now with
Robert,
you can do the corresponding paired comparisons using wilcox.test. As far as I
know, there is no such general correction as Tukey's HSD for the
Kruskal-Wallis-Test. However, if you have indeed only 3 groups (resulting in 3
paired comparisons), the intersection-union principle and the
Message-
From: Robert Kalicki
Sent: Wednesday, 14 October 2009 14:11
To: Meyners,Michael,LAUSANNE,AppliedMathematics
Subject: RE: [R] post-hoc test with kruskal.test()
Hi Michael,
Thank you very much for your clear and prompt answer.
Is it still valid if I use an unpaired comparison
a local statistician to clarify these
issues.
HTH, Michael
From: Lina Rusyte [mailto:liner...@yahoo.co.uk]
Sent: Tuesday, 29 September 2009 18:03
To: Meyners,Michael,LAUSANNE,AppliedMathematics
Subject: RE: [R] Probability of data
?paste
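For instance (base R; example strings made up):

```r
paste("foo", "bar", sep = "")  # "foobar"
paste0("foo", "bar")           # same result, shorter to type
```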
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Dr. Arie Ben David
Sent: Wednesday, 30 September 2009 10:22
To: r-help@r-project.org
Subject: [R] How do I do simple string concatenation in R?
Dear R gurus
How do I
Lina, check whether something like
data.frame(density(rnorm(10))[1:2])
contains the information you want. Otherwise, try to be (much) more specific in
what you want so that we do not need to guess (and of course provide minimal,
self-contained, reproducible code). That has a higher chance to
Let's assume you have just three observations, and x = 1:3 for your
observations.
Predictor 1: y = x^2
Predictor 2: y = 1 if x=1
             y = 4 if x=2
             y = 9 if x=3
             y = 0 elsewhere
These predictors are obviously not the same,
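A quick sketch of the setup above, confirming the two predictors agree on the observed x but not elsewhere:

```r
x <- 1:3
pred1 <- x^2
pred2 <- ifelse(x == 1, 1, ifelse(x == 2, 4, ifelse(x == 3, 9, 0)))
all.equal(pred1, pred2)  # TRUE on the observed x
(2.5)^2                  # 6.25 under predictor 1, while predictor 2 gives 0 at x = 2.5
```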
See the respective help files. The continuity correction only affects
the normal approximation in wilcox.test. With such small sample sizes,
the default evaluation is exact, so it doesn't change anything. In
contrast, kruskal.test is incapable of computing exact values but always
uses the
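The effect of those defaults can be sketched with tiny made-up samples:

```r
x <- c(1.1, 2.3, 3.8); y <- c(2.0, 4.1, 5.6)
wilcox.test(x, y)$p.value                                  # exact test (small n, no ties)
wilcox.test(x, y, exact = FALSE)$p.value                   # normal approximation, continuity-corrected
wilcox.test(x, y, exact = FALSE, correct = FALSE)$p.value  # approximation without correction
```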
Alex,
It's mainly speculation, as I cannot check the Excel add-in nor Vassar, but
I'll give it a try.
For the Friedman test: Results from R coincide with those reported by Hollander &
Wolfe, which I'd take as a point in favor of R. In any case, my guess is that
ties are handled differently
Taking the example from ?biplot.princomp with reproducible code (hint,
hint):
biplot(princomp(USArrests))
biplot(princomp(USArrests), col=c(2,3), cex=c(1/2, 2))
clearly changes the color and font size on my system. For changing the
point symbols (which are text by default), try setting xlabs
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Alex Roy
Sent: Friday, 14 August 2009 12:05
To: r-help@r-project.org
Subject: [R] Permutation test and R2 problem
Hi,
I have optimized the shrinkage parameter
Nancy (?),
see ?chisq.test (in particular the examples and the comment on expected
frequency there).
A rule of thumb (see any basic textbook) for the chi-squared
approximation to be adequate is that the expected value in each cell is at
least 5. The warning tells you that this does not hold true for
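You can inspect the expected counts directly (a toy 2x2 table where every expected cell is below 5, hence the warning being suppressed explicitly here):

```r
tab <- matrix(c(2, 3, 4, 1), nrow = 2)
res <- suppressWarnings(chisq.test(tab))  # suppresses exactly this approximation warning
res$expected                              # all expected counts are < 5 here
```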
Hi unknown,
As a quick patch, try something like
mydata$Set <- 1
mydata$Set[c(sample(mydata$ID[mydata$Type == "A"],
ifelse(runif(1) < 1/2, 2, 3)), sample(mydata$ID[mydata$Type == "B"], 3))[1:5]
] <- 2
HTH, Michael
-Original Message-
From: r-help-boun...@r-project.org
SD in y is more than 15 times (!!!) larger than in x and z,
respectively, and hence SD of the mean y is also larger. 100,000 values
are not enough to stabilize this. You could have obtained 0.09 or even
larger as well. Try y with different seeds, y varies much more than x
and z do. Or check the
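A quick simulation sketch (made-up SDs, in the same spirit as the x/y/z example): the SD of a sample mean scales with the SD of the underlying variable, so means of the high-SD variable vary correspondingly more.

```r
set.seed(1)
m_small <- replicate(2000, mean(rnorm(100, sd = 1)))   # means of a low-SD variable
m_large <- replicate(2000, mean(rnorm(100, sd = 15)))  # means of a high-SD variable
sd(m_large) / sd(m_small)                              # close to 15
```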
Robert, Tom, Peter and all,
If I remember correctly (don't have my copy at hand right now),
Edgington and Onghena differentiate between randomization tests and
permutation tests along the following lines:
Randomization test: Apply only to randomized experiments, for which we
consider the
Rolf,
as you explicitly asked for a comment on your proposal: It is generally
equivalent to McNemar's test and maybe even more appropriate because of
the asymptotics involved in the chi-squared distribution, which might
not be too good with n=12. In some more detail:
McNemar's test basically
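For reference, the asymptotic and exact versions can be compared on hypothetical discordant counts (b and c made up here; the exact counterpart is a binomial/sign test on the discordant pairs, which avoids the chi-squared asymptotics):

```r
b <- 7; c_ <- 2  # hypothetical discordant pair counts
mcnemar.test(matrix(c(3, b, c_, 0), nrow = 2))$p.value  # chi-squared approximation
binom.test(b, b + c_)$p.value                           # exact version
```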