See library(multcomp) and
?glht
?contrMat
for several procedures for multiple comparisons.
The Newman-Keuls test is not on the list. The related Tukey method
is on the list.
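For comparison outside R: the same all-pairs Tukey idea is available in Python's statsmodels. A minimal sketch with made-up yield data (group names and values are hypothetical, not from any post above):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up yields for three hypothetical treatment groups.
yields = np.array([4.1, 4.3, 4.0, 4.2,   # group "a"
                   5.0, 5.2, 5.1, 4.9,   # group "b"
                   4.1, 4.2, 4.0, 4.3])  # group "c"
groups = np.repeat(["a", "b", "c"], 4)

# Tukey HSD over all pairwise group comparisons, controlling the
# family-wise error rate -- analogous to mcp(... = "Tukey") in multcomp.
res = pairwise_tukeyhsd(yields, groups, alpha=0.05)
print(res.summary())
```

Here group "b" clearly differs from "a" and "c", while "a" and "c" do not differ, so the summary flags exactly those two of the three pairwise comparisons.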
__
R-help@stat.math.ethz.ch mailing list
Hello,
I am sending the message again with the data file as txt, because csv seems
not to be accepted on the R-help list.
Data comes from a multiyear field experiment in which 4 levels of a
treatment (2, 3, 4, 6) are compared to see the effect on yield. It is a
randomized complete block design.
Dear R users,
I need professional help:
I am a relatively new R user, and I am writing my diploma thesis, in which I
have to conduct some multiple comparisons.
I am searching for a method which includes the interaction between fixed factors. The
following is my model:
Hello,
Data comes from a multiyear field experiment in which 4 levels of a
treatment (2, 3, 4, 6) are compared to see the effect on yield. It is a
randomized complete block design.
The SAS code follows:
options ls=95;
data uno;
infile 'data.csv' delimiter=';' firstobs=2;
input
Data is here.
I'm sorry.
Is it possible to do this analysis in R?
Yes, it is possible. The syntax isn't in place yet.
If you send me the complete SAS code and data for an example using slice,
I will duplicate it for you in the multcomp package in R. I will send that
to the R-help list.
Dear list,
I have to do an ANOVA analysis with one fixed effect A and one random
effect SUBJECT. TO do this I used aov in the form
aov.m1 <- aov(depvar ~ A + Error(SUBJECT/A))
My question is if I obtain significant differences within the strata,
does it make any sense to make multiple
As I understand from the WoodEnergy example in the HH package, you are
proposing to compute a separate lm for each level of the YEAR factor to
compare TIL means.
This is the way I used to do this kind of analysis.
But now, it is also possible, with PROC GLM, to fit only the general
model (variable ~
Yes, it can be done. It is not currently easy because multcomp doesn't
have the syntax yet. Making this easy is on Torsten's to-do list for the
multcomp package.
See the MMC.WoodEnergy example in the HH package. The current version on
CRAN is HH_1.17. Please see the discussion of this example
In the model:
lm.1 <- lm(variable ~ BLOC + TIL * YEAR, data = selvanera)
I found TIL*YEAR interaction significant. Then I am trying to compare
means of the different levels of TIL inside every YEAR using:
mc.2 <- glht(lm.1, linfct = mcp(TIL*YEAR = "Tukey"))
summary(mc.2, test = univariate())
To account for the strong serial correlation you
could try the lme() function of the nlme package.
There you can apply different covariance
structures in your linear model such as a
first-order autoregressive covariance structure (AR1).
example:
model.fit <- lme(response ~ condition * time,
Kyle,
You might try the Wilcoxon Rank Sum test (and there is also the paired
rank sum test) that may be useful. Both are found in R. There is an
application of the test in the textbook by Loucks, D.P., Stedinger J.R.,
and Haith, D., 1981. Water Resources Systems Planning and Analysis,
PAIRWISE KOLMOGOROV-SMIRNOV:
I don't know, but it looks like you could just type pairwise.t.test
at a command prompt, copy the code into an R script file, and create a
function pairwise.ks.test just by replacing the call to t.test with
one to ks.test. Try it. If you have trouble
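The same do-it-yourself idea can be sketched outside R as well. A minimal Python analogue using scipy's two-sample KS test with a plain Bonferroni correction (the function name and data are my own, not an existing API):

```python
from itertools import combinations
from scipy.stats import ks_2samp

def pairwise_ks_test(samples):
    """All pairwise two-sample Kolmogorov-Smirnov tests over a dict of
    name -> sample, with Bonferroni-adjusted p-values -- a crude analogue
    of what pairwise.t.test does for t-tests."""
    pairs = list(combinations(sorted(samples), 2))
    result = {}
    for a, b in pairs:
        stat, p = ks_2samp(samples[a], samples[b])
        result[(a, b)] = min(1.0, p * len(pairs))  # Bonferroni adjustment
    return result

adj = pairwise_ks_test({
    "s1": [0.1, 0.4, 0.5, 0.7, 0.9, 1.1, 1.3, 1.6, 1.8, 2.0],
    "s2": [0.2, 0.3, 0.6, 0.8, 1.0, 1.2, 1.4, 1.5, 1.7, 1.9],
    "s3": [10.1, 10.4, 10.5, 10.7, 10.9, 11.1, 11.3, 11.6, 11.8, 12.0],
})
```

With these made-up samples, s1 and s2 interleave (no detectable difference), while s3 is shifted far away and comes out significant even after adjustment.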
I am interested in a statistical comparison of multiple (5) time series
generated from modeling software (Hydrologic Simulation Program Fortran). The
model output simulates daily bacteria concentration in a stream. The multiple
time series are a result of varying our representation of the
Dear all
I am still fishing for help on this theme. In Zar (1999, pages 563-565) he
describes a Tukey-type multiple comparison for testing among proportions.
It involves comparisons of ranked proportions transformed to degrees. In the
following pages there are a couple of similar comparisons.
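For reference, the transform involved (a proportion mapped to the arcsine of its square root, expressed in degrees) is short enough to write down. A one-function sketch, assuming Zar's usual angular transformation:

```python
import math

def angular_transform(p):
    """Angular transformation of a proportion, as used in Zar's
    Tukey-type comparison: arcsine of the square root, in degrees,
    so p = 0 maps to 0 and p = 1 maps to 90."""
    return math.degrees(math.asin(math.sqrt(p)))
```

For example, a proportion of 0.5 transforms to 45 degrees; the transform stabilizes the variance of proportions near 0 and 1.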
Dear all
I have an experiment where plots either have or have not regrown (in 40
plots) after receiving 12 different herbicide treatments and a control (no
treatment). The data are significant with a chi-squared test, but to later
distinguish if the differences are significant between each of the 12
Dear R-help list,
I would like to perform multiple comparisons for lme. Can you tell me
whether my way of doing it is correct or not? Please note that I am neither a
statistician nor a mathematician, so some of this is sometimes
quite hard for me to understand. According to the previous helps on the
Michaël Coeurdassier [EMAIL PROTECTED] writes:
summary(csimtest(vect, vcov(lm1), cmatrix = contrMat(table(treatment), type = "Tukey"), df = 59))
Coefficients:
            Estimate t value Std.Err. p raw p Bonf p adj
Al800-Al100   -2.253 -10.467    0.213 0.000  0.000 0.000
Al600-Al100
Dear R-help list,
I would like to perform multiple comparisons for lme. Can you tell me
whether my way of doing it is correct or not? Please note that I am neither a
statistician nor a mathematician, so some of this is sometimes
quite hard for me to understand. According to the previous helps on the
I need to do multiple comparisons following nlme analysis (Compare
the effects of different treatments on a response measured
repeatedly over time;
fixed = response ~ treat*time).
If you have an interaction it does not really make sense to conduct a
multiple comparison because the difference
Dear Madam or Sir,
I need to do multiple comparisons following nlme analysis (Compare the
effects of different treatments on a response measured repeatedly over time;
fixed = response ~ treat*time). On the web I found the notion that one might
use the L argument from ANOVA. Do you have an
Thanks Rolf and Thomas,
It looks to me like what you are doing is trying to judge
significance of differences by non-overlap of single-sample
confidence intervals. While this is appealing, it's not quite
right.
Yes, this is what I am trying to do. Apparently, when the replicates are
the
Dear all,
I am conducting a full factorial analysis. I have one factor consisting
in algorithms, which I consider my treatments, and another factor made
of the problems I want to solve. For each problem I obtain a response
variable which is stochastic. I replicate the measure of this response
It looks to me like what you are doing is trying to judge
significance of differences by non-overlap of single-sample
confidence intervals. While this is appealing, it's not quite
right.
I just looked into my copy of Applied Nonparametric Statistics
(second ed.) by Wayne W. Daniel (Duxbury,
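A quick numeric illustration of why non-overlap of single-sample intervals is not quite the right criterion (all numbers made up): two 95% intervals can overlap while the test on the difference is still significant, because the standard error of a difference is only sqrt(2) times a single-sample SE, not twice it.

```python
import math

z = 1.959963984540054          # two-sided 95% normal quantile
m1, m2, se = 0.0, 3.0, 1.0     # hypothetical group means, common SE

ci1 = (m1 - z * se, m1 + z * se)
ci2 = (m2 - z * se, m2 + z * se)
intervals_overlap = ci1[1] > ci2[0]          # upper of CI1 vs lower of CI2

# The correct two-sample comparison uses the SE of the difference:
z_diff = (m2 - m1) / (se * math.sqrt(2.0))
difference_significant = z_diff > z
```

Here the intervals overlap (1.96 > 1.04), yet the z statistic for the difference is about 2.12, beyond the 1.96 cutoff, so judging by interval overlap would wrongly declare no difference.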
Great summary, Rolf.
Just one minor issue that recently bit me: In a data mining
application with hundreds of p-values, people want to make subtle
distinctions based on extremely small p-values. In such applications,
even a modest amount of skewness (to say nothing of outliers)
Barry Rowlingson [EMAIL PROTECTED] writes:
Liaw, Andy wrote:
Stupid me: fell into this trap:
0 == 0 == 0
[1] FALSE
Ouch!
Python's comparison operators don't have this trap, since they
unravel each comparison pair in a chain so that:
(A op1 B op2 C)
becomes:
(A
Peter Dalgaard [EMAIL PROTECTED] writes:
Barry Rowlingson [EMAIL PROTECTED] writes:
Liaw, Andy wrote:
Stupid me: fell into this trap:
0 == 0 == 0
[1] FALSE
Ouch!
Python's comparison operators don't have this trap, since they
unravel each comparison pair in a chain
On 25-Jul-04 Gabor Grothendieck wrote:
Don't know how Python does it, but it's not the only one, and
I believe it's often done like this. Rather than have a Boolean
type, NULL is defined to be false and anything else is true.
If the comparison is TRUE then the right argument is returned;
Ted.Harding at nessie.mcc.ac.uk writes:
:
: On 25-Jul-04 Gabor Grothendieck wrote:
: Don't know how Python does it but its not the only one and
: I believe its often done like this. Rather than have a Boolean
: type, NULL is defined to be false and anything else is true.
: If the comparison
Gabor Grothendieck [EMAIL PROTECTED] writes:
Ted.Harding at nessie.mcc.ac.uk writes:
:
: On 25-Jul-04 Gabor Grothendieck wrote:
: Don't know how Python does it but its not the only one and
: I believe its often done like this. Rather than have a Boolean
: type, NULL is defined to be
Liaw, Andy wrote:
Stupid me: fell into this trap:
0 == 0 == 0
[1] FALSE
Ouch!
Python's comparison operators don't have this trap, since they unravel
each comparison pair in a chain so that:
(A op1 B op2 C)
becomes:
(A op1 B) and (B op2 C)
If you want:
(A op1 B) op2 C
you have to put the
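The point above is easy to check in Python itself (whereas R parses the same expression left to right):

```python
# Python unravels a comparison chain: A op1 B op2 C is evaluated as
# (A op1 B) and (B op2 C), so the chain below is True.
chained = 0 == 0 == 0

# Forcing R's left-to-right grouping reproduces the trap:
# (0 == 0) is True, and True == 0 is False.
grouped = (0 == 0) == 0
```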
Patrick
I'm not familiar with the multicomp library from S-Plus, but there is a
package available through bioconductor (www.bioconductor.org) called
multtest that has some functions for multiple testing procedures and
adjusting p-values and computing false discovery rates.
Sean
--
Sean Davis,
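For the p-value adjustment part, the Benjamini-Hochberg step-up rule (the "BH" method of R's p.adjust, and one of the procedures multtest offers) is short enough to sketch by hand. A self-contained Python version, with a made-up vector of raw p-values:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values: the step-up rule
    behind p.adjust(method = "BH") in R."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, scaling by n/rank and
    # enforcing monotonicity of the adjusted values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

adj = bh_adjust([0.01, 0.02, 0.03, 0.50])
```

For these four raw p-values the adjusted values come out as 0.04, 0.04, 0.04, 0.50: the first three are each scaled up but capped by the monotonicity step.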
Is there a function for multiple comparison tests (similar to multicomp in
S-Plus) in an R package?
Thanks in advance for any hint...
Cheers,
Patrick Giraudoux
University of Franche-Comté
Department of Environmental Biology
EA3184 af. INRA
F-25030 Besançon Cedex
tel.: +33 381 665 745
There is now an R package called multcomp for general multiple
comparisons that does things similar to the S-Plus library you mentioned.
BTW, a search of the help archives for "multicomp" or "multiple
comparisons" brings this up.
HTH, Andy
I've never seen anything written about multiple comparisons,
as in the multcomp package or with TukeyHSD, but using a glm.
Do such procedures exist? Are they sensible?
Are there any packages in R that implement such comparisons?
Thank you.
--
Ken Knoblauch
Inserm U371
Cerveau et Vision
18
On Wednesday 05 November 2003 17:28, Ken Knoblauch wrote:
I've never seen anything written about multiple comparisons,
as in the multcomp package or with TukeyHSD, but using a glm.
Do such procedures exist? Are they sensible?
Are there any packages in R that implement such comparisons?
I've never seen anything written about multiple comparisons,
as in the multcomp package or with TukeyHSD, but using a glm.
Do such procedures exist? Are they sensible?
Are there any packages in R that implement such comparisons?
since version 0.4-0 in `multcomp':
0.4-0 (13.08.2003)
I'm having trouble finding an R equivalent to the S-Plus multicomp
function, which does post-hoc comparisons of treatment means in
ANOVAs. Am I missing something obvious?
Thanks, Peter
Peter Adler, PhD
Dept. Ecology, Evolution and Marine Biology
University of
On 10/07/03 09:05, Peter Adler wrote:
I'm having trouble finding an R equivalent to the S-Plus multicomp
function, which does post-hoc comparisons of treatment means in
ANOVAs. Am I missing something obvious?
The package called multcomp? I don't know if it is the same.
--
Jonathan Baron,
On Tue, 7 Oct 2003, Jonathan Baron wrote:
On 10/07/03 09:05, Peter Adler wrote:
I'm having trouble finding an R equivalent to the S-Plus multicomp
function, which does post-hoc comparisons of treatment means in
ANOVAs. Am I missing something obvious?
The package called multcomp? I