Jim,
In the glm object I can find the contrasts of the main treats vs the
first i.e. 2v1, 3v1 and
4v1 ... however I would like to get the complete set including 3v2, 4v2,
and 4v3 ... along with
the Std. Errors of all contrasts.
Your best all-round approach would be to use the multcomp package.
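A minimal sketch of that route, with invented stand-ins for the poster's fit (the data frame `dat`, response `y`, and four-level factor `treat` are hypothetical); glht() reports every pairwise contrast with its standard error:

```r
library(multcomp)

## Toy data standing in for the poster's four treatments
set.seed(1)
dat <- data.frame(treat = gl(4, 10), y = rnorm(40))
mod <- glm(y ~ treat, data = dat)

## "Tukey" requests all pairwise contrasts -- 2-1, 3-1, 4-1, 3-2, 4-2, 4-3 --
## each with its estimate, Std. Error, and adjusted p value
summary(glht(mod, linfct = mcp(treat = "Tukey")))
```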
The models you need to compare are the following:
##
Aov.mod <- aov(Y ~ V * N + Error(B/V/N), data = oats)
Lme.mod <- lme(Y ~ V * N, random = ~1 | B/V/N, data = oats)
Lmer.mod <- lmer(Y ~ V * N + (1|B) + (1|B:V) + (1|B:N), data = oats)
summary(Aov.mod)
anova(Lme.mod)
anova(Lmer.mod)
HTH, Mark Difford
Hi Anna,
How can I change the barplot so that the left hand axis scales from 0 to
15 and the right hand
axis from 0 to 5?
Try this:
par(mfrow=c(1,1), mai=c(1.0,1.0,1.0,1.0))
Plot1 <- barplot(rbind(Y1,Y2), beside=TRUE, axes=TRUE, names.arg=c("a","b"),
ylim=c(0,15), xlim=c(1,9), space=c(0,1),
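One way to finish that off, sketched with made-up data (the Y1 and Y2 here are toy vectors): draw everything on the 0-15 scale, rescale the second series by 3, and label the right-hand axis 0-5:

```r
## Toy data; the poster's Y1 and Y2 are assumed to be numeric vectors
Y1 <- c(4, 9, 14)
Y2 <- c(1, 3, 5)

barplot(rbind(Y1, Y2 * 3), beside = TRUE, ylim = c(0, 15),
        names.arg = c("a", "b", "c"), axes = FALSE)
axis(side = 2, at = seq(0, 15, by = 5))               # left axis: 0 to 15
axis(side = 4, at = seq(0, 15, by = 3), labels = 0:5) # right axis reads 0 to 5
```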
Hi Selthy,
I'd like to use a Wilcoxon Rank Sum test to compare two populations of
values. Further, I'd like
to do this simultaneously for 114 sets of values.
Well, you read your data set into R using:
##
?read.table
?read.csv
There are other ways to bring in data. Save the import to a
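Once the data are in, the 114 simultaneous tests can be sketched like this (the matrices `A` and `B` are invented stand-ins, one row per set of values):

```r
## Invented example data: 114 sets of values in each of two groups
set.seed(1)
A <- matrix(rnorm(114 * 10), nrow = 114)
B <- matrix(rnorm(114 * 10, mean = 0.5), nrow = 114)

## One Wilcoxon rank-sum test per row
p <- sapply(seq_len(nrow(A)), function(i) wilcox.test(A[i, ], B[i, ])$p.value)

## Adjust for running 114 tests at once
p.adj <- p.adjust(p, method = "holm")
```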
Jane,
Does someone know how to do fa and cfa with strong skewed data?
Your best option might be to use a robustly estimated covariance matrix as
input (see packages robust/robustbase).
Or you could turn to packages FAiR or lavaan (maybe also OpenMx). Or you
could try soft modelling via
Hi Raquel,
routine in R to compute polychoric matrix to more than 2 categorical
variables? I tried polycor
package, but it seems to be suited only to 2-dimensional problems.
But surely ?hetcor (in package polycor) does it.
Regards, Mark.
Hi Nicola,
In a few words: does this row indicate a global effect of the predictor
'cat'
or a more specific passage?
It indicates a more specific passage. Use anova(m7) for global/omnibus.
Check this for yourself by fitting the model with different contrasts. The
default contrasts in R are
Hi Petar,
I dunno why, but I cannot make randtes[t].coinertia() from ade4 package
working. I have two nice distance matrices (Euclidean):
Could anyone help with this?
Yes (sort of). The test has not yet been implemented for dudi.pco, as the
message at the end of your listing tells you.
Guy,
For a partial least squares approach look at packages plspm and pathmox.
Also look at sem.additions.
Regards, Mark.
--
View this message in context:
http://r.789695.n4.nabble.com/path-analysis-tp2528558p2530207.html
Sent from the R help mailing list archive at Nabble.com.
I'd prefer to stick with JPEG, TIFF, PNG, or the like. I'm not sure EPS
would fly.
Preferring to stick with bitmap formats (like JPEG, TIFF, PNG) is likely to
give you the jagged lines and other distortions you profess to want to
avoid.
EPS (encapsulated postscript, which handles
Hi Paul,
I have a data set for which PCA based between group analysis (BGA) gives
significant results but CA-BGA does not.
I am having difficulty finding a reliable method for deciding which
ordination
technique is most appropriate.
Reliability really comes down to you thinking about
Hi Liviu,
tmp <- latex(.object, cdec=c(2,2), title="")
class(tmp)
[1] "latex"
html(tmp)
/tmp/RtmprfPwzw/file7e72f7a7.tex:9: Warning: Command not found:
\tabularnewline
Giving up command: \...@hevea@amper
/tmp/RtmprfPwzw/file7e72f7a7.tex:11: Error while reading LaTeX:
This can be fixed by adding the following
to the preamble of the *.tex file:
\providecommand{\tabularnewline}{\\}
Regards, Mark.
Liviu Andronic wrote:
Hello
On 10/3/09, Mark Difford mark_diff...@yahoo.co.uk wrote:
This has nothing to do with Hmisc or hevea.
Although I have LyX installed, I don't quite understand where LyX
comes
Hi Steve,
However, I am finding that ... the trendline ... continues to run beyond
this data segment
and continues until it intersects the vertical axes at each side of the
plot.
Your best option is probably Prof. Fox's reg.line function in package car.
##
library(car)
?reg.line
reg.line
Peng Yu wrote:
Some webpage has described prcomp and princomp, but I am still not
quite sure what the major difference between them is.
The main difference, which could be extracted from the information given in
the help files, is that prcomp uses the singular value decomposition [i.e.
does
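The difference can be checked directly; up to the sign of each axis the two give the same loadings when both work from the correlation matrix:

```r
## prcomp: SVD of the (scaled) data matrix; princomp: eigendecomposition of
## the covariance/correlation matrix (with divisor n rather than n - 1)
pc1 <- prcomp(USArrests, scale. = TRUE)
pc2 <- princomp(USArrests, cor = TRUE)

## Loadings agree up to sign
round(abs(pc1$rotation) - abs(unclass(pc2$loadings)), 10)
```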
Hi Michael,
How do you control what is the (intercept) in the model returned by the
lme function and is there a way to still be able to refer to all groups
and
timepoints in there without referring to intercept?
Here is some general help. The intercept is controlled by the contrasts that
Peter Flom wrote:
I am puzzled by the performance of LME in situations where there are
missing data. As I
understand it, one of the strengths of this sort of model is how well it
deals with missing
data, yet lme requires nonmissing data.
You are confusing missing data with an unbalanced design.
Mark Difford replied
You are confusing missing data with an unbalanced design. A strength of LME
is that it copes with the latter, which aov() does not.
Thanks for your reply, but I don't believe I am confused with respect
Hi David,
Now when I turn on R again the script is now completely blank.
This happened to me about 4--5 months ago under Vista. I cannot quite
remember what I did but I think I got the script working by opening it in
another editor (a hex editor would do) and removing either the first few
Hi Phil,
So far for logistic regression I've tried glm(MASS) and lrm (Design) and
found there is a big
difference.
Be sure that you mean what you say, that you are saying what you mean, and
that you know what you mean when making such statements, especially on this
list. glm is not in MASS; it is in package stats, part of base R.
Hi Chris,
My ideal would be to gather the information onto the clipboard so I
could paste it into Excel and do the formatting there, but any approach
would be better than what I have now.
I would never use Excel for this since there are far superior tools
available. But it is very easy to
tdm wrote:
OK, I think I've figured it out, the predict of lrm didn't seem to pass
it through the logistic
function. If I do this then the value is similar to that of lm. Is this
by design? Why would it
be so?
Please take some time to read the help files on these functions so that you
at
Elisabeth,
You should listen to Ted (Harding). He answered your question with:
the vertical axis is scaled logarithmically with the
numerical annotations corresponding to the *raw* values of Y,
not to their log-transformed values. Therefore it does not matter
what base of logarithms is
Hi All,
You can also add a line using lines() if you transform in the call using the
same log-base---but not via R's log="y" argument (because of what's stored
in par("yaxp")).
##
par(mfrow=c(1,3))
plot(1:10, log="y")
lines(log10(1:10))
par("yaxp")
plot(log10(1:10), yaxt="n")
axis(side=2,
in advance for your help.
Regards,
Mark Difford.
Also look at the mgp option under ?par.
This allows one to set the margin line for axis title, axis labels and axis
line...
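For instance (a throwaway example; mgp gives the margin line, in lines of text, for the axis title, the axis labels, and the axis line respectively):

```r
op <- par(mfrow = c(1, 2))
plot(1:10, xlab = "default: mgp = c(3, 1, 0)")
par(mgp = c(2, 0.5, 0))   # pull the title and labels closer to the plot
plot(1:10, xlab = "mgp = c(2, 0.5, 0)")
par(op)
```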
Regards,
Mark Difford.
Marc Schwartz wrote:
On Tue, 2007-09-25 at 15:39 +0200, Christian Schäfer wrote:
Hi,
my x-axis contains labels that consist of two lines
permutation tests of the match.
I hope this is useful.
Regards,
Mark Difford.
Simon Pickett wrote:
This is a general statistics question so I'm sorry if its outside the
field of r help.
Anyway, I have a suite of female and male traits and I have made a matrix
of correlation coefficients using
Hi Edna,
When creating a matrix, is it better to use the structure function or the
matrix function...?
I hope you have a huge (empty) jar in the kitchen, and that your pantry is
empty.
R isn't too difficult, except if you're trying to do stats (and don't know
what you are doing --- though I
Hi Rthoughts,
I am currently discouraged by the use of r. I cannot figure out how to
use it despite
extensive searches. Can anyone help me with getting started? How can I
import
a txt file with series...
There are piles of documents that you could (and should) read. I am
surprised that you
platforms.
I will look further into them.
As for everyone else who sent e-mails, thank you. I have printed them out
and will look into them.
Mark Difford wrote:
Hi Rthoughts,
I am currently discouraged by the use of r. I cannot figure out how to
use it despite
extensive searches. Can
of computers but command lines for many programs is
something that can throw me sometimes!
Regards, Seb.
Mark Difford wrote:
Hi Rthoughts,
It isn't clear what you mean. When you install R, the installation
program usually puts an icon on your desktop that you can click on to run
Hi Stiffler,
I was wondering why the plot() command ignores the datatype when
displaying axis labels...
plot() doesn't ignore the datatype:
x <- as.integer(c(1,2,3))
y <- x
typeof(x)
[1] "integer"
mode(x)
[1] "numeric"
plot(x,y) calls xy.coords(), which recasts x as: x = as.double(x), which is
Hi Yianni,
This just proves that you should be using R as your calculator, and not the
other one!
Regards, Mark.
gatemaze wrote:
Hello,
on a simple linear model the values produced from the fitted(model)
function
are difference from manually calculating on calc. Will anyone have a
Hi Conny,
It still isn't clear what your question is, but a density plot simply
shows you the distribution of your data, say a set of measurements of
something. Think of it as a modern replacement for the histogram.
See
http://en.wikipedia.org/wiki/Density_estimation
for greater insight.
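A short illustration of the point, with simulated measurements:

```r
set.seed(1)
x <- rnorm(200)                        # some made-up measurements
hist(x, freq = FALSE, col = "grey90")  # the old histogram view
lines(density(x), lwd = 2)             # its modern replacement, overlaid
```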
, make perfect sense.
HTH, Mark.
Duncan Murdoch-2 wrote:
On 19/02/2008 5:40 PM, Stiffler wrote:
Mark Difford wrote:
I was wondering why the plot() command ignores the datatype when
displaying axis labels...
plot() doesn't ignore the datatype:
[...]
plot(x,y) calls xy.coords(), which
Hi Stephen,
Hopefully you will get an answer from one of the experts on mixed models who
subscribe to this list. However, you should know that both lme() and lmer()
currently have anova() methods. The first will give you p-values (but no
SS), and the second will give you SS (but no p-values).
Hi Stephen,
Also i have read in Quinn and Keough 2002, design and analysis of
experiments for
biologists, that a variance component analysis should only be conducted
after a rejection
of the null hypothesis of no variance at that level.
Once again the caveat: there are experts on this list
Hi Stephen,
Slip of the dactylus: lm() does not, of course, take a fixed=arg. So you
need
To recap:
mod.rand <- lme(fixed=y ~ x, random=~x|Site, data=...)
mod.fix <- lm(y ~ x, data=...) ## or
## mod.fix <- lm(formula=y ~ x, data=...)
Bye.
Mark Difford wrote:
Hi Stephen,
Also i have
Hi Ivan,
It appears that xYplot, unlike standard xyplot (or coplot to that
matter)
does not accept factors as x variable in formula.
To add to what you have said. It may not be too well documented in ?xYplot,
but xYplot() is really designed to do a number of very useful things with
two sets
Hi Mirela,
Are the relative R^2 values the CP values?
No. CP is your complexity parameter.
I’ve read that the R^2 = 1-rel error, so I am assuming that in my case
this
would be 1-0.64949. Is this correct?
Yes. See ?rsq.rpart, and run the example, which I've copied below.
##
par(ask=TRUE)
Hi Everyone,
Please don't denigrate the capabilities of GNUplot (Louise excluded). It
can, in fact, do some truly awesome stuff.
http://linuxgazette.net/133/luana.html
The PDF is worth a shot.
Cheers, Mark.
Louise Hoffman-3 wrote:
If you still want to then read ?write.table, that can
Hi Jaap,
Could anybody please direct me in finding an updated version of this
document, or help me
correct the code given in the file. The (out-of-date) code is as follows:
You are not helping yourself, or anyone else, by not including the error
messages you get when trying to execute your
Hi Rory,
There are several. Have a look at the gR Task Views. There you will also
find a link to the statnet suite, where you will find links to a dedicated
set of jstatsoft articles.
Regards, Mark.
Rory Winston wrote:
Hi all
On page 39 of this paper [1] by Andrew Lo there is a very
Hi Aditi,
Parts of _your_ code for the solution offered by Jerome Goudet are wrong;
see my comments.
famfit <- lmer(peg.no~1 + (1|family), na.action=na.omit, vcdf) ## use:
na.action=na.exclude
resfam <- residuals(famfit)
for( i in 1:length(colms))
+ {
+ print (Marker, i)
+
Perhaps I should have added the following: To see that it works, run the
following:
famfit <- lmer(peg.no~1 + (1|family), na.action=na.exclude, vcdf)
resfam <- residuals(famfit)
for( i in 1:length(colms))
{
print(coef(lm(resfam~colms[,i])))
}
Regards, Mark.
A Singh wrote:
Dear All,
I am
Hi Jean-Paul,
... since R is not able to extract residuals?
R can extract the residuals, but they are hidden in models with an error
structure:
##
str(aov(PH~Community*Mowing*Water + Error(Block)))
residuals(aov(PH~Community*Mowing*Water + Error(Block))$Block)
Hi Jean-Paul,
However, I've tried both solutions on my model, and I got different
residuals :...
What could be the difference between the two?
There is no difference. You have made a mistake.
##
tt <- data.frame(read.csv(file="tt.csv", sep=",")) ## imports your data set
T.aov -
Hi Timo,
I need functions to calculate Yule's Y or Cramérs Index... Are such
functions existing?
Also look at assocstats() in package vcd.
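For example (using a contingency table that ships with R):

```r
library(vcd)
tab <- margin.table(HairEyeColor, c(1, 2))  # Hair x Eye contingency table
assocstats(tab)  # phi, the contingency coefficient, and Cramér's V
```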
Regards, Mark.
Timo Stolz wrote:
Dear R-Users,
I need functions to calculate Yule's Y or Cramérs Index, in order to
correlate variables that are
Hi Rainer,
the question came up if it would be possible to add a picture
(saved on the HDD) to a graph (generated by plot()), which
we could not answer.
Yes. Look at package pixmap and, especially, at the examples sub s.logo() in
package ade4.
Regards, Mark.
Rainer M Krug-6 wrote:
Hi
Hi Tom,
For example, if I want to use the xy-pair bootstrap how do I indicate
this in summary.rq?
The general approach is documented under summary.rq (sub se option 5).
Shorter route is boot.rq, where examples are given.
## ?boot.rq
y <- rnorm(50)
x <- matrix(rnorm(100),50)
fit <- rq(y~x, tau =
Hannes,
been trying to read a text file that contains heading in the first line
in to R but cant.
You want the following:
##
TDat <- read.csv("small.txt", sep="\t")
TDat
str(TDat)
See ?read.csv
Regards, Mark.
hannesPretorius wrote:
Ok i feel pretty stupid.. been trying to read a text file
economically
--
David
On Aug 2, 2009, at 2:10 PM, Mark Difford wrote:
Hannes,
been trying to read a text file that contains heading in the first
line
in to R but cant.
You want the following:
##
TDat <- read.csv("small.txt", sep="\t")
TDat
str(TDat)
See ?read.csv
Regards
And I meant to add, but somehow forgot, that the default for read.csv is
header=TRUE (which is different from read.table, where it is FALSE).
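The defaults can be read straight off the functions themselves:

```r
formals(read.csv)$header    # TRUE
formals(read.table)$header  # FALSE
```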
Regards, Mark.
Mark Difford wrote:
Hi David,
I think he may also need to add the header=TRUE argument:
No! The argument header= is not required
Hannes,
When I read the entire text file in I get the following message
Then you have not followed the very simple instructions I gave you above,
which I repeat below. Or you have changed small.txt.
##
TDat <- read.csv("small.txt", sep="\t")
TDat
str(TDat)
Mark.
hannesPretorius wrote:
When
Emmanuel,
somewhat incomplete help pages : what in h*ll are valid arguments to
mcp() beyond Tukey ??? Currently, you'll have to dig in the source to
learn that...).
Not so: they are clearly stated in ?contrMat.
Regards, Mark.
Emmanuel Charpentier-3 wrote:
Le jeudi 30 juillet 2009 à
Hi Arthur,
This can be done quite easily using the appropriate arguments listed under
?par; and there are other approaches. Ready-made functions exist in several
packages. I tend to use ?add.scatter from package ade4. It's a short
function, so it's easy to customize it, but it works well
Hi Michael,
Pulling my hair out here trying to get something very simple to work. ...
I can't quite see what you are trying to do [and I am not sure that you
clearly state it], but you could make things easier and simpler by (1)
creating a factor to identify your groups of rows more cleanly
Hi Arthur,
I was wondering if there was a package that can make pretty R tables to
pdf.
You got through TeX/LateX, but PDF could be your terminus. Package Hmisc:
? summary.formula
and its various arguments and options. You can't get much better.
Hi Arthur,
Sorry, sent you down the wrong track: this will help you to get there:
http://biostat.mc.vanderbilt.edu/twiki/pub/Main/StatReport/summary.pdf
Regards, Mark.
Arthur Roberts wrote:
Hi, all,
All your comments have been very useful. I was wondering if there was
a package
Hi David,
Specifically, within each panel, I want to set the limits for x and y
equal to each other since it is paired data (using the max value of the
two).
In addition to the code Chuck Cleland sent you, you may want to square
things up by adding the argument: aspect = "iso" before the
Hi Birgitle,
... my variables are dichotomous factors, continuous (numerical) and
ordered factors. ...
Now I am confused what I should use to calculate the correlation using
all my variables
and how I could do that in R.
Professor Fox's package polycor will do this for you in a very nice
Hi Jörg,
I haven't found anything in par()...
No? Well don't bet your bottom $ on it (almost never in R). ?par (sub mgp).
Mark.
Jörg Groß wrote:
Hi,
How can I make the distance between an axis-label and the axis bigger?
I haven't found anything in par()...
Hi Birgitle,
You need to get this right if someone is going to spend their time helping
you. Your code doesn't work: You have specified more columns in colClasses
than you have in the provided data set.
TestPart <- read.table("TestPart.txt", header=TRUE, row.names=1,
na.strings="NA", colClasses =
Hi Birgitle,
It seems to be failing on those columns that have just a single entry (i.e
= 1, with the rest as 0; having just 1, an NA, and then 0s gets you
through). And there are other reasons for failure (in the call to get a
positive definite matrix).
The main problem lies in the calculation
missing values.
But I will try which variables I can finally use.
Many thanks again.
B.
Mark Difford wrote:
Hi Birgitle,
It seems to be failing on those columns that have just a single entry
(i.e = 1, with the rest as 0; having just 1, an NA, and then 0s gets
you through
Hi Kevin,
Where is the archive?
Start with this:
?RSiteSearch
HTH, Mark.
rkevinburton wrote:
I seem to remember this topic coming up before so I decided to look at the
archive and realized that I didn't know where it was. Is there a
searchable archive for this list? Thank you.
My
Hi Megan,
I would like to have an X-axis where the labels for the years line up
after every two bars
in the plot (there is one bar for hardwood, and another for softwood).
It isn't clear to me from your description what you really want (I found no
attachment)? What you seem to be trying to
Hi Tom,
1|ass%in%pop%in%fam
This is non-standard, but as you have found, it works. The correct
translation is in fact
1|fam/pop/ass
and not 1|ass/pop/fam as suggested by Harold Doran. Dropping %,
ass%in%pop%in%fam reads [means] as: nest ass in pop [= pop/ass], and then
nest this in fam ==
Hi Brandon,
...is it sufficient to leave the values as they are or should I generate
unique names for all
combinations of sleeve number and temperature, using something like
data$sleeve.in.temp <- factor(with(data, temp:sleeve)[drop=TRUE])
You might be luckier posting this on
what is the problem?
A solution is:
plot(1, 2, ylab=expression(paste("insects ", m^2)))
The problem is very much more difficult to determine.
stephen sefick wrote:
plot(1, 2, ylab=paste("insects", expression(m^2), sep=" "))
I get insects m^2
I would like m to the 2
what is the problem?
Hi Nikolaos,
My question again is: Why can't I reproduce the results? When I try a
simple anova without any random factors:
Lack of a right result probably has to do with the type of analysis of
variance that is being done. The default in R is to use so-called Type I
tests, for good reason.
Hi ...
Sorry, an e was erroneously elided from Ripley...
Mark Difford wrote:
Hi Nikolaos,
My question again is: Why can't I reproduce the results? When I try a
simple anova without any random factors:
Lack of a right result probably has to do with the type of analysis of
variance
Hi Lorenzo,
...but I would like to write that 5 <= k <= 15.
This is one way to do what you want
plot(1,1)
legend("topright", expression(paste(R[g]~k^{1/d[f]^{small}}~5 <= k, {} <= 15)))
HTH, Mark.
Lorenzo Isella wrote:
Dear All,
I am sure that what I am asking can be solved by less than a
Hi Lorenzo,
I may (?) have left something out. It isn't clear what ~ is supposed to
mean; perhaps it is just a spacer, or perhaps you meant the following:
plot(1,1)
legend("topright", expression(paste(R[g] %~~% k^{1/d[f]^{small}}, ~5 <= k,
{} <= 15)))
HTH, Mark.
Mark Difford wrote:
Hi Lorenzo
Hi Daren,
Small progress, ...
m4 <- list(m1=m1, m2=m2, m3=m3)
boxplot(m4)
It's always a good idea to have a look at your data first (assuming you
haven't). This shows that the reliable instrument is m2.
HTH, Mark.
Daren Tan wrote:
Small progress, I am relying on levene test to check
Have you read the documentation to either of the functions you are using?
?bartlett.test
Performs Bartlett's test of the null that the variances in each of the
groups (samples) are the same.
This explicitly tells you what is being tested, i.e. the null tested is that
var1 = var2.
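That null is easy to check against a built-in data set:

```r
## Bartlett's test: H0 is that all group variances are equal
bartlett.test(weight ~ group, data = PlantGrowth)
```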
?rnorm
Hi Richard,
The tests give different Fs and ps. I know this comes up every once in a
while on R-help so I did my homework. I see from these two threads:
This is not so, or it is not necessarily so. The error structure of your two
models is quite different, and this is (one reason) why the F-
...
To pick up on what Mark has said: it strikes me that this is related to the
simplex, where the bounded nature of the vector space means that normal
arithmetical operations (i.e. Euclidean) don't work---that is, they can be
used, but the results are wrong. Covariances and correlations for
Hi Jean-Pierre,
A general comment is that I think you need to think more carefully about
what you are trying to get out of your analysis. The random effects
structure you are aiming for could be stretching your data a little thin.
It might be a good idea to read through the archives of the
Hi Bill,
Since x, y,and z all have measurement errors attached, the proper way
to do the fit is with principal components analysis, and to use the
first component (called loadings in princomp output).
The easiest way for you to do this is to use the pcr [principal component
regression]
Hi Danilo,
I need to do a model II linear regression, but I could not find out how!!
The smatr package does so-called model II (major axis) regression.
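A sketch of the smatr route with invented data (recent versions of the package provide sma(); older ones supplied line.cis()):

```r
library(smatr)
set.seed(1)
x <- rnorm(50)
y <- 1.5 * x + rnorm(50, sd = 0.5)  # toy variables, both measured with error
fit <- sma(y ~ x, method = "MA")    # "MA" = major-axis (model II) regression
summary(fit)
```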
Regards, Mark.
Danilo Muniz wrote:
I need to do a model II linear regression, but I could not find out how!!
I tryed to use the lm
Hi Hadley,
There is also locfit, which is very highly regarded by some authorities
(e.g. Hastie, Tibs, and Friedman).
Cheers, Mark.
hadley wrote:
Hi all,
Do any packages implement density estimation in a modelling framework?
I want to be able to do something like:
dmodel -
often I have
seen analysts put the (usually) inaccurately determined analyte on x and the
spec reading on y.
HTH, Mark.
Dylan Beaudette-2 wrote:
On Friday 29 August 2008, Mark Difford wrote:
Hi Danilo,
I need to do a model II linear regression, but I could not find out
how!!
The smatr
Hi Stephen,
See packages:
coin
nparcomp
npmc
There is also kruskalmc() in package pgirmess
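With base R alone you can already get the omnibus test and a rank-based means separation (sketched on a built-in data set):

```r
kruskal.test(count ~ spray, data = InsectSprays)
pairwise.wilcox.test(InsectSprays$count, InsectSprays$spray,
                     p.adjust.method = "holm")
```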
Regards, Mark.
stephen sefick wrote:
I have insect data from twelve sites and like most environmental data
it is non-normal mostly. I would like to preform an anova and a means
seperation like
Hi Lara,
And I cant for the life of me work out why category one (semio1) is being
ignored, missing
etc.
Nothing is being ignored Lara --- but you are ignoring the fact that your
factors have been coded using the default contrasts in R, viz so-called
treatment or Dunnett contrasts. That is,
And perhaps I should also have added: fit your model without an intercept and
look at your coefficients. You should be able to work it out from there
quite easily. Anyway, you now have the main pieces.
Regards, Mark.
Mark Difford wrote:
Hi Lara,
And I cant for the life of me work out why
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student of Werner Steutzle's, c. 2003/04) did some
work on this. There is some useful code on Steutzle's website:
Whoops! I think that should be Stuetzle --- though I very much doubt that he
reads the list.
Mark Difford wrote:
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student
Hi Amin,
And I have just remembered that there is a function called curveRep in Frank
Harrell's Hmisc package that might be useful, even if not quite in the
channel of your enquiry. curveRep was added to the package after my
struggles, so I never used it and so don't know how well it performs
Genentech
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Mark Difford
Sent: Tuesday, September 09, 2008 1:23 PM
To: r-help@r-project.org
Subject: Re: [R] Modality Test
Hi Amin,
And I have just remembered that there is a function called curveRep
Hi Agustin,
Is there any way of having a greyed (ghosted) text...
Yes!
##
plot(1, type="n")
text(1, "Old Grey Whistle Test", col="slategray4", cex=2)
text(1, y=1.2, "OH!", col="grey95", cex=4)
Then plot what you want on top. If you export or plot to a PDF/ps device the
last-plotted items will overlie
Hi Yihui,
That's good, I like it! Very nice site.
Regards, Mark.
Yihui Xie wrote:
Well, his talk seems to have attracted a lot of people... You may
simply use gray text in your plot. Here is an example:
##
x = runif(10)
y = runif(10)
z
Hi Rodrigo,
I would like to use something like squares, triangles and circles (filled
and empty).
You would normally add this using points():
?points
##
plot(1:10, type="n")
points(1:5, pch=21:25, bg=1:5)
points(6:10, pch=21:25, bg=c(1,"darkgrey","cyan","bisque"))
points(6:10, y=rep(6,5), pch=1:5)
--
From: Mark Difford [EMAIL PROTECTED]
Sent: Saturday, September 13, 2008 9:21 AM
To: r-help@r-project.org
Subject: Re: [R] Symbols on a capscale object plot
Hi Rodrigo,
I would like to use something like squares, triangles and circles
(filled
and empty).
You
Hi Rogrido,
Sorry: The first points() call was missing a vital comma. It should have
been.
points(ord.obj$scores[mydf$Site=="MarkerA", ], pch=21, bg="red")
See ?[
Mark Difford wrote:
Hi Rodrigo,
Maybe if I can define a factor that specifies these groups and use this
factor to assign
Hi Roberto,
but I can't figure out the /(Lobe*Tissue) part...
This type of nesting is easier to do using lmer(). To do it using lme() you
have to generate the crossed factor yourself. Do something like this:
##
tfac <- with(vslt, interaction(Lobe, Tissue, drop=TRUE))
str(tfac); head(tfac)
Hi Roberto,
It's difficult to comment further on specifics without access to your data
set. A general point is that the output from summary(aov.object) is not
directly comparable with summary(lme.object). The latter gives you a summary
of a fitted linear regression model, not an analysis of
Hi Roberto,
The other thing you can do --- if you don't wish to step across to lmer(),
where you will be able to exactly replicate the crossed-factor error
structure --- is stay with aov(... + Error()), but fit the factor you are
interested in last. Assume it is Sex. Then fit your model as
Hi Rodrigo,
[apropos of Ward's method]
... we saw something like You must use it with Euclidean Distance...
Strictly speaking this is probably correct, as Ward's method does an
analysis of variance type of decomposition and so doesn't really make much
sense (I think) unless Euclidean