Hi Bo,
I can't seem to find the right set of commands to enable me to perform a
regression with cluster-adjusted standard errors.
Frank Harrell's Design package has ?bootcov and ?robcov, which will both do
it.
Regards, Mark.
Bo Cowgill wrote:
I can't seem to find the right set of
Hi Roberto,
The other thing you can do --- if you don't wish to step across to lmer(),
where you will be able to exactly replicate the crossed-factor error
structure --- is stay with aov(... + Error()), but fit the factor you are
interested in last. Assume it is Sex. Then fit your model as
Hi Rodrigo,
[apropos of Ward's method]
... we saw something like "You must use it with Euclidean Distance"...
Strictly speaking this is probably correct, as Ward's method does an
analysis of variance type of decomposition and so doesn't really make much
sense (I think) unless Euclidean
Hi Meir,
It's part of Prof. Ripley's package tree, but is not exported.
library(tree)
ls(asNamespace("tree"))
RSiteSearch("tree.matrix")
Regards, Mark.
Meir Preiszler wrote:
Hi,
Does anyone know where such a function can be found?
Thanks
Meir
Hi Roberto,
but I can't figure out the /(Lobe*Tissue) part...
This type of nesting is easier to do using lmer(). To do it using lme() you
have to generate the crossed factor yourself. Do something like this:
##
tfac <- with(vslt, interaction(Lobe, Tissue, drop=TRUE))
str(tfac); head(tfac)
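The crossed factor can then be handed to lme() as a grouping variable. A minimal sketch, where `y` is a hypothetical stand-in for the response in the vslt data set from the thread:

```r
## Hypothetical: 'y' stands in for the response variable in vslt
library(nlme)
mod <- lme(y ~ 1, random = ~1 | tfac, data = vslt)
summary(mod)
```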
Hi Roberto,
It's difficult to comment further on specifics without access to your data
set. A general point is that the output from summary(aov.object) is not
directly comparable with summary(lme.object). The latter gives you a summary
of a fitted linear regression model, not an analysis of
Hi Rodrigo,
I would like to use something like squares, triangles and circles (filled
and empty).
You would normally add this using points():
?points
##
plot(1:10, type="n")
points(1:5, pch=21:25, bg=1:5)
points(6:10, pch=21:25, bg=c(1, "darkgrey", "cyan", "bisque"))
points(6:10, y=rep(6,5), pch=1:5)
--
From: Mark Difford [EMAIL PROTECTED]
Sent: Saturday, September 13, 2008 9:21 AM
To: r-help@r-project.org
Subject: Re: [R] Symbols on a capscale object plot
Hi Rodrigo,
I would like to use something like squares, triangles and circles
(filled
and empty).
You
Hi Rodrigo,
Sorry: The first points() call was missing a vital comma. It should have
been.
points(ord.obj$scores[mydf$Site=="MarkerA", ], pch=21, bg="red")
See ?"["
Mark Difford wrote:
Hi Rodrigo,
Maybe if I can define a factor that specifies these groups and use this
factor to assign
Hi Agustin,
Is there any way of having a greyed (ghosted) text...
Yes!
##
plot(1, type="n")
text(1, "Old Grey Whistle Test", col="slategray4", cex=2)
text(1, y=1.2, "OH!", col="grey95", cex=4)
Then plot what you want on top. If you export or plot to a PDF/ps device the
last-plotted items will overlie
Hi Yihui,
That's good, I like it! Very nice site.
Regards, Mark.
Yihui Xie wrote:
Well, his talk seems to have attracted a lot of people... You may
simply use gray text in your plot. Here is an example:
##
x = runif(10)
y = runif(10)
z
Genentech
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On
Behalf Of Mark Difford
Sent: Tuesday, September 09, 2008 1:23 PM
To: r-help@r-project.org
Subject: Re: [R] Modality Test
Hi Amin,
And I have just remembered that there is a function called curveRep
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student of Werner Steutzle's, c. 2003/04) did some
work on this. There is some useful code on Steutzle's website:
Whoops! I think that should be Stuetzle --- though I very much doubt that he
reads the list.
Mark Difford wrote:
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student
Hi Amin,
And I have just remembered that there is a function called curveRep in Frank
Harrell's Hmisc package that might be useful, even if not quite in the
channel of your enquiry. curveRep was added to the package after my
struggles, so I never used it and so don't know how well it performs
Hi Lara,
And I can't for the life of me work out why category one (semio1) is being
ignored, missing
etc.
Nothing is being ignored Lara --- but you are ignoring the fact that your
factors have been coded using the default contrasts in R, viz so-called
treatment or Dunnett contrasts. That is,
And perhaps I should also have added: fit your model without an intercept and
look at your coefficients. You should be able to work it out from there
quite easily. Anyway, you now have the main pieces.
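To see the difference concretely, here is a small sketch on a built-in data set (not Lara's data): under the default treatment contrasts the intercept is the mean of the first factor level and the other coefficients are differences from it, while the no-intercept fit returns one mean per level.

```r
fit1 <- lm(weight ~ group, data = PlantGrowth)      # default treatment contrasts
fit2 <- lm(weight ~ group - 1, data = PlantGrowth)  # no intercept
coef(fit1)  # (Intercept) = mean of level "ctrl"; others are differences from it
coef(fit2)  # one coefficient per level = the group means
```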
Regards, Mark.
Mark Difford wrote:
Hi Lara,
And I can't for the life of me work out why
Hi Danilo,
I need to do a model II linear regression, but I could not find out how!!
The smatr package does so-called model II (major axis) regression.
Regards, Mark.
Danilo Muniz wrote:
I need to do a model II linear regression, but I could not find out how!!
I tried to use the lm
Hi Hadley,
There is also locfit, which is very highly regarded by some authorities
(e.g. Hastie, Tibs, and Friedman).
Cheers, Mark.
hadley wrote:
Hi all,
Do any packages implement density estimation in a modelling framework?
I want to be able to do something like:
dmodel -
often I have
seen analysts put the (usually) inaccurately determined analyte on x and the
spec reading on y.
HTH, Mark.
Dylan Beaudette-2 wrote:
On Friday 29 August 2008, Mark Difford wrote:
Hi Danilo,
I need to do a model II linear regression, but I could not find out
how!!
The smatr
Hi Stephen,
See packages:
coin
nparcomp
npmc
There is also kruskalmc() in package pgirmess
Regards, Mark.
stephen sefick wrote:
I have insect data from twelve sites and like most environmental data
it is non-normal mostly. I would like to perform an anova and a means
separation like
Hi Bill,
Since x, y, and z all have measurement errors attached, the proper way
to do the fit is with principal components analysis, and to use the
first component (called loadings in princomp output).
The easiest way for you to do this is to use the pcr [principal component
regression]
Hi Jean-Pierre,
A general comment is that I think you need to think more carefully about
what you are trying to get out of your analysis. The random effects
structure you are aiming for could be stretching your data a little thin.
It might be a good idea to read through the archives of the
...
To pick up on what Mark has said: it strikes me that this is related to the
simplex, where the bounded nature of the vector space means that normal
arithmetical operations (i.e. Euclidean) don't work---that is, they can be
used, but the results are wrong. Covariances and correlations for
Hi Richard,
The tests give different Fs and ps. I know this comes up every once in a
while on R-help so I did my homework. I see from these two threads:
This is not so, or it is not necessarily so. The error structure of your two
models is quite different, and this is (one reason) why the F-
Have you read the documentation to either of the functions you are using?
?bartlett.test
Performs Bartlett's test of the null that the variances in each of the
groups (samples) are the same.
This explicitly tells you what is being tested, i.e. the null tested is that
var1 = var2.
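For instance, on a built-in data set:

```r
## Null hypothesis: the variance of 'count' is the same in every spray group
bartlett.test(count ~ spray, data = InsectSprays)
```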
?rnorm
Hi Daren,
Small progress, ...
m4 <- list(m1=m1, m2=m2, m3=m3)
boxplot(m4)
It's always a good idea to have a look at your data first (assuming you
haven't). This shows that the reliable instrument is m2.
HTH, Mark.
Daren Tan wrote:
Small progress, I am relying on levene test to check
Hi Nikolaos,
My question again is: Why can't I reproduce the results? When I try a
simple anova without any random factors:
Lack of a right result probably has to do with the type of analysis of
variance that is being done. The default in R is to use so-called Type I
tests, for good reason.
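Type I (sequential) sums of squares depend on the order in which terms enter the model, which is easy to demonstrate with correlated predictors (a built-in data set here, not Nikolaos's):

```r
anova(lm(mpg ~ wt + hp, data = mtcars))  # wt fitted first
anova(lm(mpg ~ hp + wt, data = mtcars))  # hp fitted first: different SS per term
```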
Hi ...
Sorry, an e was erroneously elided from Ripley...
Mark Difford wrote:
Hi Nikolaos,
My question again is: Why can't I reproduce the results? When I try a
simple anova without any random factors:
Lack of a right result probably has to do with the type of analysis of
variance
Hi Lorenzo,
...but I would like to write that 5 <= k <= 15.
This is one way to do what you want
plot(1,1)
legend("topright", expression(paste(R[g]~k^{1/d[f]^{small}}~5<=k, {}<=15)))
HTH, Mark.
Lorenzo Isella wrote:
Dear All,
I am sure that what I am asking can be solved by less than a
Hi Lorenzo,
I may (?) have left something out. It isn't clear what ~ is supposed to
mean; perhaps it is just a spacer, or perhaps you meant the following:
plot(1,1)
legend("topright", expression(paste(R[g] %~~% k^{1/d[f]^{small}}, ~5<=k, {}<=15)))
HTH, Mark.
Mark Difford wrote:
Hi Lorenzo
what is the problem?
A solution is:
plot(1,2, ylab=expression(paste("insects ", m^2)))
The problem is very much more difficult to determine.
stephen sefick wrote:
plot(1,2, ylab=paste("insects", expression(m^2), sep=" "))
I get insects m^2
I would like m to the 2
what is the problem?
Hi Brandon,
...is it sufficient to leave the values as they are or should I generate
unique names for all
combinations of sleeve number and temperature, using something like
data$sleeve.in.temp <- factor(with(data, temp:sleeve)[drop=TRUE])
You might be luckier posting this on
Hi Tom,
1|ass%in%pop%in%fam
This is non-standard, but as you have found, it works. The correct
translation is in fact
1|fam/pop/ass
and not 1|ass/pop/fam as suggested by Harold Doran. Dropping %,
ass%in%pop%in%fam reads [means] as: nest ass in pop [= pop/ass], and then
nest this in fam ==
Hi Megan,
I would like to have an X-axis where the labels for the years line up
after every two bars
in the plot (there is one bar for hardwood, and another for softwood).
It isn't clear to me from your description what you really want (I found no
attachment). What you seem to be trying to
Hi Birgitle,
You need to get this right if someone is going to spend their time helping
you. Your code doesn't work: You have specified more columns in colClasses
than you have in the provided data set.
TestPart <- read.table("TestPart.txt", header=TRUE, row.names=1,
na.strings="NA", colClasses =
Hi Birgitle,
It seems to be failing on those columns that have just a single entry (i.e
= 1, with the rest as 0; having just 1, an NA, and then 0s gets you
through). And there are other reasons for failure (in the call to get a
positive definite matrix).
The main problem lies in the calculation
missing values.
But I will try which variables I can finally use.
Many thanks again.
B.
Mark Difford wrote:
Hi Birgitle,
It seems to be failing on those columns that have just a single entry
(i.e = 1, with the rest as 0; having just 1, an NA, and then 0s gets
you through
Hi Kevin,
Where is the archive?
Start with this:
?RSiteSearch
HTH, Mark.
rkevinburton wrote:
I seem to remember this topic coming up before so I decided to look at the
archive and realized that I didn't know where it was. Is there a
searchable archive for this list? Thank you.
My
Hi Birgitle,
... my variables are dichotomous factors, continuous (numerical) and
ordered factors. ...
Now I am confused what I should use to calculate the correlation using
all my variables
and how I could do that in R.
Professor Fox's package polycor will do this for you in a very nice
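A minimal sketch of the idea (assumes polycor is installed; `mydf` is a hypothetical data frame with the mixed variable types described):

```r
## hetcor() picks the appropriate correlation for each pair of variables:
## Pearson (numeric/numeric), polyserial (numeric/ordinal),
## polychoric (ordinal/ordinal)
library(polycor)
hetcor(mydf)
```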
Hi Jörg,
I haven't found anything in par()...
No? Well don't bet your bottom $ on it (almost never in R). ?par (sub mgp).
Mark.
Jörg Groß wrote:
Hi,
How can I make the distance between an axis-label and the axis bigger?
I haven't found anything in par()...
Hi David,
Specifically, within each panel, I want to set the limits for x and y
equal to each other since it is paired data (using the max value of the
two).
In addition to the code Chuck Cleland sent you, you may want to square
things up by adding the argument: aspect = "iso" before the
Hi Michael,
Pulling my hair out here trying to get something very simple to work. ...
I can't quite see what you are trying to do [and I am not sure that you
clearly state it], but you could make things easier and simpler by (1)
creating a factor to identify your groups of rows more cleanly
Hi Arthur,
I was wondering if there was a package that can make pretty R tables to
pdf.
You got through TeX/LaTeX, but PDF could be your terminus. Package Hmisc:
?summary.formula
and its various arguments and options. You can't get much better.
Hi Arthur,
Sorry, sent you down the wrong track: this will help you to get there:
http://biostat.mc.vanderbilt.edu/twiki/pub/Main/StatReport/summary.pdf
Regards, Mark.
Arthur Roberts wrote:
Hi, all,
All your comments have been very useful. I was wondering if there was
a package
Hi Arthur,
This can be done quite easily using the appropriate arguments listed under
?par; and there are other approaches. Ready-made functions exist in several
packages. I tend to use ?add.scatter from package ade4. It's a short
function, so it's easy to customize it, but it works well
Hi Ronaldo,
... lmer p-values
There are two packages that may help you with this and that might work with
the current implementation of lmer(). They are languageR and RLRsim.
HTH, Mark.
Bugzilla from [EMAIL PROTECTED] wrote:
Hi,
I have a modelo like this:
Yvar - c(0, 0, 0, 0, 1, 0,
Hi Chunhao,
I google the website and I found that there are three ways to perform
repeated measure ANOVA: aov, lme and lmer.
It's also a good idea to search through the archives.
I use the example that is provided in the above link and I try
Hi Chunhao,
If you carefully read the posting that was referred to you will see that
lme() and not lmer() was used as an example (for using with the multcomp
package). lmer() was only mentioned as an aside... lmer() is S4 and doesn't
work with multcomp, which is S3.
Apropos of specifying random
Hi Miki and Chunhao,
Rusers (Anna, and Mark {thank you guys}) provided me with very valuable
information.
Also see Gavin Simpson's posting earlier today: apparently multcomp does now
work with lmer objects (it's gone through phases of not working, then
working: it's still being developed).
Hi Kevin,
The documentation indicates that the bw is essentially the sd.
d <- density(rnorm(1000))
Not so. The documentation states the following about bw: "The kernels
are scaled such that this is the standard deviation of the smoothing
kernel ...", which is a very different thing.
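That bw is the kernel's standard deviation, not the data's, can be checked directly:

```r
set.seed(1)
x <- rnorm(1000)
d1 <- density(x, bw = 0.2)
d2 <- density(x, bw = 1)   # larger kernel sd gives a smoother estimate
c(d1$bw, d2$bw)            # the bandwidths actually used, i.e. the kernel sds
```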
The
to the negative one-fifth power (= Silverman's ‘rule of
thumb’
But how does that relate to say a Poisson distribution or a two-parameter
distribution like a normal, beta, or binomial distribution?
Thank you.
Kevin
Mark Difford [EMAIL PROTECTED] wrote:
Hi Kevin
, and
then...
HTH, Mark.
rkevinburton wrote:
Sorry I tried WikiPedia and only found:
Wikipedia does not have an article with this exact name.
I will try to find some other sources of information.
Kevin
Mark Difford [EMAIL PROTECTED] wrote:
Hi Kevin,
I still have my original
Hi Murali,
I am interested in plotting my regression analysis results(regression
coefficients and
standard errors obtained through OLS and Tobit models) in the form of
graphs.
plot(obj$lm) will give you a set of diagnostic plots. What you seem to be
after is ?termplot. Also look at John
Hi Ileana,
See this thread:
http://www.nabble.com/R-package-install-td18636993.html
HTH, Mark.
Somesan, Ileana wrote:
Hello,
I want to install the package multiv which is not maintained any
more (found in the archive: multiv_1.1-6.tar.gz from 16 July 2003). I
have installed an older
Hi Jinsong and Thierry,
(x1 + x2 + x3) ^2 will give you the main effects and the interactions.
Although it wasn't specifically requested it is perhaps important to note
that (...)^2 doesn't expand to give _all_ interaction terms, only
interactions to the second order, so the interaction term
Hi Kevin,
Can anyone give me a short tutorial on the formula syntax? ... I am sorry
but I could not
glean this information from the help page on lm.
You can give yourself a very good tutorial by reading ?formula and Chapter
12 of
Hi Robin,
I ... can't get lm to work despite reading the help. I can get it to work
with a single
explanatory variable, e.g. model <- lm(data$admissions ~ data$maxitemp)
I'll answer just the second of your questions. Advice: don't just read the
help file, look at the examples and run them; look
Hi Edna,
Because I am always subsetting, I keep the following function handy
mydata[] <- lapply(mydata, function(x) if(is.factor(x)) x[, drop=TRUE] else x)
This will strip out all factor levels that have been dropped by a previous
subsetting operation. For novice users of R (though I am not
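The effect is easy to see on a small example (in later versions of R, droplevels() does the same job):

```r
df  <- data.frame(f = factor(c("a", "b", "c")), x = 1:3)
sub <- df[df$f != "c", ]
levels(sub$f)              # still "a" "b" "c": the unused level is kept
sub[] <- lapply(sub, function(x) if (is.factor(x)) x[, drop = TRUE] else x)
levels(sub$f)              # now "a" "b"
```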
Hi Ben,
Sorry (still a little out-of-tune), perhaps what you really need to know
about is ?"["
HTH, Mark.
Mark Difford wrote:
Hi Ben,
If you wouldn't mind, how do I access the individual components inside
coefficients matrix?
What you want to know about is ?attributes
Hi Ben,
If you wouldn't mind, how do I access the individual components inside
coefficients matrix?
What you want to know about is ?attributes
##
attributes(model)
model$coefficients
model$coefficients[1]
model$coefficients[2:4]
model$coefficients[c(1,5)]
HTH, Mark.
ascentnet wrote:
Hi All,
It really comes down to a question of attitude: you either want to learn
something fundamental or core and so bootstrap yourself to a better place
(at least away from where you are), or you don't. As Marc said, Michal seems
to have erected a wall around his thinking.
I don't think it's
Hi Angelo,
Look carefully at package vcd; and at log-linear models (+ glm(...,
family=poisson)). For overdispersion there are more advanced methods.
HTH, Mark.
Angelo Scozzarella wrote:
Hi,
how can I treat data organised in classes and frequencies?
Ex.
class frequency
Hi Jaap,
Great stuff! As the old adage went, Go well, go
Bye, Mark.
Van Wyk, Jaap wrote:
Thanks, Mark, for the response.
The problem is with SciViews. It is not stable under the latest version of
R.
I found a solution by downloading the latest version of Tinn-R, which
communicates
Hi Jaap,
With all those packages loading it could take some time, unless it's a known
problem (?). Why don't you do a vanilla start (add switch --vanilla to
startup) and do some simple core-related stuff. Then add packages
one-by-one...
Or: search through the source code of the packages for
Hi Daniela,
Spencer (? Graves) is not at home. Seriously, this is a list that many
people read and use. If you wish to elicit a response, then you would be
wise to give a better statement of what your difficulty is.
The function you enquire about is well documented with an example, see
##
Hi Andreas,
It's because you are dealing with binary or floating point calculations, not
just a few apples and oranges, or an abacus (which, by the way, is an
excellent calculating device, and still widely used in some [sophisticated]
parts of the world).
Hi Ptit,
I would like to fit data with the following formula :
y=V*(1+alpha*(x-25))
where y and x are my data, V is a constant and alpha is the slope I'm
looking for.
Priorities first: lm() or ordinary least-squares regression is basically a
method for finding the best-fitting straight
Hi willemf,
Glad to hear that it helped. Years ago (late-90s) I Linuxed, but have since
been forced into the Windows environment (where, however, I have the great
pleasure of being able to use MiKTeX and LyX, i.e. TeX/LaTeX). I therefore
can't help you further, except to say that I have never
, it's not possible to help further.
Of course, you could send me the data and a script showing how you want it
plotted, and I would send you a PDF in return, showing you what R can do ;).
HTH, Mark.
Mark Difford wrote:
Hi willemf,
Glad to hear that it helped. Years ago (late-90s) I Linuxed
Hi Dylan,
I am curious about how to interpret the table produced by
anova(ols(...)), from the Design package.
Frank will perhaps come in with more detail, but if he doesn't then you can
get an understanding of what's being tested by doing the following on the
saved object from your OLS call
Hi wf
I just cannot believe that R does not have a good command of this.
Curious. I find R's graphical output matchless. Almost without exception I
use postscript and find the controls available under base graphics (?par) or
lattice adequate (to understate). Very occasionally I fiddle with
Hi Caroline,
is.na(strptime("19810329012000", format="%Y%m%d%H%M%S"))
[1] TRUE
The problem was to do with daylight saving time. I need to specify a
time zone as this time doesn't exist in my operating system's current
time zone. I still think this is odd behaviour though! When you look
at
Hi Pavel,
First, annotations should have the same cex-size on each axis. That said,
the way that this is implemented is not too cexy (ouch!). You need to plot
without axes, e.g. plot(obj, axes=FALSE), then you add your axes afterwards
using your own specifications.
?axis
Also see ?par (sub ann)
Hi Pavel,
And perhaps read the entry for cex.axis a little more carefully. And bear in
mind that labels, main, and sub are distinct, having their own cex.-
settings.
HTH, Mark.
Mark Difford wrote:
Hi Pavel,
First, annotations should have the same cex-size on each axis. That said
Hi Caroline,
Because POSIXlt is a complicated structure: you are dealing with a list, not
with what you think you are. Maybe this will help you to see more clearly.
strptime("19800604062759", format="%Y%m%d%H%M%S")
[1] "1980-06-04 06:27:59"
str(strptime("19800604062759", format="%Y%m%d%H%M%S"))
Hi Daren,
Can R (out)do Emacs? I think you just need to ?Sweave a little.
Mark.
Daren Tan wrote:
I have a folder full of pngs and jpgs, and would like to consolidate them
into a pdf with appropriate title and labels. Can this be done via R ?
Hi Paul,
Duncan has shown you how to do it. There is often a simpler route that is
worth knowing about. Whether it works depends on how the function was coded.
In this case it works:
## Example
par(cex.main = 3)
spineplot(table(tbl$DAY, tbl$SEX), main='TIPS')
par(cex.main = 1.2)
spineplot
Hi Gundala,
Suppose I have 2 matrices A and B.
And I want to measure how good each of this matrix is.
You really want to be using Robert Escoufier's RV-coefficient (A unifying
tool for linear multivariate statistical methods: The RV-coefficient. Appl.
Statist., 1976, 25, 257-265).
Several
Hi Denis,
h = c("3h30", "6h30", "9h40", "11h25", "14h00",
"15h55", "23h")
I could not figure out how to use chron to import this into times, so
I tried to extract the hours and minutes on my own.
Look at ?strptime for this:
##
strptime("6h30", format="%Ih%M")
[1] 2008-06-21
Hi Stefan,
Is it possible to combine both PCAs in order to get only one set of
eigenvectors?
Yes there is: statis() in the ade4 package is probably what you want. In
short, it does a k-table analysis that will give you a common
ordination/position. It also shows how each time-set deviates
Hi Ullrich,
The model is
RT.aov <- aov(RT ~ Cond + Error(Subj/Cond), WMU3C)
I understand that TukeyHSD only works with an aov object, but that
RT.aov is an aovlist object.
You want to use lme() in package nlme, then glht() in the multcomp package.
This will give you multiplicity adjusted
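A rough sketch of the lme()/glht() pairing (names taken from the thread; assumes packages nlme and multcomp are installed):

```r
library(nlme)
library(multcomp)
## Refit with a random intercept per subject, then ask for Tukey
## all-pair comparisons of the Cond levels
RT.lme <- lme(RT ~ Cond, random = ~1 | Subj, data = WMU3C)
summary(glht(RT.lme, linfct = mcp(Cond = "Tukey")))
```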
Hi Ullrich,
# what does '~1 | Subj/Cond' mean?
It is equivalent to your aov() error structure [ ... +Error(Subj/Cond) ].
It gives you a set of random intercepts, one for each level of your nesting
structure.
## To get some idea of what's being done it helps to have a continuous
covariate in
Hi Scotty,
Can't give an answer from what you've provided, but one temp. work-around
that might work is to get onto CRAN -- packages and download the packages
you need from your web browser as zip files, then do an "Install package(s)
from local zip files..." from the Packages menu.
HTH, Mark.
Hi Ivan,
It appears that xYplot, unlike standard xyplot (or coplot for that
matter)
does not accept factors as x variable in formula.
To add to what you have said. It may not be too well documented in ?xYplot,
but xYplot() is really designed to do a number of very useful things with
two sets
Hi Mirela,
Are the relative R^2 values the CP values?
No. CP is your complexity parameter.
I’ve read that the R^2 = 1-rel error, so I am assuming that in my case
this
would be 1-0.64949. Is this correct?
Yes. See ?rsq.rpart, and run the example, which I've copied below.
##
par(ask=TRUE)
Hi Everyone,
Please don't denigrate the capabilities of GNUplot (Louise excluded). It
can, in fact, do some truly awesome stuff.
http://linuxgazette.net/133/luana.html
The PDF is worth a shot.
Cheers, Mark.
Louise Hoffman-3 wrote:
If you still want to then read ?write.table, that can
Hi Stephen,
Also i have read in Quinn and Keough 2002, design and analysis of
experiments for
biologists, that a variance component analysis should only be conducted
after a rejection
of the null hypothesis of no variance at that level.
Once again the caveat: there are experts on this list
Hi Stephen,
Slip of the dactylus: lm() does not, of course, take a fixed=arg. So you
need
To recap:
mod.rand <- lme(fixed=y ~ x, random=~x|Site, data=...)
mod.fix <- lm(y ~ x, data=...) ## or
## mod.fix <- lm(formula=y ~ x, data=...)
Bye.
Mark Difford wrote:
Hi Stephen,
Also i have
Hi Stephen,
Hopefully you will get an answer from one of the experts on mixed models who
subscribe to this list. However, you should know that both lme() and lmer()
currently have anova() methods. The first will give you p-values (but no
SS), and the second will give you SS (but no p-values).
, make perfect sense.
HTH, Mark.
Duncan Murdoch-2 wrote:
On 19/02/2008 5:40 PM, Stiffler wrote:
Mark Difford wrote:
I was wondering why the plot() command ignores the datatype when
displaying axis labels...
plot() doesn't ignore the datatype:
[...]
plot(x,y) calls xy.coords(), which
Hi Rthoughts,
I am currently discouraged by the use of R. I cannot figure out how to
use it despite
extensive searches. Can anyone help me with getting started? How can I
import
a txt file with series...
There are piles of documents that you could (and should) read. I am
surprised that you
platforms.
I will look further into them.
As for everyone else who sent e-mails, thank you. I have printed them out
and will look into them.
Mark Difford wrote:
Hi Rthoughts,
I am currently discouraged by the use of R. I cannot figure out how to
use it despite
extensive searches. Can
of computers but command lines for many programs is
something that can throw me sometimes!
Regards, Seb.
Mark Difford wrote:
Hi Rthoughts,
It isn't clear what you mean. When you install R, the installation
program usually puts an icon on your desktop that you can click on to run
Hi Stiffler,
I was wondering why the plot() command ignores the datatype when
displaying axis labels...
plot() doesn't ignore the datatype:
x <- as.integer(c(1,2,3))
y <- x
typeof(x)
[1] "integer"
mode(x)
[1] "numeric"
plot(x,y) calls xy.coords(), which recasts x as: x = as.double(x), which is
Hi Yianni,
This just proves that you should be using R as your calculator, and not the
other one!
Regards, Mark.
gatemaze wrote:
Hello,
on a simple linear model the values produced from the fitted(model)
function
are different from manually calculating on calc. Will anyone have a
Hi Conny,
It still isn't clear what your question is, but a density plot simply
shows you the distribution of your data, say a set of measurements of
something. Think of it as a modern replacement for the histogram.
See
http://en.wikipedia.org/wiki/Density_estimation
for greater insight.
Hi All,
Thanjuvar wrote:
model2 <- lm(lavi ~ age + sex + age*race + diabetes + hypertension, data=tb1)
David wrote:
in the second equation you are only including the interaction of
age*race,
the main effect of age, but not the main effect of race which is what
came out significant
in your first
Hi Silvia,
What I need is exactly what I get using biplot(pca.object) but for other
axes.
You need to look at ?biplot.prcomp (argument: choices=)
## Try
biplot(prcomp(USArrests), choices=c(1,2)) ## plots ax1 and ax2
biplot(prcomp(USArrests), choices=c(1,3)) ## plots ax1 and ax3