in your example.)
Regards, Mark.
-
Mark Difford (Ph.D.)
Research Associate
Botany Department
Nelson Mandela Metropolitan University
Port Elizabeth, South Africa
--
View this message in context:
http://r.789695.n4.nabble.com/custom-graphing-of-box-and-whisker-plots-tp4634826p4634845.html
Sent
this is called a zero-inflated loglinear
continuous dependent variable.
Look at package gamlss, where you might find something. It has a number of
zero-inflated and zero-adjusted distributions. Package VGAM might also fit
this.
Regards, Mark.
sum(resid(T.lm)^2)
[1] 9.865225
sqrt(sum(resid(T.lm)^2)/18)
[1] 0.7403162
sqrt(sum(resid(T.lm)^2)/20) ## RMSE (n = 20)
[1] 0.7023256
## OR
sqrt(mean((y - fitted(T.lm))^2))
[1] 0.7023256
Regards, Mark.
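The two RMSE forms above are algebraically identical. A minimal sketch with made-up data (`x`, `y` and `T.lm` here are stand-ins, not the poster's objects):

```r
## Hypothetical data, just to illustrate the two equivalent RMSE forms.
set.seed(1)
x <- 1:20
y <- 2 * x + rnorm(20)
T.lm <- lm(y ~ x)

## RMSE computed two ways (divisor n = length(y) = 20)
rmse1 <- sqrt(sum(resid(T.lm)^2) / length(y))
rmse2 <- sqrt(mean((y - fitted(T.lm))^2))
all.equal(rmse1, rmse2)  # TRUE
```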
a
full range of options for carrying out principal component analysis using
matrices with missing values.
Regards, Mark.
Nelder-Mead is unreliable: use Brent or optimize() directly
The warning message tells you to use Brent rather than the default
Nelder-Mead. So do that.
##
?optim
est.chi[i,] <- c(fitdistr(as.numeric(data2[,i]), densfun = "chi-squared",
  start = list(df = 1), method = "Brent")$estimate)
Regards, Mark.
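As a minimal sketch of the advice (using a made-up one-parameter objective, not fitdistr itself): `method = "Brent"` in optim() is for one-dimensional problems and requires finite lower/upper bounds.

```r
## Hypothetical objective: minimise a simple quadratic in one parameter.
f <- function(p) (p - 3)^2 + 1

## Brent needs finite bounds; Nelder-Mead warns for 1-D problems.
fit <- optim(par = 0, fn = f, method = "Brent", lower = -10, upper = 10)
fit$par    # close to 3
fit$value  # close to 1
```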
- lm(Y ~ A*B + x*A, dat))
'log Lik.' -13.22186 (df=11)
logLik(m3 <- lm(Y ~ x*A + A*B, dat))
'log Lik.' -13.22186 (df=11)
Regards, Mark Difford
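The point above can be checked with made-up data (`dat` below is hypothetical): the log-likelihood of an lm() fit is invariant to the order of terms in the formula, because both formulas span the same model space.

```r
## Made-up data: term order in the formula does not change the fit.
set.seed(42)
dat <- data.frame(Y = rnorm(40), x = rnorm(40),
                  A = gl(2, 20), B = gl(4, 5, 40))
ll1 <- logLik(lm(Y ~ A*B + x*A, dat))
ll2 <- logLik(lm(Y ~ x*A + A*B, dat))
all.equal(as.numeric(ll1), as.numeric(ll2))  # TRUE
```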
to it until
tomorrow or the day after. I will send it to you off list.
Note: It would be nice to have a real name and affiliation.
Regards Mark.
statistics:
The function plot produces a graphical representation of the results (white
for non-significant, light grey for negative significant and dark grey for
positive significant relationships).
Regards, Mark.
library(nparcomp)
npar <- nparcomp(breeding ~ habitat, data = mydata, type = "Tukey")
npar
Regards, Mark.
On Dec 28, 2011 at 3:47am T.M. Rajkumar wrote:
I need a way to get at the Variance Extracted information. Is there a
simple way to do the calculation. Lavaan
does not seem to output this.
It does. See:
library(lavaan)
?inspect
inspect(fit, "rsquare")
Regards, Mark.
/max(k)
s.arrow(coocol, clab = clab.col, add.p = TRUE, sub = sub,
possub = bottomright)
add.scatter.eig(x$eig, x$nf, xax, yax, posi = posieig, ratio = 1/4)
}
<environment: namespace:ade4>
Regards, Mark.
,random=~1|id,na.action=na.omit,
data=Tdf)
summary(m.final)
summary(m.finalI)
glht(m.finalI, linfct = mcp(IntFac = "Tukey"))
Regards, Mark.
the by argument (increase it). Twenty to
thirty rows should be sufficient.
myPartData <- myData[seq(1, nrow(myData), by=3), ]
dput(myPartData)
Regards, Mark.
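The thinning-plus-dput() recipe above can be sketched with a toy stand-in for `myData` (the data frame below is made up):

```r
## Toy data: thin to every 3rd row, then dput() the result so it can
## be pasted into a post as a reproducible example.
myData <- data.frame(id = 1:9, value = (1:9)^2)
myPartData <- myData[seq(1, nrow(myData), by = 3), ]
dput(myPartData)  # rows 1, 4 and 7
```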
homepage.
Regards, Mark.
[1] R is case-sensitive: the package is called siar, not SIAR. Please
respect the author's designation.
16.52414 14.07126 19.40460 16.52414 14.07126 19.40460 16.52414
14.07126
Regards, Mark.
On Nov 07, 2011 at 9:04pm Mark Difford wrote:
So here the intercept represents the estimated counts...
Perhaps I should have added (though surely unnecessary in your case) that
exponentiation gives the predicted/estimated counts, viz 21 (compared to 18
for the saturated model).
##
exp
of treatment (i.e.
treatment = 1).
predict(glm.D93, newdata=data.frame(outcome=1, treatment=1))
1
3.044522
Regards, Mark.
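glm.D93 is the Dobson (1990) example from ?glm; rebuilding it shows the back-transformation from the linear predictor to the count scale:

```r
## The Dobson (1990) randomised controlled trial data from ?glm.
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9)
treatment <- gl(3, 3)
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())

## Linear predictor (log scale) for outcome 1, treatment 1 ...
eta <- predict(glm.D93,
               newdata = data.frame(outcome = "1", treatment = "1"))
exp(eta)  # ... exponentiates to the predicted count, 21
```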
Oftentimes a corpse is not necessary, as here.
Regards, Mark.
deug.princ <- princomp(deug$tab, cor = FALSE)
qqplot(predict(deug.princ)[,1], tt[,1])
rm(tt, deug.dudi, deug.princ)
Note that in the code given above, as.matrix(deug.dudi$tab) %*%
as.matrix(deug.dudi$c1) is based on how stats:::predict.princomp does it.
Regards, Mark.
On Nov 04, 2011 at 6:55pm Katherine Stewart wrote:
Is there a way to determine r2 values for an SEM in the SEM package or
another way to get
these values in R?
Katherine,
rsquare.sem() in package sem.additions will do it for you.
Regards, Mark.
John,
There is a good example of one way of doing this in multcomp-examples.pdf
of package multcomp. See pages 8 to 10.
Regards, Mark.
a function to calculate
Levenshtein distances.
Regards, Mark.
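Beyond dedicated packages, base R's adist() (in utils) computes Levenshtein edit distances directly:

```r
## Levenshtein (generalised edit) distance in base R.
adist("kitten", "sitting")      # the classic example: 3 edits
adist(c("cat", "cart"), "car")  # vectorised over inputs
```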
() function
in his car package (for b).
Regards, Mark.
},
journal = {Statistical Science},
year = {1998},
volume = {13},
pages = {307--336},
abstract = {}
}
Regards, Mark.
of
logistic regression model like PC1 of PCA?
Hi Kohkichi,
If you want to do this, i.e. PCA-type analysis with different
variable-types, then look at dudi.mix() in package ade4 and homals() in
package homals.
Regards, Mark.
, viz
dudi.mix and dudi.hillsmith in package ade4. De Leeuw's homals method takes
this a step further, doing amongst other things, a non-linear version of PCA
using any type of variable.
Regards, Mark.
(model2)
## Type II SS
library(car)
Anova(model1, type = "II")
Anova(model2, type = "II")
Regards, Mark.
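car::Anova is the usual tool. As a base-R-only sketch (with a made-up fit standing in for model1/model2): for a main-effects-only model, drop1() F tests are marginal, i.e. Type II-style, with each term adjusted for all others.

```r
## Made-up model: drop1() F tests correspond to Type II sums of
## squares when there are no interactions in the model.
set.seed(3)
d <- data.frame(y = rnorm(30), f = gl(3, 10), x = runif(30))
m <- lm(y ~ f + x, data = d)
tab <- drop1(m, test = "F")
tab
```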
) by Giovanni Petris
Regards, Mark.
test that I advised
you to do is carried out.
or install the car package and use its Anova function.
Regards, Mark.
$means, '}$', sep="")
Hope this gets you going.
Regards, Mark.
it something sensible. Look at your matrix:
x <- matrix(0.5, 30, 30)
x
Try the following:
x <- rmultinom(30, size = 30, prob = rep(c(0.1, 0.2, 0.8), 30))
PcaCov(x)
Regards, Mark.
like to plot one if I could, if only for the sake of
pictorial consistency.
Ouch! for the rod that is likely to come. Advice? Collect more data, for the
sake of pictorial consistency. And if you can't then you can't. What you
have are the (available) facts.
Regards, Mark.
this. Suggest you do some work in that area. Look especially at how model
formulas are used/specified. This is at least one area where you have gone
wrong, as the error message clearly tells you.
Good luck.
Mark.
7 8 9 10 11 12 13
T.xyplot$index.cond[[1]] <- c(13, 1:12)
print(T.xyplot)
Hope this helps to solve your problem.
Regards, Mark.
-12, October 2003
http://cran.r-project.org/doc/Rnews/Rnews_2003-2.pdf
Hope this helps.
Regards, Mark.
On May 01 (2011) Harold Doran wrote:
Can anyone point me to examples with R code where bwplot in lattice is
used to order the boxes in
ascending order?
You don't give an example and what you want is not entirely clear.
Presumably you want ordering by the median (boxplot, and based on the
Apr 08, 2011; 11:05pm dgmaccon wrote:
I get the same error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "lmList", for signature
"formula", "nfnGroupedData"
I get no such error. You need to provide more information (platform etc.)
##
library(nlme)
On Mar 30, 2011; 11:41am Mikhail wrote:
I'm wondering if there's any way to do the same in R (lme can't deal
with this, as far as I'm aware).
You can do this using the pscl package.
Regards, Mark.
Mar 25, 2011; 12:58am Simon Bate wrote:
I've been happily using the TukeyHSD function to produce Tukey's HSD tests
but have decided to try
out multcomp instead. However when I carry out the test repeatedly I have
found that multcomp
produces slightly different values each time. (see code
On Mar 19, 2011; 01:39am Andrzej Galecki wrote:
I agree with you that caution needs to be exercised. Simply because
mathematically the same
likelihood may be defined using a different constant.
Yes. But this is ensured by the implementation. If the call to anova() is
made with the lm$obj first
Apologies to all for the multiple posting. Don't know what caused it. Maybe
it _is_ time to stop using Nabble after all...
Regards, Mark.
On Mar 18, 2011; 10:55am Thierry Onkelinx wrote:
Furthermore, I get an error when doing an anova between a lm() and a
lme() model.
Hi Thierry,
You get this error because you have not done the comparison the way I said
you should, by putting the lme$obj model first in the call to anova(). This
On Mar 17, 2011; 11:43am Baugh wrote:
Question: can I simply substitute a dummy var (e.g. populated by zeros)
for ID to run the model
without the random factor? when I try this R returns values that seem
reasonable, but I want to be sure
this is appropriate.
If you can fit the model using
On Mar 17, 2011; 04:29pm Thierry Onkelinx wrote:
You cannot compare lm() with lme() because the likelihoods are not the
same. Use gls() instead of lm()
Hi Thierry,
Of course, I stand subject to correction, but unless something dramatic has
changed, you can. gls() can be used if you need to
On Mar 17, 2011; 04:29pm Thierry Onkelinx wrote:
You cannot compare lm() with lme() because the likelihoods are not the
same. Use gls() instead of lm()
And perhaps I should have added the following:
First para on page 155 of Pinheiro & Bates (2000) states, The anova method
can be used to
On Mar 13, 2011; 03:44pm Gaurav Ghosh wrote:
I have been working through the examples in one of the vignettes
associated with the 'mlogit'
package, 'Kenneth Train's exercises using the mlogit package for R.' In
spite of using the code
unchanged, as well as the data used in the examples, I
On Mar 09, 2011; 11:09am Mark Seto wrote:
How can I extract the adjusted R^2 value from an ols object (using rms
package)?
library(rms)
x <- rnorm(10)
y <- x + rnorm(10)
ols1 <- ols(y ~ x)
##
ols1$stats
ols1$stats[4]
Regards, Mark.
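For those without rms, the base-R counterpart of `ols1$stats` can be sketched with plain lm() (made-up data, same shape as the example above):

```r
## summary.lm() stores both R^2 and adjusted R^2 for an lm() fit.
set.seed(5)
x <- rnorm(10)
y <- x + rnorm(10)
fit <- lm(y ~ x)
s <- summary(fit)
s$r.squared      # plain R^2
s$adj.r.squared  # adjusted R^2
```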
Marcel,
Here is one way:
spplot(meuse.grid, zcol = "part.a", par.settings =
list(panel.background = list(col = "grey")))
##
trellis.par.get()
trellis.par.get()$panel.background
Regards, Mark.
On 03/05/2011 01:06 PM, Marcel J. wrote:
Hi!
How does one change the background color of the
My previous posting seems to have got mangled. This reposts it.
On Mar 01, 2011; 03:32pm gmacfarlane wrote:
workdata.csv
The code I posted is exactly what I am running. What you need is this
data. Here is the code again.
hbwmode <- mlogit.data(worktrips.csv, shape = "long", choice = "CHOSEN",
On Feb 28, 2011; 10:33pm Gregory Macfarlane wrote:
It seems as though the mlogit.data command tries to reassign my
row.names,
and doesn't do it right. Is this accurate? How do I move forward?
Take the time to do as the posting guide asks you to do (and maybe consider
the possibility that you
On Feb 23, 2011; 03:32pm Matthieu Stigler wrote:
I want to have a rectangular plot of size 0.5*0.3 inches. I am having
surprisingly a difficult time to do it...
...snip...
If I specify this size in pdf(), I get an error...
pdf("try.pdf", height=0.3, width=0.5)
par(pin=c(0.5,
On 2011-02-20 20:02, Karmatose wrote:
I'm trying to include multiple variables in a non-parametric analysis
(hah!). So far what I've managed to
figure out is that the NPMC package from CRAN MIGHT be able to do what I
need...
Also look at packages nparcomp and coin (+ multcomp). Both use
Deniz,
There are 3 F statistics, R2 and p-values. But I want just one R2 and
pvalue for my multivariate
regression model.
Which is as it should be.
Maybe the following will help, but we are making the dependent variables the
independent variables, which may or may not be what you really have
When I came to David's comment, I understood the theory, but not the
numbers in his answer. I wanted to see the MASS mca answers match
up with SAS, and the example did not (yet).
I am inclined to write, O ye of little faith. David showed perfectly well
that when the results of the two
Hi Frank,
I believe that glmnet scales variables by their standard deviations.
This would not be appropriate for categorical predictors.
That's an excellent point, which many are likely to forget (including me)
since one is using a model matrix. The default argument is to standardize
inputs,
Finn,
But when I use 'principal' I do not seem to be able to get the same
results
from prcomp and princomp and a 'raw' use of eigen:
...snip...
So what is wrong with the rotations and what is wrong with 'principal'?
I would say that nothing is wrong. Right at the top of the help file
Does anyone know what I am doing wrong?
Could be a lot or could be a little, but we have to guess, because you
haven't given us the important information. That you are following Crawley
is of little or no interest. We need to know what _you_ did.
What is model and what's in it?
##
str(model)
Wayne,
I don't know how to assign a name for the df, or what to put for fac,
and what is worse,
I get an error message saying that the program cannot find the
discrimin.coa command.
Before you can use a package you have downloaded you need to activate it.
There are different ways of doing
Bob,
Does anybody know how to eliminate the double quotes so that I can use
the
variable name (generated with the paste function) further in the code...
?noquote should do it.
##
varName
[1] "varName"
noquote(varName)
[1] varName
Regards, Mark.
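A small self-contained version of the noquote() trick (the variable name below is made up): noquote() only changes how the string prints, not what it is.

```r
## paste() returns an ordinary quoted character string; noquote()
## suppresses the quotes at print time only.
varName <- paste("var", "Name", sep = "")
print(varName)           # [1] "varName"
print(noquote(varName))  # [1] varName
```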
Lilith,
No the big mystery is the Tukey test. I just can't find the mistake, it
keeps telling me, that
there are less than two groups
...
### Tukey test ##
summary(glht(PAM.lme, linfct = mcp(Provenancef = "Tukey")))
Error message:
Fehler in glht.matrix(model = list(modelStruct =
Hi Liviu,
However, I'm still confused on how to compute the scores when rotations
(such as 'varimax' or other methods in GPArotation) are applied.
PCA does an orthogonal rotation of the coordinate system (axes) and further
rotation is not usually done (in contrast to factor analysis).
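If rotation is nevertheless wanted, stats::varimax can be applied to the first few PCA loadings; a sketch using the built-in USArrests data as a stand-in:

```r
## Rotating the first two PCA loadings with varimax (a factor-analysis
## habit; as noted above, not usually done for PCA itself).
pc <- prcomp(USArrests, scale. = TRUE)
rot <- varimax(pc$rotation[, 1:2])
rot$loadings  # rotated loadings
rot$rotmat    # the 2 x 2 rotation matrix applied
```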
Hi He Zhang,
Is the following right for extracting the scores?
...
pca$loadings
pca$score
Yes.
But you should be aware that the function principal() in package psych is
standardizing your data internally, which you might not want. That is, the
analysis is being based on the correlation
Hi Raquel,
routine in R to compute polychoric matrix to more than 2 categorical
variables? I tried polycor
package, but it seems to be suited only to 2-dimensional problems.
But surely ?hetcor (in package polycor) does it.
Regards, Mark.
Hi Selthy,
I'd like to use a Wilcoxon Rank Sum test to compare two populations of
values. Further, I'd like
to do this simultaneously for 114 sets of values.
Well, you read your data set into R using:
##
?read.table
?read.csv
There are other ways to bring in data. Save the import to a
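Once the data are in, the 114 simultaneous tests reduce to a loop; a sketch with made-up data (two toy matrices standing in for the two populations, 5 sets instead of 114):

```r
## Columns of a and b are paired "sets"; run one rank-sum test per set.
set.seed(9)
a <- matrix(rnorm(50 * 5), ncol = 5)            # population 1, 5 sets
b <- matrix(rnorm(50 * 5, mean = 1), ncol = 5)  # population 2, 5 sets
pvals <- sapply(seq_len(ncol(a)),
                function(i) wilcox.test(a[, i], b[, i])$p.value)
pvals  # one p-value per set; consider p.adjust() for multiplicity
```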
Jane,
Does someone know how to do fa and cfa with strong skewed data?
Your best option might be to use a robustly estimated covariance matrix as
input (see packages robust/robustbase).
Or you could turn to packages FAiR or lavaan (maybe also OpenMx). Or you
could try soft modelling via
Hi Anna,
How can I change the barplot so that the left hand axis scales from 0 to
15 and the right hand
axis from 0 to 5?
Try this:
par(mfrow=c(1,1), mai=c(1.0,1.0,1.0,1.0))
Plot1 <- barplot(rbind(Y1,Y2), beside=TRUE, axes=TRUE, names.arg=c("a","b"),
ylim=c(0,15), xlim=c(1,9), space=c(0,1),
--286).
The models you need to compare are the following:
##
Aov.mod <- aov(Y ~ V * N + Error(B/V/N), data = oats)
Lme.mod <- lme(Y ~ V * N, random = ~1 | B/V/N, data = oats)
Lmer.mod <- lmer(Y ~ V * N + (1|B) + (1|B:V) + (1|B:N), data = oats)
summary(Aov.mod)
anova(Lme.mod)
anova(Lmer.mod)
HTH, Mark Difford
Jim,
In the glm object I can find the contrasts of the main treats vs the
first i.e. 2v1, 3v1 and
4v1 ... however I would like to get the complete set including 3v2, 4v2,
and 4v3 ... along with
the Std. Errors of all contrasts.
Your best all round approach would be to use the multcomp
I'd prefer to stick with JPEG, TIFF, PNG, or the like. I'm not sure EPS
would fly.
Preferring to stick with bitmap formats (like JPEG, TIFF, PNG) is likely to
give you the jagged lines and other distortions you profess to want to
avoid.
EPS (encapsulated postscript, which handles
Guy,
For a partial least squares approach look at packages plspm and pathmox.
Also look at sem.additions.
Regards, Mark.
--
View this message in context:
http://r.789695.n4.nabble.com/path-analysis-tp2528558p2530207.html
Sent from the R help mailing list archive at Nabble.com.
Hi Petar,
I dunno why, but I cannot make randtes[t].coinertia() from ade4 package
working. I have two nice distance matrices (Euclidean):
Could anyone help with this?
Yes (sort of). The test has not yet been implemented for dudi.pco, as the
message at the end of your listing tells you.
Hi Nicola,
In few word: does this row indicate a global effect of the predictor
'cat'
or a more specific passage?
It indicates a more specific passage. Use anova(m7) for global/omnibus.
Check this for yourself by fitting the model with different contrasts. The
default contrasts in R are
Elisabeth,
You should listen to Ted (Harding). He answered your question with:
the vertical axis is scaled logarithmically with the
numerical annotations corresponding to the *raw* values of Y,
not to their log-transformed values. Therefore it does not matter
what base of logarithms is
Hi All,
You can also add a line using lines() if you transform in the call using the
same log-base---but not via R's log="y" argument (because of what's stored
in par("yaxp")).
##
par(mfrow=c(1,3))
plot(1:10, log="y")
lines(log10(1:10))
par("yaxp")
plot(log10(1:10), yaxt="n")
axis(side=2,
Hi Phil,
So far for logistic regression I've tried glm(MASS) and lrm (Design) and
found there is a big
difference.
Be sure that you mean what you say, that you are saying what you mean, and
that you know what you mean when making such statements, especially on this
list. glm is not in
Hi Chris,
My ideal would be to gather the information onto the clipboard so I
could paste it into Excel and do the formatting there, but any approach
would be better than what I have now.
I would never use Excel for this since there are far superior tools
available. But it is very easy to
tdm wrote:
OK, I think I've figured it out, the predict of lrm didn't seem to pass
it through the logistic
function. If I do this then the value is similar to that of lm. Is this
by design? Why would it
be so?
Please take some time to read the help files on these functions so that you
at
Hi David,
Now when I turn on R again the script is now completely blank.
This happened to me about 4--5 months ago under Vista. I cannot quite
remember what I did but I think I got the script working by opening it in
another editor (a hex editor would do) and removing either the first few
Peter Flom wrote:
I am puzzled by the performance of LME in situations where there are
missing data. As I
understand it, one of the strengths of this sort of model is how well it
deals with missing
data, yet lme requires nonmissing data.
You are confusing missing data with an
it
deals with missing
data, yet lme requires nonmissing data.
Mark Difford replied
You are confusing missing data with an unbalanced design. A strength of
LME
is that it copes with the latter, which aov() does not.
Thanks for your reply, but I don't believe I am confused with respect
Hi Michael,
How do you control what is the (intercept) in the model returned by the
lme function and is there a way to still be able to refer to all groups
and
timepoints in there without referring to intercept?
Here is some general help. The intercept is controlled by the contrasts that
Peng Yu wrote:
Some webpage has described prcomp and princomp, but I am still not
quite sure what the major difference between them is.
The main difference, which could be extracted from the information given in
the help files, is that prcomp uses the singular value decomposition [i.e.
does
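The equivalence is easy to verify on the built-in USArrests data: both routes give the same eigenvectors up to sign.

```r
## prcomp() works via the SVD of the centred data; princomp() via
## eigen() of the covariance matrix. The loadings agree up to sign.
p1 <- prcomp(USArrests)
p2 <- princomp(USArrests)
all.equal(abs(unclass(p1$rotation)), abs(unclass(p2$loadings)),
          check.attributes = FALSE)  # TRUE
## Only the variances differ slightly: divisor n - 1 versus n.
```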
Hi Steve,
However, I am finding that ... the trendline ... continues to run beyond
this data segment
and continues until it intersects the vertical axes at each side of the
plot.
Your best option is probably Prof. Fox's reg.line function in package car.
##
library(car)
?reg.line
reg.line
Hi Liviu,
tmp <- latex(.object, cdec=c(2,2), title="")
class(tmp)
[1] "latex"
html(tmp)
/tmp/RtmprfPwzw/file7e72f7a7.tex:9: Warning: Command not found:
\tabularnewline
Giving up command: \...@hevea@amper
/tmp/RtmprfPwzw/file7e72f7a7.tex:11: Error while reading LaTeX:
This
to the preamble of the *.tex file:
\providecommand{\tabularnewline}{\\}
Regards, Mark.
Liviu Andronic wrote:
Hello
On 10/3/09, Mark Difford mark_diff...@yahoo.co.uk wrote:
This has nothing to do with Hmisc or hevea.
Although I have LyX installed, I don't quite understand where LyX
comes
Hi Paul,
I have a data set for which PCA based between group analysis (BGA) gives
significant results but CA-BGA does not.
I am having difficulty finding a reliable method for deciding which
ordination
technique is most appropriate.
Reliability really comes down to you thinking about
P. Branco wrote:
I have used the dudi.mix method from the ade4 package, but when I do the
$index it shows
me that R has considered my variables as quantitative.
What should I do?
You should make sure that they are encoded as ordered factors, which has
nothing to do with ade4's
andreiabb wrote:
the message that I am getting is
Error in AFDM(all_data_sub.AFDM, type=c(rep("s",1), rep("n",1), rep("n",
:
unused argument(s) (type = c("s", "n", "n"))
Can someone help me?
If you are in hel[l] then it is entirely your own fault. The error message
is clear and would have become
Hi Zhu,
could not find function Varcov after upgrade of R?
Frank Harrell (author of Design) has noted in another thread that Hmisc has
changed... The problem is that functions like anova.Design call a function
in the _old_ Hmisc package called Varcov.default. In the new version of
Hmisc this
Hi Brian,
I am trying to get fitted/estimated values using kernel regression and a
triangular kernel.
Look at Loader's locfit package. You are likely to be pleasantly surprised.
Regards, Mark.
Bryan-65 wrote:
Hello,
I am trying to get fitted/estimated values using kernel regression
The scale function will return the mean and sd of the data.
By default. Read ?scale.
Mark.
Noah Silverman-3 wrote:
I think I just answered my own question.
The scale function will return the mean and sd of the data.
So the process is fairly simple.
scale training data variable
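As ?scale documents, the centring and scaling values used are kept as attributes of the result, which is what makes re-applying them to new data possible:

```r
## scale() returns the centring mean and scaling sd as attributes.
x <- c(2, 4, 6, 8)
sx <- scale(x)
attr(sx, "scaled:center")  # mean(x), i.e. 5
attr(sx, "scaled:scale")   # sd(x)
```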
Hi John,
When Group is entered as a factor, and the factor has two levels, the
ANOVA table gives a p value for each level of the factor.
This does not (normally) happen so you are doing something strange.
## From your first posting on this subject
I must say that this is slightly odd behavior to require both
na.action= AND exclude=. Does anyone know of a justification?
Not strange at all.
?options
na.action, under the heading "Options set in package stats". You need to override the
default setting.
ws-7 wrote:
xtabs(~wkhp, x, exclude=NULL,
Yichih,
Answer 2 is correct, because your indexing specification for 1 is wrong.
You also seem to have left out a comma.
##
mu1990$wage[mu1990$edu==2|mu1990$edu==3|mu1990$edu==4, ] ## like this
mu1990$wage[mu1990$edu%in%2:4, ]
You really could have worked this out for yourself by looking at
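A self-contained sketch of the %in% point, with a toy stand-in for mu1990 (wage here is a plain vector column, so no comma in the index):

```r
## %in% replaces the chain of == and | comparisons.
mu1990 <- data.frame(wage = c(10, 20, 30, 40), edu = c(1, 2, 3, 4))
a <- mu1990$wage[mu1990$edu == 2 | mu1990$edu == 3 | mu1990$edu == 4]
b <- mu1990$wage[mu1990$edu %in% 2:4]
identical(a, b)  # TRUE
```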
Hi John,
Has a test for bimodality been implemented in R?
You may find the code at the URL below useful. It was written by Jeremy
Tantrum (a PhD of Werner Stuetzle's). Amongst other things there is a
function to plot the unimodal and bimodal Gaussian smoothers closest to the
observed data. A
Hi David, Phil,
Phil Spector wrote:
David -
Here's the easiest way I've been able to come up with.
Easiest? You are making unnecessary work for yourselves and seem not to
understand the purpose of ?naresid (i.e. na.action = na.exclude). Why not
take the simple route that I gave, which really
Thanks for your help Emma
Mark Difford wrote:
Hi Emma,
R gives you the tools to work this out.
## Example
set.seed(7)
TDat <- data.frame(response = c(rnorm(100, 5, 2), rnorm(100, 20, 2)))
TDat$group <- gl(2, 100, labels=c("A","B"))
with(TDat, boxplot(split(response, group