On Fri, 2004-05-14 at 00:08, Peter Nelson wrote:
> I've been unable to find an R package that provides the means of
> performing Clarke & Ainsworth's BIO-ENV procedure or something
> comparable. Briefly, they describe a method for comparing two separate
> sample ordinations, one from species data
Jens Oehlschlägel asked:
>
> Can someone point me to literature and/or R software to solve the following
> problem:
>
> Assume n true scores t measured as x with uncorrelated errors e, i.e.
> x = t + e
> and assume each true score to have a certain amount of correlation with
> some of the other
Greetings all!
This problem occurs using R 1.8.1 on Windows XP. I downloaded the
binaries for R and all packages, including the VR bundle, in December 2003.
The data consists of NZ$ prices and attributes for 643 cars.
> summary(price)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
                                                     14
> Hi, I would like to plot a graph which sits in the background as a
> watermark with other plots in the foreground - on top.
"par()" is where most of these questions lead. Try
?par
particularly the "new" argument.
Cheers
Jason
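A minimal sketch of the par(new = TRUE) route Jason points to, drawn to a temporary PDF so it runs non-interactively (the file name and the plotted curves are arbitrary examples):

```r
# Draw a faint "watermark" curve first, then overlay a second plot
# without clearing the device.
tmp <- tempfile(fileext = ".pdf")
pdf(tmp)
curve(sin(x), 0, 10, col = "grey85", lwd = 8,
      axes = FALSE, xlab = "", ylab = "")   # background layer
par(new = TRUE)                             # next plot() keeps the frame
plot(1:10, (1:10)^2, type = "b")            # foreground plot, drawn on top
dev.off()
```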
Hi, I would like to plot a graph which sits in the background as a
watermark with other plots in the foreground - on top. I have looked
through the threads on the r-project website but they seem to concern
background colours rather than actual background plots. I have also
searched through th
> On 13 May 2004 at 16:30, Brittany Erin Laine wrote:
>
> > Is there any way that I can see the step by step code for functions in
> > the base package? For instance the dexp function. I am a student
> > working on writing my own function for something that is similar to
> > this dexp function and
For dexp() you need to look at R's C source code; it is part of
the nmath library.
-roger
Brittany Erin Laine wrote:
Is there any way that I can see the step by step code for functions in
the base package? For instance the dexp function. I am a student
working on writing my own function for
Is it possible that you somehow have a copy of predict.rpart in your global
environment, that is overriding the one in the package?
Andy
[ps: R-1.7.3? Where did you find such a version?]
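Andy's suspicion can be checked directly at the console; a sketch (in a clean session both lines should show the name is not defined in the workspace):

```r
# Is there a stray copy of predict.rpart shadowing the package's version?
exists("predict.rpart", envir = globalenv(), inherits = FALSE)
# find() lists every attached environment that defines the name;
# a clean session with rpart loaded shows at most "package:rpart" here.
find("predict.rpart")
```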
> From: Giles Hooker
>
> I installed from the CRAN website:
>
> install.packages("rpart")
>
> and also chec
Have you considered making contour plots, e.g., using "contour"?
See also Venables and Ripley, Modern Applied Statistics with S.
Your function is an elliptic paraboloid opening upward, so its level sets
are ellipses. You can obtain the axes of the ellipse, etc., using
eigen. The eigenvalues can then
tell you how far o
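The eigen suggestion can be sketched on the quadratic form from the inequality under discussion (coefficients 0.85 and 4.65, cross term 0.84, bound 8.29):

```r
# Coefficient matrix of the quadratic form
# 0.85*u^2 + 4.65*v^2 - 2*0.84*u*v, with u = 54.6 - X, v = 25.2 - Y.
A <- matrix(c( 0.85, -0.84,
              -0.84,  4.65), nrow = 2, byrow = TRUE)
e <- eigen(A)
e$values               # both positive, so the boundary curve is an ellipse
sqrt(8.29 / e$values)  # semi-axis lengths of the ellipse at level 8.29
```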
On 13 May 2004 at 16:30, Brittany Erin Laine wrote:
> Is there any way that I can see the step by step code for functions in
> the base package? For instance the dexp function. I am a student
> working on writing my own function for something that is similar to
> this dexp function and I would lik
Dear R-users,
I'm trying to solve the following equation, to get a set of values
satisfying it and I'd like to know if its possible to do that in R.
0.85*(54.6-X)^2 + 4.65*(25.2 - Y)^2 -2*0.84*(54.6-X)*(25.2-Y) <= 8.29
Thanks a lot,
Eduardo Armas
I've been unable to find an R package that provides the means of
performing Clarke & Ainsworth's BIO-ENV procedure or something
comparable. Briefly, they describe a method for comparing two separate
sample ordinations, one from species data and the second from
environmental data. The analysis in
As another respondent already mentioned, Lattice is probably the way to
go on this one but if you do want to use tapply try this:
names(Pot) <- SGruppo
dummy <- tapply(Pot, SGruppo, function(x) hist(x, main = names(x)[1], xlab = NULL))
Vittorio virgilio.it> writes:
:
: I'm learning how to use tapply.
If you want to see the code (the function definition), just type the function
name without parentheses or arguments. In this case, dexp calls compiled
code via .Internal, so to see that C code you have to look at the R sources.
> dexp
function (x, rate = 1, log = FALSE
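To illustrate at the console (sd is used here only as an example of a function whose definition is plain R):

```r
dexp        # printing the name shows the R-level definition ...
body(dexp)  # ... which ends in a .Internal call into compiled code
# A function written in plain R shows all of its code the same way:
sd
```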
Is there any way that I can see the step by step code for functions in
the base package? For instance the dexp function. I am a student
working on writing my own function for something that is similar to
this dexp function and I would like to see the step by step code.
Brittany Laine
GTA WVU St
On Wednesday 12 May 2004 18:39, Neil Desnoyers wrote:
> I am attempting to produce a bar chart and am having some trouble
> with the panel.barchart command.
That's not really helpful. If you expect us to help, you will have to
tell us exactly what you did and why you are unhappy with the resul
> ...
> # Histograms by technology
> par(mfrow=c(2,3))
> tapply(Pot,SGruppo,hist)
> detach(dati)
>
> It all works great but tapply(Pot,SGruppo,hist) produces 6 histograms
> with
> the titles and the xlab labels in a generic form, something like
> integer[1],
> integ
[EMAIL PROTECTED] wrote:
Hi,
I would like to create a list of data frames that I could access via index
manipulation; in effect, an array of pointers to data frames.
for (i in 1:length(InputFilelist))
{
# create data.frame
temp <- read.table (file = InputFilelist[i] , header = T, skip = 4)
# appen
[EMAIL PROTECTED] writes:
> Hi,
>
> I would like to create a list of data frames that I could access via index
> manipulation; in effect, an array of pointers to data frames.
>
> for (i in 1:length(InputFilelist))
> {
> # create data.frame
> temp <- read.table (file = InputFilelist[i] , header = T,
hi all,
In the ade4 library, there are two eigenanalyses that enable the ordination
of categorical variables.
1- Multiple Correspondence Analysis (MCA, Tenenhaus & Young 1985) performs the
multiple correspondence analysis of a factor table (see the
function dudi.acm).
2- the mixed
Open Position for Statistician.
Come join Affinnova, an exciting and growing company in market research!
Our primary distinction is the use of genetic algorithms (GA) in product
development. These GAs are applied to online surveys, in which
respondents 'evolve' and reduce the number of product de
Hi,
I would like to create a list of data frames that I could access via index
manipulation; in effect, an array of pointers to data frames.
for (i in 1:length(InputFilelist))
{
# create data.frame
temp <- read.table (file = InputFilelist[i] , header = T, skip = 4)
# append data.frame
InputD
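A sketch of the usual idiom: lapply builds the list of data frames in one step, with no explicit loop. Two throw-away files stand in for the real InputFilelist here, and the skip = 4 of the original call is omitted because these files have no preamble:

```r
# Stand-in input files with a header row and a few integer columns.
f1 <- tempfile(); f2 <- tempfile()
write.table(data.frame(a = 1:3, b = 4:6), f1, row.names = FALSE)
write.table(data.frame(a = 7:9, b = 1:3), f2, row.names = FALSE)
InputFilelist <- c(f1, f2)

# lapply returns the list of data frames directly:
dfs <- lapply(InputFilelist, read.table, header = TRUE)
dfs[[2]]$a   # plain list indexing reaches any single data frame
```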
Dear Allan,
I assume that the categorical data are ordinal. There are methods for
factor analyzing ordinal data (e.g., using polychoric correlations) and
mixed ordinal and interval data, but as far as I know, these aren't
implemented in R.
John
On Thu, 13 May 2004 18:32:11 +0200
allan clark <[E
I installed from the CRAN website:
install.packages("rpart")
and also checked
old.packages()
which returned NULL.
More clues?
Giles Hooker
> Giles Hooker wrote:
> > I have just upgraded from R 1.7.3 to R 1.9.0 and have found that the
> > predict function no longer works for rpart:
> >
> >
>
Dear Jens,
If you're willing to assume multinormality but don't have in mind a
specific model to estimate, then why not use factanal? The uniquenesses
are estimates of the unreliability of the items.
I hope this helps,
John
On Thu, 13 May 2004 17:09:14 +0200 (MEST)
"Jens Oehlschlägel" <[EMAIL
I'm learning how to use tapply.
Now I'm having a go at the following code in which dati contains almost 600
lines, Pot - numeric - are the capacities of power plants and SGruppo - text
- the corresponding six technologies ("CCC", "CIC","TGC", "CSC","CPC", "TE").
Giles Hooker wrote:
I have just upgraded from R 1.7.3 to R 1.9.0 and have found that the
predict function no longer works for rpart:
predict(hmmm,sim3[1:10,])
Error in predict.rpart(hmmm, sim3[1:10, ]) :
couldn't find function "pred.rpart"
I have re-installed the rpart package to no avail.
I have just upgraded from R 1.7.3 to R 1.9.0 and have found that the
predict function no longer works for rpart:
> predict(hmmm,sim3[1:10,])
Error in predict.rpart(hmmm, sim3[1:10, ]) :
couldn't find function "pred.rpart"
I have re-installed the rpart package to no avail. Any ideas?
Gil
Arne,
There are several database access packages for R. I use RMySQL, but there
are several others (ODBC, etc.). However, as far as I know (and I may be
wrong), there is not a persistence API for R objects (in a relational DB
sense), but you could certainly deconstruct the objects and store thei
On Thu, 13 May 2004 [EMAIL PROTECTED] wrote:
> I'd like to use DBI to store lm objects in a database. I have to analyze
> many linear models and cannot keep them all in a single R session (not
> enough memory). It would also be nice to have them persistent.
>
> Maybe it's possible to create a compact
Hello,
I'd like to use DBI to store lm objects in a database. I have to analyze many
linear models and cannot keep them all in a single R session (not enough
memory). It would also be nice to have them persistent.
Maybe it's possible to create a compact binary representation of the object (the kind
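One compact binary representation is base R's serialize(), sketched here on a small built-in data set; the resulting raw vector could go into a BLOB column, though the exact insert syntax depends on the DBI driver:

```r
# A fitted model becomes a raw vector and back, losslessly.
fit  <- lm(dist ~ speed, data = cars)
blob <- serialize(fit, connection = NULL)  # raw vector, storable as a BLOB
fit2 <- unserialize(blob)                  # faithful round trip
all.equal(coef(fit), coef(fit2))
```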
Dear John,
Dear Joseph,
Thank you for your quick answers and the pointer to semnet.
I try to clarify on my assumptions:
- yes, I am willing to assume multivariate normality
- no, I don't want to assume a single factor model
- I assume there is an unknown number of factors, and I do not know whi
<[EMAIL PROTECTED]> writes:
> Three related questions on LMEs and GLMMs in R:
>
> (1) Is there a way to fix the dispersion parameter (at 1) in either
> glmmPQL (MASS) or GLMM (lme4)?
>
> Note: lme does not let you fix any variances in advance (presumably
> because it wants to "profile out" an
On Thu, 13 May 2004, Paul Johnson wrote:
>
> I still don't quite understand your point about the reason that coxph
> crashes. Why does the difference of scale between the variables cause
> trouble?
Because exp(beta*x) is very, very large for many values of beta when x
is that large. If exp(beta
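The overflow is easy to demonstrate at the console:

```r
exp(700)  # large but still finite
exp(710)  # Inf: the double-precision range is exceeded
# Rescaling the covariate, e.g. x/1000, keeps beta * x representable.
```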
Dear R users,
Is there something like predict (..., type= 'response') for glmmPQL objects
or how would I get fitted values on the scale of the response variable for
the binomial and the poisson family?
Any pointers are appreciated.
Thanks, Lorenz
-
Lorenz Gygax, Dr. sc. nat.
Tel: +41 (0)52 368
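One general fallback, sketched here assuming the default links: apply the family's inverse link to link-scale predictions yourself (eta is a made-up vector of fitted values on the link scale):

```r
eta <- c(-2, 0, 2)  # hypothetical fitted values on the link scale
plogis(eta)         # binomial with logit link: inverse logit
exp(eta)            # poisson with log link: inverse log
```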
[EMAIL PROTECTED] writes:
> Hello, everybody,
>
> I have run into a big problem. What I want is to use C code in R. I have
> loaded a test file conv.c in R and run it successfully. However, in my real
> code, I use the Matlab C library (matrix inverse function). After I use
> the command R CMD SHLIB to build the shared library
On Monday 10 May 2004 05:46, Stephan Moratti wrote:
> I tried following commands:
>
> amp~time|subject/trial #this was the grouping structure of the data
>
>
> plot(dip,inner=~condition,layout=c(2,2))
>
> after the plot command I obtained this error message:
>
> Error in if(!any(cond.max.level -
Hi,
Feature selection in which context? Regression,
supervised classification, or clustering?
Edgar Acuna
UPRM
On Thu, 13 May 2004 [EMAIL PROTECTED] wrote:
> Hi,
>I would like to find some methods about feature selection, but I
> only know the package "randomForest" after searching for a while
Jens
I'm not sure what you intend by "predefined assumptions".
1. If you merely want to conduct an exploratory rather than confirmatory
analysis for the relevant paths, there are ways within SEM to do this. (In
this case you could use John Fox's SEM package).
2. If you do not wish to assume multiv
Bret,
At a second thought, probably what you want is the following:
> sum(apply(outer(0:3,l*recruit.f,dpois),2,function(x) sample(0:3,1,prob=x)))
In this case the distribution of offspring for each female is Poisson,
conditional on the number of offspring being <= 3.
Giovanni
> Date: Wed, 12 May 2004 16
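An expanded sketch of Giovanni's one-liner, with made-up values for l and recruit.f (the originals are not shown in this thread):

```r
# l * recruit.f gives each female's Poisson mean; the weights in each
# column are dpois(0:3, mean), so every draw is capped at 3 offspring.
set.seed(1)
l <- 0.8
recruit.f <- c(1.2, 0.5, 2.0)          # hypothetical per-female rates
w <- outer(0:3, l * recruit.f, dpois)  # 4 x 3 matrix of weights
draws <- apply(w, 2, function(p) sample(0:3, 1, prob = p))
sum(draws)                             # total offspring across females
```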
Your problem is that some bootstrap samples have no variation in x at all.
How can you expect a sensible answer from such a sample?
On Thu, 13 May 2004, Hwange Project wrote:
> I'm fighting with the following problem :
> I want to do bootstrapping on a Kendall correlation with the following cod
Hi,
I would like to find some methods for feature selection, but after searching
for a while I only know of the package "randomForest".
Could you recommend some other feature-selection packages?
Thank you very much.
Sincerely yours,
Chad Yang.
Dear Jens,
It sounds as if you're postulating a single-factor model underlying the x's
(which is an assumption about the structure of the variables). If that's the
case, then you could either use the factanal function in the stats package
or the sem function in the sem package to estimate the mode
Dear R-helpers,
I'm fighting with the following problem :
I want to do bootstrapping on a Kendall correlation with the following code
:
> cor.function <- function(data, i) cor(data[i, 1], data[i, 2], method = "kendall")
> boot.ci <- boot.ci(boot.cor <- boot(cbind(x, y), cor.function, R = 1000), conf = c
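For reference, a complete, runnable version of the posted code, with simulated x and y since the original data are not shown; Prof. Ripley's caveat still applies whenever x has little variation:

```r
library(boot)  # a recommended package that ships with R
set.seed(42)
x <- rnorm(30)
y <- x + rnorm(30)
cor.function <- function(data, i) cor(data[i, 1], data[i, 2],
                                      method = "kendall")
boot.cor <- boot(cbind(x, y), cor.function, R = 1000)
boot.ci(boot.cor, conf = 0.95, type = "perc")
```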
On Thu, 13 May 2004, Ingmar Visser wrote:
> Dear Prof Ripley,
> Thanks for your answer.
>
> On 5/12/04 2:52 PM, "Prof Brian Ripley" <[EMAIL PROTECTED]> wrote:
>
> > These normally occur (at that point) from having written off one end of an
> > array. (If you are using .C/.Fortran, they try to co
Dear Prof Ripley,
Thanks for your answer.
On 5/12/04 2:52 PM, "Prof Brian Ripley" <[EMAIL PROTECTED]> wrote:
> These normally occur (at that point) from having written off one end of an
> array. (If you are using .C/.Fortran, they try to copy back the
> arguments.) Compiling with bounds checking
Hi Mark,
> (2) Is there a way to tell lme (either in nlme or lme4) to just use a
> specified design matrix Z for the random effects, rather than
> constructing one itself from factors? Sometimes I would really like to
> use my own funny-looking Z matrix (e.g. with non-integer coefficients),
> and
Can someone point me to literature and/or R software to solve the following
problem:
Assume n true scores t measured as x with uncorrelated errors e, i.e.
x = t + e
and assume each true score to have a certain amount of correlation with
some of the other true scores.
The correlation matrix
On Thu, 13 May 2004 [EMAIL PROTECTED] wrote:
> Three related questions on LMEs and GLMMs in R:
>
> (1) Is there a way to fix the dispersion parameter (at 1) in either
> glmmPQL (MASS) or GLMM (lme4)?
Not for glmmPQL in R (it can be done in S-PLUS).
> Note: lme does not let you fix any variances in adv
Three related questions on LMEs and GLMMs in R:
(1) Is there a way to fix the dispersion parameter (at 1) in either glmmPQL (MASS) or
GLMM (lme4)?
Note: lme does not let you fix any variances in advance (presumably because it wants
to "profile out" an overall sigma^2 parameter) and glmmPQL rep
I'm trying to use trace() on an S4 coerce method, but get the error
Error in bindingIsLocked(what, whereM) : no binding for "coerce"
What am I doing wrong? Example code follows.
(I've googled the R mailing lists for "trace coerce" and "trace
bindingisLocked" without finding anything relevant
On Thu, May 13, 2004 at 02:10:35AM -0500, Paul Johnson wrote:
> Dear Goran (and others)
>
> I did not know about the "eha" package, but reading the docs, I see many
> things I've been looking for, including the parametric hazard model with
> Weibull baseline. Thanks for the tip, and the package
Dear Goran (and others)
I did not know about the "eha" package, but reading the docs, I see many
things I've been looking for, including the parametric hazard model with
Weibull baseline. Thanks for the tip, and the package.
I still don't quite understand your point about the reason that coxph