data_main[ match(src,data_main$V1), ]
and the complement of src (call it srcc)
data_main[ match(srcc,data_main$V1), ]
...this only works so long as there is only one occurrence of each item in
V1.
--Adam
On Tue, 9 Sep 2008, Gundala Viswanath wrote:
Dear all,
Suppose I have this data
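Adam's match()/complement indexing above can be sketched end to end with made-up data (the data_main values, src, and srcc here are stand-ins, not from the original thread):

```r
# Toy data frame and lookup vector standing in for the poster's data
data_main <- data.frame(V1 = c("a", "b", "c", "d"), V2 = 1:4)
src  <- c("b", "d")
srcc <- setdiff(data_main$V1, src)                # the complement of src

hits   <- data_main[match(src,  data_main$V1), ]  # rows whose V1 is in src
misses <- data_main[match(srcc, data_main$V1), ]  # the remaining rows
```

As Adam notes, this relies on each V1 value occurring only once; with duplicates, match() returns only the first position.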
Hi Tolga,
in SNOW you have to start a cluster with the command
library(snow)
cluster <- makeCluster(n)   # n = number of nodes
The object cluster is a list with an object for each node, and each
object in turn is a list with all the information (rank, comm, tags).
The size of the cluster is the length of the list.
thomastos wrote:
Hi R,
I am familiar with the basics of R.
To learn more, I would like to know how to get data from Yahoo! Finance directly into
R. So basically I want a data frame or matrix to do some data analysis.
How do I do this?
RSiteSearch("yahoo")
get.hist.quote() from tseries
yahooSeries()
Hi all,
While dat['a1',] and dat['a10',] produce the same results in the
following example, I'd like dat['a1',] to return NAs.
dat <- data.frame(x1 = paste(letters[1:5],10, sep=''), x2=rnorm(5))
rownames(dat) <- dat$x1
dat['a1',]
dat['a10',]
sessionInfo()
R version 2.7.2 (2008-08-25)
Hi,
I have following kind of dataset (all are dates) in my Excel sheet.
09/08/08
09/05/08
09/04/08
09/02/08
09/01/08
29/08/2008
28/08/2008
27/08/2008
26/08/2008
25/08/2008
22/08/2008
21/08/2008
20/08/2008
18/08/2008
14/08/2008
13/08/2008
08/12/08
08/11/08
08/08/08
08/07/08
However I want to use
Dear all,
I am trying to apply kmeans clustering on a data file (size is about 300
Mb)
I read this file using
x=read.table('file path' , sep= )
then i do kmeans(x,25)
but the process stops after two minutes with an error :
Error: cannot allocate vector of size 907.3 Mb
when i read the
This is completely wrong: min _is_ defined for date-times:
min(.leap.seconds)
[1] 1972-07-01 01:00:00 BST
Please do study the posting guide and do your homework before posting: you
seem unaware of what the POSIXct class is, so ?DateTimeClasses is one
place you need to start. And
Check out ?match and ?"%in%"
x <- c(1,2,3,4)
y <- c(1,2,4)
match(y,x)
[1] 1 2 4
--Adam
On Mon, 8 Sep 2008, Andrew Barr wrote:
Hi all,
I want to get the index numbers of all elements of a vector which match any
of a long series of possible values. Say x <- c(1,2,3,4) and I want to know
which
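Adam's answer above can be written out in full: which() combined with %in% returns the index numbers of every matching element (toy values, consistent with the thread's example):

```r
# Index numbers of all elements of x matching any of the possible values
x    <- c(1, 2, 3, 4)
vals <- c(1, 2, 4)
idx  <- which(x %in% vals)
```

Note the difference: match(vals, x) returns one position per value in vals, while which(x %in% vals) returns every matching position in x, so it also copes with duplicates in x.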
This is a side-effect of lapply being in the base namespace and not
evaluating its arguments, as explained on its help page which also points
out that using a wrapper is sometimes needed. It also points out that
code has been written that relies on the current behaviour.
On Mon, 8 Sep 2008,
On Mon, 8 Sep 2008, Qiong Yang wrote:
The standard error from logistic regression is slightly different from the
naive SE from GEE under independence working correlation structure.
Shouldn't they be identical? Anyone has insight about this?
They are computed quantities from iterations with
First thanks for Jinsong's suggestions
I would like to do a bootstrap in a nonlinear model, but it fails to
converge most of the time (it did converge when I just used nls without
boot). Thus, I use the try function to resolve my problem. This
following code is from Jinsong's suggestion.
Dear Matthew,
First of all I'm forwarding this to R-SIG-Mixed, which is a more
appropriate list for your question.
Using a mixed effect with only 5 levels is a borderline situation.
Douglas Bates recommends at least 6 levels in order to get a more or
less reliable estimate. So I would consider
Returning NA (of the correct length, not length 1) will not help you, as
all the derived statistics from the bootstrap runs will be NA.
But here you never looked at the result of try.
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
First thanks for Jinsong's suggestions
I would like to do a
Hi,
I wish to compute multivariate test statistics for a within-subjects repeated
measures design with anova.mlm.
This works great if I only have two factors, but I don't know how to compute
interactions with more than two factors.
I suspect, I have to create a new grouping factor and then
Dear Everyone,
I am trying to create a csv-file with different results from the table function.
Imagine a data-frame with two vectors a and b where b is of the class factor.
I use the tapply function to count a for the different values of b.
tapply(a,b,table)
and I use the table function to have a
René Sachse wrote:
Damien wrote:
I'm looking into opening an url on a server which requires
authentication.
Under a Windows Operating System you could try to start R with the
--internet2 option. This worked in my case.
Thanks René it did the trick for me too!
Best Regards,
Damien
On 8 Sep, 20:15, Prof Brian Ripley [EMAIL PROTECTED] wrote:
On Mon, 8 Sep 2008, Damien wrote:
Hi all,
I'm looking into opening an url on a server which requires
authentication.
After failing to find some kind of connection structure to fill in I
turned to explicitly stating the
On Mon, 8 Sep 2008, Megh Dal wrote:
Hi,
I have following kind of dataset (all are dates) in my Excel sheet.
09/08/08
09/05/08
09/04/08
09/02/08
09/01/08
29/08/2008
28/08/2008
27/08/2008
26/08/2008
25/08/2008
22/08/2008
21/08/2008
20/08/2008
18/08/2008
14/08/2008
13/08/2008
08/12/08
08/11/08
rami batal wrote:
Dear all,
I am trying to apply kmeans clustering on a data file (size is about 300
Mb)
I read this file using
x=read.table('file path' , sep= )
then i do kmeans(x,25)
but the process stops after two minutes with an error :
Error: cannot allocate vector of size
Schadwinkel, Stefan wrote:
Hi,
I wish to compute multivariate test statistics for a within-subjects repeated
measures design with anova.mlm.
This works great if I only have two factors, but I don't know how to compute
interactions with more than two factors.
I suspect, I have to create
As suggested in ?[.data.frame, try:
dat[match('a1', rownames(dat)),]
Haris Skiadas
Department of Mathematics and Computer Science
Hanover College
On Sep 9, 2008, at 2:41 AM, Xianming Wei wrote:
Hi all,
While dat['a1',] and dat['a10',] produce the same results in the
following example, I'd
Is this what you want:
my.df <- data.frame(a = c(1:5, 1:10, 1:20), b = runif(35))
split(my.df, c(0, cumsum(diff(my.df$a) < 0)))
$`0`
a b
1 1 0.2655087
2 2 0.3721239
3 3 0.5728534
4 4 0.9082078
5 5 0.2016819
$`1`
a b
6 1 0.89838968
7 2 0.94467527
8 3 0.66079779
9 4
Hi all,
Given a data frame:
my.df <- data.frame(a = c(1:5, 1:10, 1:20), b = runif(35))
I want to split it by a such that I end up with a list containing 3
components i.e. the first containing a = 1 to 5, the second a = 1 to 10 etc.
In other words, sets of sequences of a.
I can't seem to find
try this:
dat <- data.frame(x1 = paste(letters[1:5],10, sep=''), x2=rnorm(5))
row.names(dat) <- dat$x1
dat['a1' %in% row.names(dat), ]
dat['a10' %in% row.names(dat), ]
I hope it helps.
Best,
Dimitris
Hi all,
While dat['a1',] and dat['a10',] produce the same results in the
following
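The exact-match behaviour the poster asked for (an all-NA row for a missing name, rather than partial matching) can be sketched with the match() approach suggested in this thread:

```r
# Row names ending in '10'; 'a1' is not among them
dat <- data.frame(x1 = paste(letters[1:5], 10, sep = ""), x2 = rnorm(5))
rownames(dat) <- dat$x1

present <- dat[match("a10", rownames(dat)), ]  # the real 'a10' row
absent  <- dat[match("a1",  rownames(dat)), ]  # match() gives NA -> all-NA row
```

Unlike dat['a1',], which partially matches 'a10', match() compares the whole string and returns NA for a name that is not present.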
Hi all,
I want to plot the grouped means of some variables. The dependent variables
and the grouping factor are stored in different columns. I want to draw a
simple line-plot of means, in which the x-axis represents the variables and
y-axis represents the means. The means of the groups should
On Mon, Sep 8, 2008 at 7:47 PM, Dimitri Liakhovitski [EMAIL PROTECTED] wrote:
Thank you everyone for your responses. I'll answer several questions.
1. Disclaimer: I have **NO IDEA** of the details of what you want
to do or why
-- but I am willing to bet that there are better ways of doing
Try this:
strptime(x, ifelse(nchar(x) == 8, '%d/%m/%y', '%d/%m/%Y'))
On Tue, Sep 9, 2008 at 3:48 AM, Megh Dal [EMAIL PROTECTED] wrote:
Hi,
I have following kind of dataset (all are dates) in my Excel sheet.
09/08/08
09/05/08
09/04/08
09/02/08
09/01/08
29/08/2008
28/08/2008
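Henrique's ifelse()/strptime() idea applied to two of the dates above; that the 8-character dates are also day-first is an assumption carried over from the thread:

```r
# 8-character dates use a 2-digit year; 10-character ones a 4-digit year
x <- c("09/08/08", "29/08/2008")
d <- strptime(x, ifelse(nchar(x) == 8, "%d/%m/%y", "%d/%m/%Y"))
```

Both entries then parse to dates in 2008, on a common POSIXlt footing.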
Why not Format -> Cells in Excel?
el
on 9/9/08 1:03 PM Henrique Dallazuanna said the following:
Try this:
strptime(x, ifelse(nchar(x) == 8, '%d/%m/%y', '%d/%m/%Y'))
On Tue, Sep 9, 2008 at 3:48 AM, Megh Dal [EMAIL PROTECTED] wrote:
Hi,
I have following kind of dataset (all are dates)
Try creating a new object:
tb <- rbind(table(a), do.call(rbind.data.frame, tapply(a, b, table)))
names(tb) <- unique(a)
then write to csv by write.table.
On Tue, Sep 9, 2008 at 5:48 AM, Kunzler, Andreas [EMAIL PROTECTED] wrote:
Dear Everyone,
I am trying to create a csv-file with different results
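A hedged sketch of the tapply()/table() counting step with toy a and b (not the poster's data); forcing a common set of levels keeps every row the same width for write.csv:

```r
a <- c(1, 2, 2, 3, 1, 2)
b <- factor(c("x", "x", "y", "y", "y", "x"))
lv <- sort(unique(a))                         # common columns for every level of b
tb <- do.call(rbind, tapply(a, b, function(v) table(factor(v, levels = lv))))
# write.csv(tb, "counts.csv")  # one row per level of b, one column per value of a
```

tapply() here returns one fixed-width table per level of b, and rbind() stacks them into a matrix ready for writing.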
On 9/9/2008 6:49 AM, Erich Studerus wrote:
Hi all,
I want to plot the grouped means of some variables. The dependent variables
and the grouping factor are stored in different columns. I want to draw a
simple line-plot of means, in which the x-axis represents the variables and
y-axis
Dear Erich,
Have a look at ggplot2
library(ggplot2)
dataset <- expand.grid(x = 1:20, y = factor(LETTERS[1:4]), value = 1:10)
dataset$value <- rnorm(nrow(dataset), sd = 0.5) + as.numeric(dataset$y)
plotdata <- aggregate(dataset$value, list(x = dataset$x, y = dataset$y),
mean)
plotdata <-
Hi,
Just a thought.
You wrote:
ob1 <- object1$ORF
ob2 <- object2$ORF
and then use cbind like,
HG <- cbind(on1,ob2)
but there is an error. Is there any other function I can use?
If you copied and pasted this from R, then your problem is
Hg <- cbind(on1,ob2)
You mean
Hg <- cbind(ob1,ob2)
So perhaps
Hi,
After manipulating my data I have ended up with 5 different data frames
with different numbers of observations but the same
number of variables (columns).
An example, if I write str(object1), I see this,
'data.frame': 47 obs. of 3 variables:
$ ORF: Factor w/ 245 levels
Is this day month year?
Look at chron, or maybe the easiest is to use Excel to change the format.
On Tue, Sep 9, 2008 at 7:12 AM, Dr Eberhard Lisse [EMAIL PROTECTED] wrote:
Why not Format -> Cells in Excel?
el
on 9/9/08 1:03 PM Henrique Dallazuanna said the following:
Try this:
strptime(x,
Hi Erich,
Have a look at brkdn.plot in the plotrix package.
Jim
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented,
On Tue, Sep 9, 2008 at 6:56 AM, ONKELINX, Thierry
[EMAIL PROTECTED] wrote:
Dear Erich,
Have a look at ggplot2
library(ggplot2)
dataset <- expand.grid(x = 1:20, y = factor(LETTERS[1:4]), value = 1:10)
dataset$value <- rnorm(nrow(dataset), sd = 0.5) + as.numeric(dataset$y)
Or with
Hello
Many thanks. It works just fine.
How about the packages issue? That is, same thing for the installation
path.
Cheers
Ed
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Monday, September 08, 2008 10:01 PM
To: Eduardo M. A. M.Mendes
Cc:
Need to buy a fast computer for running R on. Today we use a 2.8 GHz Intel D CPU
and the calculations take around 15 days. Is it possible to get the same
calculations down to minutes/hours by only changing the hardware?
Should I go for a really fast dual 32-bit CPU and run R over linux or xp or
go
Thanks for all the suggestions, but it seems that all these functions need
a rearrangement of my data, since in my case the dependent variables are in
different columns. The error.bars.by function seems to be the only plotting
function that does not need a rearrangement. Are there other
You might look at ?.libPaths
(note the dot) and play around with adding a .libPaths command
to your Rprofile.site and again you may need Administrator rights
when editing it. If that does not help then you can try clarifying
the problem. In particular what the same refers to and what
is
After doing a PCA using princomp, how do you view how much each component
contributes to variance in the dataset. I'm still quite new to the theory of
PCA - I have a little idea about eigenvectors and eigenvalues (these
determine the variance explained?). Are the eigenvalues related to loadings
On Tue, Sep 9, 2008 at 8:38 AM, Erich Studerus
[EMAIL PROTECTED] wrote:
Thanks for all the suggestions, but it seems that all these functions need
a rearrangement of my data, since in my case the dependent variables are in
different columns. The error.bars.by function seems to be the only
Both vorticity and divergence are defined in terms of partial derivatives.
You can compute these derivatives using the `grad' function in numDeriv
package.
U <- function(X) { your U function }
V <- function(X) { your V function }
# where X = c(x,y)
library(numDeriv)
grU <- function(X) grad(X,
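The snippet above is cut off, so here is a self-contained sketch of the same idea with a hand-rolled central-difference gradient standing in for numDeriv::grad (so the block runs without the package); the U and V fields are toy illustrations, not from the thread:

```r
# Central-difference gradient, a stand-in for numDeriv::grad
num_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)  # perturb only coordinate i
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

U  <- function(X) X[1] * X[2]     # toy u(x, y) = x * y
V  <- function(X) X[1] + X[2]^2   # toy v(x, y) = x + y^2
X0 <- c(1, 2)

gU <- num_grad(U, X0)             # (du/dx, du/dy)
gV <- num_grad(V, X0)             # (dv/dx, dv/dy)
divergence <- gU[1] + gV[2]       # du/dx + dv/dy
vorticity  <- gV[1] - gU[2]       # dv/dx - du/dy
```

For these toy fields the divergence at (1, 2) is y + 2y = 6 and the vorticity is 1 - x = 0, which the numerical derivatives reproduce.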
Hi,
I'm trying to verify the assumption of homogeneity of variance of residuals in
an ANOVA with levene.test. I don't know how to define the groups. I have 3
factors : A, B and C(AxB).
What do I have to change or to add in the command to set that I'm working with
the residuals and to set
Many thanks. I shall look at it. In case I run into trouble again, I'll try
to clarify the same.
Ed
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 09, 2008 10:46 AM
To: Eduardo M. A. M.Mendes
Cc: r-help@r-project.org
Subject: Re: [R]
If you mean you want an EVD with a fat left tail (instead of a fat
right tail), then can;t you just multiply all the values by -1 to
reverse the distribution? A new location parameter could then shift
the distribution wherever you want along the number line ...
-Aaron
On Mon, Sep 8, 2008 at
Hi,
Please could someone explain how this element of predict.lm works?
From the help file
`
newdata
An optional data frame in which to look for variables with which to
predict. If omitted, the fitted values are used.
'
Does this dataframe (newdata) need to have the same variable names as
Just try it:
BOD # built-in data frame
  Time demand
1    1    8.3
2    2   10.3
3    3   19.0
4    4   16.0
5    5   15.6
6    7   19.8
BOD.lm <- lm(demand ~ Time, BOD)
predict(BOD.lm, list(Time = 10))
1
25.73571
predict(BOD.lm, list(10))
Error in eval(expr, envir, enclos) : object
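The point of the session above: predict() looks variables up by name in newdata, so the data frame must carry the model's variable names. A runnable version with the BOD data that ships with R:

```r
# Fit on the built-in BOD data frame
BOD.lm <- lm(demand ~ Time, data = BOD)

# Works: newdata has a column named 'Time', matching the model formula
ok <- predict(BOD.lm, newdata = data.frame(Time = 10))

# Fails as in the quoted session: data.frame(10) has no column named 'Time'
# predict(BOD.lm, newdata = data.frame(10))
```

The prediction at Time = 10 is about 25.736, matching the output shown in the thread.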
on 09/09/2008 09:59 AM Williams, Robin wrote:
Hi,
Please could someone explain how this element of predict.lm works?
From the help file
`
newdata
An optional data frame in which to look for variables with which to
predict. If omitted, the fitted values are used.
'
Does this
Hi,
my data table has 38939 rows. R prints the first rows and then
prints the message: [ reached getOption("max.print") -- omitted 27821
rows ].
is it possible to set the maxprint parameter so that R prints all the rows?
tia,
anjan
--
anjan
On Tue, 9 Sep 2008, Nic Larson wrote:
Need to buy a fast computer for running R on. Today we use a 2.8 GHz Intel D CPU
and the calculations take around 15 days. Is it possible to get the same
calculations down to minutes/hours by only changing the hardware?
No: you would need to arrange to
Hi,
I'm trying to redefine the contrasts for a linear model.
With a 2 level factor, x, with levels A and B, a two level
factor outputs A and B - A from an lm fit, say
lm(y ~ x). I would like to set the contrasts so that
the coefficients output are -0.5 (A + B) and B - A,
but I can't get the sign
On Tue, Sep 9, 2008 at 3:48 AM, Kunzler, Andreas [EMAIL PROTECTED] wrote:
Dear Everyone,
I am trying to create a csv-file with different results from the table function.
Imagine a data-frame with two vectors a and b where b is of the class factor.
I use the tapply function to count a for the
I did PCA stuff years ago; there is a thing called a scree plot
which will give an indication of the number of PCs and the variance
explained.
You might want to do a web search on scree plot and PCA.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of
I have a data set of mean velocity, discharge, and mean depth. I need
to find out which model best fits them out of log linear, linear, some
other kind of model... Using excel I have found that linear is not
that bad and log10(discharge) vs. the other two variables (I am trying
to predict
-0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
All you can get from y ~ x is an intercept (a column of ones) and a single
'contrast' column for 'x'.
If you use y ~ 0+x you can get two columns for 'x', but R does not give
you an option of what columns in the case: see the
Hello,
I am using Rserve to create a dedicated computational back-engine. I
generate and pass an array of data to a java application on a separate
server. I was wondering if the same is possible for an image. I believe
that Rserve supports passing certain R objects and JRclient can cast
these
Dear Readers:
I have two issues in nonparametric statistical analysis that i need
help:
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test. I have seen
an earlier thread (sometime in 2003) where someone was trying to
I believe I have found my solution, so please disregard. Thanks
stephen sefick ssefick at gmail.com writes:
I have a data set of mean velocity, discharge, and mean depth. I need
to find out which model best fits them out of log linear, linear, some
other kind of model... Using excel I have found that linear is not
that bad and log10(discharge) vs. the
Prof Brian Ripley wrote:
-0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
All you can get from y ~ x is an intercept (a column of ones) and a
single 'contrast' column for 'x'.
If you use y ~ 0+x you can get two columns for 'x', but R does not
give you an option of what
For the command 'spectrum' I read:
The spectrum here is defined with scaling 1/frequency(x), following S-PLUS.
This makes the spectral density a density over the range (-frequency(x)/2,
+frequency(x)/2], whereas a more common scaling is 2π and range (-0.5, 0.5]
(e.g., Bloomfield) or 1 and
Peter Dalgaard wrote:
Prof Brian Ripley wrote:
-0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
All you can get from y ~ x is an intercept (a column of ones) and a
single 'contrast' column for 'x'.
If you use y ~ 0+x you can get two columns for 'x', but R does not
I write a .mat file using the writeMat() command, but when I try to load it
in Matlab it says the file may be corrupt. I did it a month ago and it
worked. Is there any option that I can change to make the file readable
to Matlab?
A <- c(1:10)
dim(A) <- c(2,5)
library(R.matlab)
Hi,
I have little experience using wavelet and I would like to know if it is
possible,using R wavelet package, to have a plot of frequency versus time.
thank you
giov
--
View this message in context:
http://www.nabble.com/help-on-wavelet-tp19395583p19395583.html
Sent from the R help mailing
this may be a better question for r-devel, but ...
Is there a particular reason (and if so, what is it) that
the inverse link is not in the list of allowable link functions
for the binomial family? I initially thought this might
have something to do with the properties of canonical
vs
options("max.print")
$max.print
[1] 99999
options(max.print=100000)
options("max.print")
$max.print
[1] 1e+05
...so check what your max.print is, and figure out whether you need to
set it to nrow, ncol, or nrow*ncol of your data frame...then do so...though
of course, this is a global variable,
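The advice above as a self-contained sketch; raising max.print to rows * columns (with slack) lets the whole table print, and saving the old value lets you restore the global option afterwards:

```r
# Raise max.print high enough for a 38939-row table, then restore it
old <- options(max.print = 100000)   # options() returns the previous setting
now <- getOption("max.print")        # confirm the new ceiling
# print(dat)                         # would now print every row
options(old)                         # put the global setting back
```

Because max.print is a global option, restoring it is good hygiene in scripts and functions.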
It depends on what you want to do. In wavelet speak frequency is scale.
these are the libraries:
wmtsa - wavCWT (make sure that you pick the wavelet. I suggest morlet
because it is compactly supported (disappears to zero quickly))
I would also suggest the fields packages for the tim.colors
Dear Colleagues,
I have a dataframe with variables:
 [1] ID       category a11      a12      a13      a21
 [7] a22      a23      a31      a32      b11      b12
[13] b13      b21      b31      b32      b33      b41
[19] b42
I am combining many different random forest objects run on the same data set
using the combine ( ) function. After combining the forests I am not sure
whether the variable importance, local importance, and rsq predictors are
recalculated for the new random forest object or are calculated
Is this Month-Day or Day-Month or a mixture of both?
I still think using Format -> Cells -> Date will work
much better...
el
On 09 Sep 2008, at 11:21 , David Scott wrote:
On Mon, 8 Sep 2008, Megh Dal wrote:
Hi,
I have following kind of dataset (all are dates) in my Excel sheet.
Maybe something like this:
by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
...which would then need to be reformatted into a data frame (there is
probably an easy way to do this which I don't know).
aggregate seems like a more reasonable choice, but the function for
aggregate
On 9/9/2008 2:12 PM, Adam D. I. Kramer wrote:
Maybe something like this:
by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
...which would then need to be reformatted into a data frame (there is
probably an easy way to do this which I don't know).
sparseby() in the reshape
Perfect!
Thanks.
On Tue, Sep 9, 2008 at 11:27 AM, Duncan Murdoch [EMAIL PROTECTED]wrote:
On 9/9/2008 2:12 PM, Adam D. I. Kramer wrote:
Maybe something like this:
by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
...which would then need to be reformatted into a data frame
On Tue, Sep 9, 2008 at 6:31 AM, Nic Larson [EMAIL PROTECTED] wrote:
Need to buy fast computer for running R on. Today we use 2,8 MHz intel D cpu
and the calculations takes around 15 days. Is it possible to get the same
calculations down to minutes/hours by only changing the hardware?
Should I
Hi Markus,
Many thanks. Is the cluster variable you mention below available in the
environment of the nodes ? Specifically, within that environment, how
could one identify the rank of that specific node ?
My code would use that information to partition the problem.
Thanks,
Tolga
Markus
Dear R Users,
I am on Windows XP SP2 platform, using R version 2.7.2 . I was wondering
if there is a way to find out, within R, the number of CPU's on my machine
? I would use this information to set the number of nodes in a cluster,
depending on the machine. Sys.info() and .Platform do not
Dear List:
I have a dataset with over 5000 records and I would like to put the Count in
bins
based on the ForkLength. e.g.
Forklength Count
32-34?
35-37?
38-40?
and so on...
and lastly I would like to plot (scatterplot) including the
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Hi Markus,
Many thanks. Is the cluster variable you mention below available in the
environment of the nodes ? Specifically, within that environment, how
could one identify the rank of that specific node ?
No -- that isn't the way snow works. With
Greetings -- I have a dataframe a with one element a vector, time, of
POSIXct values. What's a good way to split the data frame into
periods of a$time, e.g. days, and apply a function, e.g. mean, to some
other column of the dataframe, e.g. a$value?
Cheers,
Alexy
the diptest package, perhaps?
Roger Koenker
Department of Economics
University of Illinois
Champaign, IL 61820
url: www.econ.uiuc.edu/~roger
email: [EMAIL PROTECTED]
vox: 217-333-4558
fax: 217-244-6678
On Sep 9, 2008, at
Understood, that's what I'll do. I'm thinking of exporting the number of
nodes to all nodes and passing in the node rank as 1:nonodes through
clusterApply.
Thanks all,
Tolga
Luke Tierney [EMAIL PROTECTED]
09/09/2008 20:11
To
[EMAIL PROTECTED]
cc
[EMAIL PROTECTED], r-help@r-project.org
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Dear R Users,
I am on Windows XP SP2 platform, using R version 2.7.2 . I was wondering
if there is a way to find out, within R, the number of CPU's on my machine
? I would use this information to set the number of nodes in a cluster,
depending on the
Many thanks, that's very helpful.
Regards,
Tolga
- Original Message -
From: Prof Brian Ripley [EMAIL PROTECTED]
Sent: 09/09/2008 20:57 CET
To: Tolga Uzuner
Cc: r-help@r-project.org
Subject: Re: [R] Information on the number of CPU's
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Dear
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student of Werner Steutzle's, c. 2003/04) did some
work on this. There is some useful code on Steutzle's website:
Whoops! I think that should be Stuetzle --- though I very much doubt that he
reads the list.
Mark Difford wrote:
Hi Amin,
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student
The wmic command line utility can also be used to query this; on a
dual-core Vista laptop I get
C:\Users\luke> wmic cpu get NumberOfCores,NumberOfLogicalProcessors
NumberOfCores  NumberOfLogicalProcessors
2              2
luke
--
Luke Tierney
University of Iowa
Hi Amin,
And I have just remembered that there is a function called curveRep in Frank
Harrell's Hmisc package that might be useful, even if not quite in the
channel of your enquiry. curveRep was added to the package after my
struggles, so I never used it and so don't know how well it performs
hello,
following an NMDS analysis (performed with metaMDS or isoMDS), is it
possible to rotate the axes through a varimax rotation?
Thanks in advance.
Bernd Panassiti
Is there is function in R equivalent to Matlab's csaps? I need a
spline function with the same calculation of the smoothing parameter
in csaps to compare some results. AFAIK, the spar in smooth.spline is
related but not the same.
Does anyone know why I get the following error when trying tsdiag?
Error in UseMethod("tsdiag") : no applicable method for "tsdiag"
I am invoking it as: tsdiag(mar).
Thank you.
Kevin
?aggregate
?window.zoo
?rollapply
Anyway, have a look at the zoo package.
On Tue, Sep 9, 2008 at 3:25 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
Greetings -- I have a dataframe a with one element a vector, time, of
POSIXct values. What's a good way to split the data frame into periods of
a$time,
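A hedged sketch of the split-by-period idea with toy data (the times and values are made up): convert the POSIXct column to dates, then take per-day means; tapply is used here, though aggregate or zoo's rollapply/aggregate work too, as suggested above:

```r
# Six half-day observations spanning three calendar days (UTC to be determinate)
a <- data.frame(
  time  = as.POSIXct("2008-09-08 00:00:00", tz = "UTC") + (0:5) * 12 * 3600,
  value = 1:6
)

daily <- tapply(a$value, as.Date(a$time, tz = "UTC"), mean)  # mean per day
```

Any function can replace mean here, and grouping by format(a$time, "%Y-%W") would give weekly periods instead.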
Have you looked at the vegan vignette? I know there is a procrustes rotation.
On Tue, Sep 9, 2008 at 3:54 PM, Bernd Panassiti
[EMAIL PROTECTED] wrote:
hello,
subsequently to a NMDS analysis (performed with metaMDS or isoMDS) is
it possible to
rotate the axis through a varimax-rotation?
Hello R users,
I am trying to make my first package and I get an error that I can't
understand. The package is built out of three files (one for functions, one
for S4 classes and one for S4 methods).
Once I source them I run
package.skeleton(name = "TDC")
within a R session and I get
Creating
-09-08 14:00:00 3
4 2008-09-08 21:00:00 4
$`20080909`
dates values
5 2008-09-09 04:00:00 5
6 2008-09-09 11:00:00 6
7 2008-09-09 18:00:00 7
$`20080910`
dates values
8 2008-09-10 01:00:00 8
9 2008-09-10 08:00:00 9
10 2008-09-10 15
This is why some help pages have references: please use them (Venables
Ripley explain the exact formulae used in R).
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
For the command 'spectrum' I read:
The spectrum here is defined with scaling 1/frequency(x), following
S-PLUS. This makes the
This should do what you want.
#--------------------
x <- read.table('clipboard', header=TRUE, as.is=TRUE)
# convert dates
x$date <- as.POSIXct(strptime(x$SampleDate, "%m/%d/%Y"))
# put ForkLength into bins
x$bins <- cut(x$ForkLength, breaks=c(32, 34, 37, 40), include.lowest=TRUE)
# count the bins
tapply(x$Count, x$bins,
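The cut()/tapply() binning above, completed with made-up ForkLength/Count values so it runs on its own (the breaks are the ones from the snippet):

```r
# Toy stand-in for the poster's 5000-record ForkLength/Count data
x <- data.frame(ForkLength = c(32, 33, 35, 36, 38, 40),
                Count      = c(1,  2,  3,  4,  5,  6))

# Bin ForkLength: [32,34], (34,37], (37,40]
x$bins <- cut(x$ForkLength, breaks = c(32, 34, 37, 40), include.lowest = TRUE)

counts <- tapply(x$Count, x$bins, sum)   # total Count per ForkLength bin
```

The result is one total per bin, ready for a scatterplot or barplot of counts against bin midpoints.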
On Mon, 8 Sep 2008, Qiong Yang wrote:
Hi,
The standard error from logistic regression is slightly different from the
naive SE from GEE under independence working correlation structure.
Yes
Shouldn't they be identical? Anyone has insight about this?
No, they shouldn't. They are different
Sorry, I misread your message. Prof Ripley is right, as usual -- the
estimates use different stopping criteria and so are just numerically
different.
-thomas
On Tue, 9 Sep 2008, Thomas Lumley wrote:
On Mon, 8 Sep 2008, Qiong Yang wrote:
Hi,
The standard error from logistic
Version 3.9 of the survey package is now on CRAN. Since the last
announcement (version 3.6-11, about a year ago) the main changes are
- Database-backed survey objects: the data can live in a SQLite (or other
DBI-compatible) database and be loaded as needed.
- Ordinal logistic regression
-