I have a data set like this
ID = c("A","A","A","A","A","A","A","B","B","B","B","B","B","B")
s = c(1.1,2.2,1.3,1.1,3.1,4.1,4.2,1.1,2.2,1.3,1.1,3.1,4.1,4.2)
d = c(1,2,3,4,5,6,7,1,2,3,4,5,6,7)
t = c(-3,-2,-1,0,1,2,3,-3,-2,-1,0,1,2,3)
mydata <- data.frame(cbind(as.character(ID), as.numeric(s), as.integer(d), as.numeric(t)))
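As an aside, data.frame(cbind(...)) coerces every column to character, because cbind() builds a character matrix first; the as.numeric() calls inside cannot undo that. A sketch that builds the frame directly and keeps each column's type:

```r
# Building the data frame directly preserves the column types;
# data.frame(cbind(...)) would first flatten everything to character.
ID <- rep(c("A", "B"), each = 7)
s  <- c(1.1, 2.2, 1.3, 1.1, 3.1, 4.1, 4.2, 1.1, 2.2, 1.3, 1.1, 3.1, 4.1, 4.2)
d  <- rep(1:7, 2)
t  <- rep(-3:3, 2)
mydata <- data.frame(ID = ID, s = s, d = d, t = t)
str(mydata)  # s and t stay numeric, d stays integer
```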
On Wed, Jul 1, 2009 at 2:47 AM, Dirk Eddelbuettel e...@debian.org wrote:
Rainer,
On 30 June 2009 at 14:30, Rainer M Krug wrote:
| following a discussion on difference in speed of R between R and Linux, I am
| wondering: is there a howto to get the most (concerning speed) out of R? I
| am
Thank you, Jim!
It looks much better with that new aspect ratio!
Unfortunately the legend is located at the same place,
too far on the right side next to the border.
Any ideas?
Thanks, Udo
Quoting jim holtman jholt...@gmail.com:
add
par(mar=c(2.5,4,1,1))
just after layout
On Tue,
Dear Max,
Le mardi 30 juin 2009 à 21:36 -0400, Max Kuhn a écrit :
Well, on this one, I might bring my record down to commercial software
bug fix time lines...
Having actually looked at the code before wailing, I had such a hunch...
[ Snip ... ]
We had to do some interesting things to get
Hallo,
I need your help.
I fitted my distribution of data with beta-prime; now I need to plot the
cumulative distribution. For other distributions like the Gamma it is easy:
x <- seq(0, 100, 0.5)
plot(x, pgamma(x, shape, scale), type = "l", col = "red")
but what about beta-prime? In R there exists only pbeta
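There is a standard trick: if Y follows a beta-prime distribution with shapes (a, b), then Y/(1+Y) follows a Beta(a, b), so the CDF can be written with pbeta. A sketch (pbetaprime is an invented helper name, and the shape values are arbitrary):

```r
# Beta-prime CDF via pbeta: if Y ~ BetaPrime(a, b), then Y/(1+Y) ~ Beta(a, b)
pbetaprime <- function(q, shape1, shape2) pbeta(q / (1 + q), shape1, shape2)
x <- seq(0, 100, 0.5)
plot(x, pbetaprime(x, 2, 3), type = "l", col = "red")
```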
Try this:
Key <- list(text = list(c("FDP", "FDL")), points = list(pch = c("*", "o"),
col = c("red", "blue"), cex = 2), space = "right")
xyplot(S ~ t | ID, key = Key, panel = function(...) {
panel.points(...,
col = rep(c("red", "blue"), 3:4),
Thank you very much for your answer.
Mark hit the point of my query. Now we need somebody who knows how R
computes the fitting values, and why it does not use the inverse link...
In my humble opinion I think that R uses a kind of interpolation, using
some standard points (with the minimum value
Christopher W. Ryan cryan at binghamton.edu writes:
suppose I have some logical vector
x <- as.logical(c(0,0,0,1,0,0,1,1,0))
x
How would I make the words TRUE appear on the screen in a different
color from the words FALSE?
Thanks.
--Chris
# install.packages("xterm256")
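Beyond xterm256, a base-only sketch with raw ANSI escape codes (works only in terminals that honor them; colorize is an invented helper):

```r
# Print TRUE in green and FALSE in red using raw ANSI escape sequences.
x <- as.logical(c(0, 0, 0, 1, 0, 0, 1, 1, 0))
colorize <- function(v) ifelse(v, "\033[32mTRUE\033[0m", "\033[31mFALSE\033[0m")
cat(colorize(x), "\n")
```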
Maybe I haven't been sufficiently clear on what I am after:
I am looking for R adaptations of approaches (relevant to hierarchical
clustering of categorical variables) described in
Steinley and Brusco 2008 Selection of variables in cluster analysis: an
empirical comparison of eight
Dear Duncan and Rolf,
That's funny! Thanks a lot.
Best regards,
Craig
Duncan Murdoch wrote:
On 30/06/2009 5:11 PM, Craig P. Pyrame wrote:
Dear Rolf,
What do you mean?
He was talking about the fortunes package. Install it, type
fortune(), and you'll get a fortune cookie message. Maybe
Dear all,
Does anyone know if there exists an R function
in some package to compute streamlines
given the two components of a vectorial field?
Thanks in advance,
Cheers
Andrea
__
R-help@r-project.org mailing list
Hello all
I have a fit resulting from a call to glm. Now, I would like to extract the
model frame MF, and add some variables
from the original data frame DF. To do this, I need to know which rows in DF
correspond to rows in MF (since some were dropped by na.omit). How can I do
this? It's probably
David Hugh-Jones wrote:
Hello all
I have a fit resulting from a call to glm.
Now, now, no reason to overreact...
(Women can have fits upstairs -- sign in Indian tailor shop)
Now, I would like to extract the
model frame MF, and add some variables
from the original data frame DF. To do
I have a zoo object on daily data for 10 years. Now I want to create a list,
wherein each member of that list is the monthly observations. For example,
1st member of list contains daily observation of 1st month, 2nd member
contains daily observation of 2nd month etc.
Then for a particular month,
Hi Gabor,
thanks for this great advice. Just one more question:
I cannot find in the documentation for gsubfn or strapply how to switch off
case sensitivity for the regex, like e.g. the ignore.case = TRUE
argument in gregexpr. Is there a way?
TIA,
Mark
---
Mark
On Wed, Jul 1, 2009 at 3:02 AM, gug guygr...@netvigator.com wrote:
sapply(ls(), function(x) object.size(get(x)))
-This lists all objects with the memory each is using (I should be honest
and say that, never having used sapply before, I don't truly understand
the syntax of this, but it
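Spelled out: sapply() walks over the names ls() returns and calls object.size(get(name)) on each; sorting the result makes the largest objects easy to spot (a sketch with two throwaway objects):

```r
# List every object in the workspace with its memory footprint, largest first.
a <- 1:10
b <- rnorm(10000)
sizes <- sapply(ls(), function(x) object.size(get(x)))
sort(sizes, decreasing = TRUE)
```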
(*Note*: this is an R community question, not a statistical or coding
question. Since this is my first time writing such a post, I hope no one
will take offence at it.)
Hello all,
I will be attending useR 2009 next week, and was wondering if there are any
of you who are *bloggers* intending to
Hi,
1.) I am trying to calculate the autocorrelation function for returns based
on rolling window, but it doesn't work.
My code is
rollapply(Returns,20,acf).
2.) My next try is
rollapply(Returns_2,20,cor)
Error in FUN(cdata[st, i], ...) : supply both 'x' and 'y' or a matrix-like
'x'
Thank
On 30/06/09 17:53, Chuck White wrote:
[...]
Is there a way to avoid the for loop? The following seems to work:
lapply(density.factor,grep,names(data.df))
However, that produces a list of lists which need to be merged. Note that in
the above example since we have 2 regular expressions, there
Christopher W. Ryan wrote:
suppose I have some logical vector
x <- as.logical(c(0,0,0,1,0,0,1,1,0))
x
How would I make the words TRUE appear on the screen in a different
color from the words FALSE?
tfsample <- as.logical(sample(c(0:1), 10, TRUE))
plot(1:10, type = "n")
Hi,
You might want to check package xterm256.
http://cran.r-project.org/web/packages/xterm256/index.html
http://romainfrancois.blog.free.fr/index.php?post/2009/04/18/Colorful-terminal%3A-the-R-package-%22xterm256%22
This works if the terminal you are using recognizes xterm escape
sequences as
On Wed, Jul 1, 2009 at 6:22 AM, Andriy Fetsun fet...@googlemail.com wrote:
Hi,
1.) I am trying to calculate the autocorrelation function for returns based
on rolling window, but it doesn't work.
My code is
rollapply(Returns,20,acf).
That's because acf returns a list. Try this:
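A sketch of one way to do it: have the function return a plain numeric vector, since acf() itself returns a list that rollapply cannot stack (window size and lag count here are arbitrary, and the data are invented):

```r
library(zoo)
set.seed(42)
Returns <- zoo(rnorm(60))
# Return just the numeric autocorrelations at lags 1..5, plot suppressed,
# so each window yields a plain vector that rollapply can bind into rows.
ac <- rollapply(Returns, width = 20,
                FUN = function(x) acf(coredata(x), lag.max = 5, plot = FALSE)$acf[-1])
dim(ac)  # one row per window, one column per lag
```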
Try this:
z <- zooreg(1:365, start = as.Date("2001-01-01"), freq = 1)
f <- head
tapply(seq_along(z), as.yearmon(time(z)), function(ix) f(z[ix]))
where you should replace f with a function that does whatever
you want with each month's data. Here we just used head as
an example.
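If the goal is literally a list with one member per month, as in the original question, zoo's split method gets there directly (a sketch on the same series):

```r
library(zoo)
z <- zooreg(1:365, start = as.Date("2001-01-01"), freq = 1)
# split() on a yearmon grouping gives a list of monthly sub-series:
monthly <- split(z, as.yearmon(time(z)))
length(monthly)       # one element per month
length(monthly[[1]])  # January's daily observations
```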
On Wed, Jul 1, 2009
I deal with a huge amount of Biology data stored in different databases.
The databases belonging to the Bioconductor organization can be accessed through
Bioconductor packages.
Unluckily some useful data is stored in databases like, for instance, miRDB,
miRecords, etc ... which offer just an
strapply and gsubfn pass the ... argument on to gsub, so they accept
all the same arguments. See ?strapply and ?gsubfn, e.g.
strapply(MyString, "[bcdfghjklmnpqrstvwxyz]+", nchar, ignore.case = TRUE)
[[1]]
[1] 5 2
gsubfn("[bcdfghjklmnpqrstvwxyz]+", "X", MyString, ignore.case = TRUE)
[1] "XiX"
On Wed, Jul
I am typing the following on the command prompt:
variab = read.csv(file.choose(), header=T)
variab
It lists 900,000 ( this is the total number of observations in variab )
minus 797124 observations and prompts the following message
[ reached getOption("max.print") -- omitted 797124 entries ]
Is
Change the value with 'options':
max.print: integer, defaulting to 99999. print or show methods can
make use of this option, to limit the amount of information that is
printed, to something in the order of (and typically slightly less
than) max.print entries.
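For example, to raise the limit temporarily (the value here is arbitrary):

```r
# Raise the print limit, check it, then restore the previous setting.
old <- options(max.print = 1000000)
getOption("max.print")  # 1000000
options(old)
```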
Why would you want all 900,000
On 01/07/2009 8:04 AM, saurav pathak wrote:
I am typing the following on the command prompt:
variab = read.csv(file.choose(), header=T)
variab
It lists 900,000 ( this is the total number of observations in variab )
minus 797124 observations and prompts the following message
[ reached
Yes Jim. Thanks. That's what I was looking for. My mistake letting [pos] block.
Cheers,
Mark
On Tue, Jun 30, 2009 at 8:04 PM, jim holtman jholt...@gmail.com wrote:
Not exactly sure what you want to count. Does this do what you want (made a
change in RunningCount)
SNIP
RunningCount = function
Hi,
I've just run an rcorr on some data in Spearman's mode and it's just
produced the following values;
[,1] [,2]
[1,] 1.00 -0.55
[2,] -0.55 1.00
n= 46
P
[,1] [,2]
[1,] 0
[2,] 0
I presume this means the p-value is lower than 0.5, but is there any
way of increasing the
Now it works, I modified one variable (xleg) in the function,
Thanks a lot!
Quoting jim holtman jholt...@gmail.com:
It appears that the legend is fixed in that location within the function.
You could modify the function to put the legend in some other location.
On Wed, Jul 1, 2009 at 3:07
Hi R-friends,
Attached is the SAS XPORT file that I have imported into R using following code
library(foreign)
mydata <- read.xport("C:\\ctf.xpt")
print(mydata)
I am trying to maximize logL in order to find Maximum Likelihood Estimate (MLE)
of 5 parameters (alpha1, beta1, alpha2, beta2, p) using NLM
James Allsopp wrote:
Hi,
I've just run an rcorr on some data in Spearman's mode and it's just
produced the following values;
[,1] [,2]
[1,] 1.00 -0.55
[2,] -0.55 1.00
n= 46
P
[,1] [,2]
[1,] 0
[2,] 0
I presume this means the p-value is lower than 0.5, but is there any
Hello all,
I have one question about how to increase the performance speed for running
multiple univariate Anova.
I have multiple observations for a group of subjects and want to run
univariate Anova using car::Anova with type 3 sums of squares. So, the current
implementation is using apply()
I think your problem is with plotting, not with naming.
Tell the list what kind of plot you're doing
(with example code, of course) and where you need
to see names on the plot.
(What do you have in mind when you say names for
the whole matrix? There are row names, and
column names, and
Hello All,
When I use the following lines of code to create a plot and add labels
with R-square values the labels have a superscripted R2.
library(lattice)
xyplot(PropHatchedNests$Phatched + PropHatchedNests$PropNests +
PropHatchedNests$meanHSI + PropHatchedNests$RelMeanEggsNest ~
My version would be
newDev <- function() { dev.new(); invisible(dev.cur()) }
I agree with Hadley that return() is redundant in this instance.
Using invisible() suppresses automatic printing of the returned value
when it is not being assigned to a variable, thus making it more like
Dear R users:
In my recent works, I compared the cumulative incidences among three
different treatment groups. The cuminc function (cmprsk package ) yielded a
graph (refer to figure 1) and a p value (p = 0.0007). I don’t know how to
interpret the meaning of the p value ( one p value and
On Wed, Jul 1, 2009 at 3:30 PM, Don MacQueen m...@llnl.gov wrote:
My version would be
newDev <- function() { dev.new(); invisible(dev.cur()) }
I agree with Hadley that return() is redundant in this instance. Using
invisible() suppresses automatic printing of the returned value when it is
On Wed, Jul 1, 2009 at 7:45 AM, Barry
Rowlingsonb.rowling...@lancaster.ac.uk wrote:
On Wed, Jul 1, 2009 at 3:30 PM, Don MacQueen m...@llnl.gov wrote:
My version would be
newDev <- function() { dev.new(); invisible(dev.cur()) }
I agree with Hadley that return() is redundant in this
Hi Madan,
You are trying to find the MLE of a binary mixture distribution. So, there are
constraints on the parameters, as you have indicated. The nlm() function
cannot handle constraints. I would recommend one of the following functions
(not necessarily in any order), all of which can
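As one hedged illustration (the mixture's component distributions are not shown in the thread, so a two-component gamma mixture is assumed here, with invented data), optim's L-BFGS-B method accepts the box constraints that nlm cannot:

```r
# Minimize the negative log-likelihood of a two-component mixture with
# parameters (alpha1, beta1, alpha2, beta2, p), subject to 0 < p < 1.
negLL <- function(par, x) {
  a1 <- par[1]; b1 <- par[2]; a2 <- par[3]; b2 <- par[4]; p <- par[5]
  -sum(log(p * dgamma(x, a1, b1) + (1 - p) * dgamma(x, a2, b2)))
}
set.seed(1)
x <- c(rgamma(200, 2, 1), rgamma(200, 8, 2))  # simulated mixture data
fit <- optim(c(1, 1, 5, 1, 0.5), negLL, x = x, method = "L-BFGS-B",
             lower = rep(1e-6, 5), upper = c(Inf, Inf, Inf, Inf, 1 - 1e-6))
fit$par
```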
I deal with a huge amount of Biology data stored in different databases.
The databases belonging to the Bioconductor organization can be accessed through
Bioconductor packages.
Unluckily some useful data is stored in databases like, for instance, miRDB,
miRecords, etc ... which offer just an
Maura,
Try the RCurl package, specifically the functions getURL and getForm.
Greg
mau...@alice.it wrote:
I deal with a huge amount of Biology data stored in different databases.
The databases belonging to the Bioconductor organization can be accessed through
Bioconductor packages.
Unluckily some
Hi Maura --
mau...@alice.it wrote:
I deal with a huge amount of Biology data stored in different databases.
The databases belonging to the Bioconductor organization can be accessed through
Bioconductor packages.
Unluckily some useful data is stored in databases like, for instance, miRDB,
Dear all,
When doing nonlinear regression, we normally use nls if the errors e_i are iid normal.
I learned that if the form of the variance of e_i is not completely known,
we can use the IRWLS (Iteratively Reweighted Least Squares)
algorithm:
for example, var(e_i) = g0 + g1*x_i1
1. Start with w_i = 1
2.
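The iteration can be written out as a hedged sketch (data, variance-model coefficients, and all names are invented; gnls() in nlme is the more robust route):

```r
# IRWLS sketch for the variance model var(e_i) = g0 + g1*x_i.
set.seed(1)
x <- runif(100, 1, 10)
y <- 2 + 3 * x + rnorm(100, sd = sqrt(1 + 0.5 * x))  # heteroscedastic errors
w <- rep(1, length(x))                 # 1. start with w_i = 1
for (iter in 1:5) {
  fit <- lm(y ~ x, weights = w)        # 2. weighted least squares
  vfit <- lm(resid(fit)^2 ~ x)         # 3. model squared residuals as g0 + g1*x
  v <- pmax(fitted(vfit), 1e-6)        # estimated variances, kept positive
  w <- 1 / v                           # 4. reweight and iterate
}
coef(fit)
```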
No, that's made no difference, sorry.
Frank E Harrell Jr wrote:
James Allsopp wrote:
Hi,
I've just run an rcorr on some data in Spearman's mode and it's just
produced the following values;
[,1] [,2]
[1,] 1.00 -0.55
[2,] -0.55 1.00
n= 46
P
[,1] [,2]
[1,] 0
[2,]
Hi,
I have a data.frame that is date ordered by row number - earliest
date first and most current last. I want to create a couple of new
columns that show the max and min values from other columns *so far* -
not for the whole data.frame.
It seems this sort of question is really coming from
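For running ("so far") extremes down the rows, base R's cummax() and cummin() do exactly this (a sketch with an invented price column):

```r
# Cumulative max/min give the extremes seen so far at each row.
df <- data.frame(price = c(5, 3, 8, 6, 10, 2))
df$max_so_far <- cummax(df$price)
df$min_so_far <- cummin(df$price)
df
```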
I tried tseries::garch() and was getting a lot of false convergence, so I
tried fGarch::garchFit.
Looking a bit further I find that from fGarch::garchFit I get
..@ fit:List of 17
snip
.. ..$ convergence: int 1
.. ..$ message: chr "singular convergence (7)"
for any and all fits.
Has anyone using Linux KDE heard about RKWard? Is it good?
Is it better than emacs/ess? Any thoughts?
Ubirajara Alberton
Dear r-helpers,
This is a little bit more of a Windows problem than
an R problem, but ...
any idea how to query the *available* locales from
within R (or otherwise) on a Windows system? Teaching
in a Spanish-language setting and would like to do
something like
Hi All,
I would like to do double bootstrapping to estimate 95% CI coverage.
So, I need to estimate a 95% confidence interval from each
bootstrapped sample.
Since we don't have a closed form for the 95% CI, in order to get the 95% CI for each
sample, we need to use bootstrapping.
For outer
Hi all,
I'm using RScalapack library for parallelizing some heavy matrix
operations required by MCMC methods for spatio-temporal models. The
package reference manual (dated 2005) states that the library needs
LAM/MPI to work, but we have a Linux cluster with OpenMPI. We have found
Hi,
I am trying to calculate the volatility on a non-overlapping basis. Do you
know functions for a non-overlapping calculation?
That is, take the first 20 observations and apply the standard deviation, and then
take the next 20 observations and calculate the standard deviation.
I tried with function rollapply(), but it
On 01/07/2009 11:49 AM, Mark Knecht wrote:
Hi,
I have a data.frame that is date ordered by row number - earliest
date first and most current last. I want to create a couple of new
columns that show the max and min values from other columns *so far* -
not for the whole data.frame.
It seems
You are describing a generalized nonlinear least-squares estimation procedure.
This is implemented in the gnls() function in nlme package.
?gnls
Ravi.
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine
Hi all,
Could anybody point me to some latest productivity tools in R? I am
interested in speeding up my R programming and improving my efficiency
in terms of debugging and developing R programs.
I saw my friend has a R Console window which has automatic syntax
reminder when he types in the
I've asked about custom sorting before and it appears that -- in terms of a
user-defined order -- it can only be done either by defining a custom class
or using various tricks with order
Just wondering if anyone has a clever way to order vintages of the form
2002, 2003H1, 2003H2, 2004, 2005Q1,
See the by= argument.
On Wed, Jul 1, 2009 at 11:08 AM, Andriy Fetsun fet...@googlemail.com wrote:
Hi,
I am trying to calculate the volatility on a non-overlapping basis. Do you
know functions for a non-overlapping calculation?
That is, take the first 20 observations and apply the standard deviation, and
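Concretely, rollapply's by= argument advances the window by a fixed step; setting it equal to the window width gives non-overlapping blocks (a sketch with invented data):

```r
library(zoo)
set.seed(1)
Returns <- zoo(rnorm(100))
# by = 20 moves the window by its own width: non-overlapping blocks of 20.
vol <- rollapply(Returns, width = 20, FUN = sd, by = 20, align = "left")
length(vol)  # number of complete blocks
```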
I'm going to both UseR! (in Rennes) and DSC (in Copenhagen), and will
be blogging about the talks and other interesting things I learn here:
http://blog.revolution-computing.com/
# David Smith
--
David M Smith da...@revolution-computing.com
Director of Community, REvolution Computing
I am trying to set up a Grass project and need to set up the region so
that I can view the map. I can look at a map and find the lat/lon,
but the map projection is in UTM NAD83 WGS84 and I need to set the
eastings and northings. Is there a package that will help me
calculate this in R.
thanks
Hi,
I have a multiplot of 6 rows and 1 column. I need to draw vertical lines in
each plot. However, when I use abline(v=locator(1)$x), in some plots the line
only comes up for half the box, and it goes beyond the box in others. I suspect
this has something to do with the margins. Any help?
--
Rajesh.J
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Steve Jaffe
Sent: Wednesday, July 01, 2009 9:59 AM
To: r-help@r-project.org
Subject: [R] sorting question
I've asked about custom sorting before and it appears that -- in
It can be done without setting locales using chron:
library(chron)
as.Date(chron("1970-Jan-01", format = "Year-Month-Day"))
[1] 1970-01-01
On Wed, Jul 1, 2009 at 10:09 AM, Ben Bolker bbol...@gmail.com wrote:
Dear r-helpers,
This is a little bit more of a Windows problem than
an R problem, but
How are you creating the plots before adding the line? This sounds like you
may be mixing graphics types (creating the plots with grid graphics (lattice or
ggplot2) then using abline from the base graphics system) or using a plot
function that plays with the graphics settings and leaves them
Hi Michael,
Great topic - I hope to see others respond.
For me there are several big time savers with using R (on windows XP),
search them on google :
1) Tinn-R, for syntax highlighting.
2) RExcel package - for getting data from Excel. (BTW, for Excel, I also
recommend the ASAP Utilities)
3)
On Wed, Jul 1, 2009 at 9:39 AM, Duncan Murdoch murd...@stats.uwo.ca wrote:
On 01/07/2009 11:49 AM, Mark Knecht wrote:
Hi,
I have a data.frame that is date ordered by row number - earliest
date first and most current last. I want to create a couple of new
columns that show the max and min
This maps each string to one of the form yearQqtr at which point
you can sort them. Modify the mapping as necessary.
library(gsubfn)
dd <- c("2002", "2003H1", "2003H2", "2004", "2005Q1", "2005Q2")
gsubfn("H.|Q.|$", list(H1 = "Q1", H2 = "Q2", Q2 = "Q2", Q3 = "Q3", Q4 = "Q4",
"Q1"), dd)
[1] "2002Q1" "2003Q1" "2003Q2"
Emacs or X-emacs with ess (Emacs Speaks Statistics) is great on Linux and
Mac (can be the console you saw on Mac) for syntax highlight, programming
and debugging. I think there is a package to visualize the links between
functions in a package, but I don't know its name (if anybody knows it, I
Dear R-helpers,
I am running R version 2.9.1 on a Mac Quad with 32Gb of RAM running
Mac OS X version 10.5.6. With over 20Gb of RAM free (according to
the Activity Monitor) the following happens.
x <- matrix(rep(0, 6600^2), ncol = 6600)
# So far so good. But I need 3
Here's my code
library(sound)
q1 <- loadSample("path to wav")
q2 <- loadSample("path to wav")
q3 <- loadSample("path to wav")
m1 <- read.table("txt", header = FALSE)
m2 <- read.table("txt", header = FALSE)
m3 <- read.table("txt", header = FALSE)
layout(matrix(c(1, 2, 3, 4, 5, 6), 6, 1, byrow = TRUE))
par(mar = c(0.6, 4, 2, 4))
If you are coming to useR! next week, then you might want to check the
session on Workbenches:
http://www.agrocampus-ouest.fr/math/useR-2009/abstracts/schedule.html
Romain
On 07/01/2009 06:58 PM, Michael wrote:
Hi all,
Could anybody point me to some latest productivity tools in R? I am
I write about R every weekday at http://blog.revolution-computing.com
. In case you missed them, here are some articles from last month of
particular interest to R users.
http://bit.ly/tygLz announced the release of the foreach and
iterators packages on CRAN, for simple scalable parallel
Dear all,
A very basic terrain calculated as a matrix from Spatial Points Patterns:
#interpolate using the akima package
library(akima)
terrain <- interp(ppoints$x, ppoints$y, ppoints$marks, xo = x0, yo = y0, linear = FALSE)
class(terrain)
[1] "list"
class(terrain$x)  # these are the x-coord, i.e. [1...1000]
[1]
Great roundup - thank you David.
On Wed, Jul 1, 2009 at 8:48 PM, David M Smith
da...@revolution-computing.com wrote:
David
--
--
My contact information:
Tal Galili
Phone number: 972-50-3373767
FaceBook: Tal Galili
My Blogs:
David,
Using this mail I think I found a simple solution for something I
knew I was going to have to learn about.
Thanks,
Mark
On Wed, Jul 1, 2009 at 10:48 AM, David M
Smith da...@revolution-computing.com wrote:
I write about R every weekday at http://blog.revolution-computing.com
. In case
2009/7/1 miguel bernal mber...@marine.rutgers.edu
I think there is a package to visualize the links between
functions in a package, but I don't know its name (if anybody knows it, I
will love to know it).
reminds me of roxygen's callgraph (relies on graphviz), is that what you
meant?
... I saw my friend has an R Console window which has an automatic syntax
reminder when he types in the first few letters of an R command. ...
You might be thinking of JGR (Jaguar) at
http://jgr.markushelbig.org/JGR.html . This editor also prompts you with
function argument lists, including for
Hello,
I'm trying to vectorize some assignment statements using match(), but
can't seem to get it correct.
I have 2 data frames each with a key column of unique values. I want to
copy a column from one frame to another where the key values are the
same. The data frames are not the same
Dear Christopher,
Try this:
merge(x,y,all=TRUE)
HTH,
Jorge
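For the match()-based assignment the poster attempted, a sketch (all column and key names invented): match() returns the positions of the first frame's keys within the second frame's keys, and indexing with those positions copies the column, yielding NA where there is no match.

```r
df1 <- data.frame(key = c("a", "b", "c", "d"), v1 = 1:4)
df2 <- data.frame(key = c("d", "b", "x"), v2 = c(40, 20, 99))
# Copy v2 from df2 into df1 wherever the keys match; NA elsewhere.
df1$v2 <- df2$v2[match(df1$key, df2$key)]
df1$v2
```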
On Wed, Jul 1, 2009 at 2:51 PM, Hane, Christopher A
christopher.h...@ingenixconsulting.com wrote:
Hello,
I'm trying to vectorize some assignment statements using match(), but
can't seem to get it correct.
I have 2 data
seeliger.c...@epamail.epa.gov wrote:
snip
There is no IDE for R in the same way that there is for other languages --
something that supports integrated versioning, debugging and testing,
perhaps using Eclipse. Boy howdee, I hope someone knows otherwise.
There is a feature-rich R plug-in
#Highlight the text below (without the header)
# read the data in from clipboard
df <- do.call(data.frame, scan("clipboard", what = list(id = 0,
date = "", loctype = 0, haptype = 0)))
# split the data by date, sample 1 observation from each split, and rbind
sampled_df <- do.call(rbind, lapply(split(df,
Hi,
I have a data frame where one column is a list of lists. I would like to
subset the data frame based on membership of the lists in that column and be
able to 'denormalise' the data frame so that a row is duplicated for each of
its list elements. Example code follows:
# The data is read in in
REvolution Computing has just released three new packages for R to
CRAN (under the open-source Apache 2.0 license): foreach, iterators,
and doMC. Together, they provide a simple, scalable parallel computing
framework for R that lets you take advantage of your multicore or
multiprocessor
Hi,
(apologies for initial html posting)
I have a data frame where one column is a list of lists. I would like to
subset the data frame based on membership of the lists in that column and be
able to 'denormalise' the data frame so that a row is duplicated for each of
its list elements. Example
On Wed, Jul 1, 2009 at 2:10 PM, Sunil
Suchindran sunilsuchind...@gmail.com wrote:
#Highlight the text below (without the header)
# read the data in from clipboard
df <- do.call(data.frame, scan("clipboard", what = list(id = 0,
date = "", loctype = 0, haptype = 0)))
# split the data by date, sample 1
Hi all,
How could I set a timer in R, so that at fixed interval, the R program
will invoke some other functions to run some tasks?
Thank you very much!
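If blocking the session is acceptable, a plain loop with Sys.sleep is the simplest sketch (run_every is an invented helper, not a built-in; the tcltk approach in a later reply is non-blocking):

```r
# Run `task` every `interval` seconds, `times` times, blocking the session.
run_every <- function(task, interval, times = Inf) {
  i <- 0
  while (i < times) {
    task()
    Sys.sleep(interval)
    i <- i + 1
  }
}
run_every(function() cat("tick\n"), interval = 0.1, times = 3)
```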
Hi,
I am starting to play around with neural networks and noticed that there are
several packages on the CRAN website for neural networks (AMORE, grnnR,
neural, neuralnet, maybe more if I missed them).
Are any of these packages more well-suited for newbies to neural networks?
Are there any
On Wed, Jul 01, 2009 at 01:35:39PM -0400, miguel bernal wrote:
Emacs or X-emacs with ess (Emacs Speaks Statistics) is great on Linux and
Mac (can be the console you saw on Mac) for syntax highlight, programming
and debugging.
Also see
Thanks Roger. Your comments were very helpful. Unfortunately, each of
the 'groups' in this example are derived from the same set of data, two of
which were subsets-- so it is not that unlikely that the weighted medians
were the same in some cases.
This all leads back to an operation attempting
On 01/07/2009 1:26 PM, Mark Knecht wrote:
On Wed, Jul 1, 2009 at 9:39 AM, Duncan Murdoch murd...@stats.uwo.ca wrote:
On 01/07/2009 11:49 AM, Mark Knecht wrote:
Hi,
I have a data.frame that is date ordered by row number - earliest
date first and most current last. I want to create a couple of
James Allsopp wrote:
No, that's made no difference, sorry.
Sorry, I forgot to check the print method for rcorr. If P < 0.0001 it prints
as 0. To print under your control, print the object $P from the list
created by rcorr:
r <- rcorr(. . .)
r$P
Frank
Frank E Harrell Jr wrote:
James Allsopp
Just wanted to leave a note on this, after I got my new iMac (and
installed R64 from the ATT site) -- quantreg did run, after topping out
at whopping 12GB of swap space (MacOS X, at least, should theoretically
have as much swap space as there is space on the HD -- it will
dynamically increase
I use Windows. Thank you!
On Wed, Jul 1, 2009 at 12:53 PM, Eduardo Leoni leoni...@msu.edu wrote:
I think you are better off writing the R script and invoke it using a
OS specific tool. For Unix-like systems there is cron.
hth,
-e
On Wed, Jul 1, 2009 at 3:41 PM, Michael comtech@gmail.com
On Wed, Jul 1, 2009 at 8:41 PM, Michael comtech@gmail.com wrote:
Hi all,
How could I set a timer in R, so that at fixed interval, the R program
will invoke some other functions to run some tasks?
Use timer events in the tcltk package:
z <- function() { cat("Hello you!\n"); tcl("after", 1000, z) }
On Wed, Jul 1, 2009 at 12:54 PM, Duncan Murdoch murd...@stats.uwo.ca wrote:
On 01/07/2009 1:26 PM, Mark Knecht wrote:
On Wed, Jul 1, 2009 at 9:39 AM, Duncan Murdoch murd...@stats.uwo.ca
wrote:
On 01/07/2009 11:49 AM, Mark Knecht wrote:
Hi,
I have a data.frame that is date ordered by row
For another generic approach, you might be interested in the Reduce
function,
rolling <- function(x, window = seq_along(x), f = max) {
  Reduce(f, x[window])
}
x <- c(1:10, 2:10, 15, 1)
rolling(x)
# 15
rolling(x, 1:10)
# 10
rolling(x, 1:12)
# 10
Of course this is only part of the solution to the
It's not clear to me whether you are looking for an exploratory tool
or something more like formal inference. For the former, it seems
that estimating a few weighted quantiles would be quite useful. at
least
it is rather Tukeyesque. While I'm appealing to authorities, I can't
resist
Steve:
Are you running R64.app? If not, grab it from here:
http://r.research.att.com/R-2.9.0.pkg
(http://r.research.att.com/ under Leopard build) .
As far as I know (and I actually just tried it this morning), the
standard R 2.9.1 package off the CRAN website is the 32 bit version,
By the way, you'll probably have to reinstall some or all of your
packages (and dependencies) if you are using R64.app, probably
downgrading them in the process.
--j
Steve Ellis wrote:
Dear R-helpers,
I am running R version 2.9.1 on a Mac Quad with 32Gb of RAM running
Mac OS X version
Hi, thanks everyone for any help in advance.
I found myself dealing with tabular time-series data formatted with each row
like [ time stamp, ID, values ]. I made a small example:
X = data.frame(t=c(1,1,1,2,2,2,2,3,3,3,4,4,4,5,5),id =