Hello,
You have to convert y1 to class "Date" first, then do the date arithmetic.
The complete code would be:
dat<-read.table(text=" y1, flag
24-01-2016,S
24-02-2016,R
24-03-2016,X
24-04-2016,H
24-01-2016,S
24-11-2016,R
24-10-2016,R
24-02-2016,X
24-01-2016,H
24-11-2016,S
24-02-2016,R
24-10-2016,X
",sep=",",header=TRUE)
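To make the date version concrete, here is a minimal, self-contained sketch of the idea: parse y1 with as.Date (the sample uses day-month-year), after which the same two steps from the earlier answer work, because arithmetic on Date objects yields day differences. The rows below are taken from the sample data; the column names are written without leading spaces for simplicity.

```r
dat <- read.table(text = "y1,flag
24-01-2016,S
24-02-2016,R
24-03-2016,X
24-04-2016,H
24-01-2016,S
24-11-2016,R
", sep = ",", header = TRUE)

# Convert to class "Date" before doing any date arithmetic
dat$y1 <- as.Date(dat$y1, format = "%d-%m-%Y")

# 1) group id: increments at every "S" row
dat$x1 <- cumsum(dat$flag == "S")

# 2) days elapsed since the first row of each group
dat$z2 <- unlist(tapply(dat$y1, dat$x1, function(y) y - y[1]))
```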
Thank you Rui,
It worked!
What if the first variable is in date format, like the following?
dat<-read.table(text=" y1, flag
24-01-2016,S
24-02-2016,R
24-03-2016,X
24-04-2016,H
24-01-2016,S
24-11-2016,R
24-10-2016,R
24-02-2016,X
24-01-2016,H
24-11-2016,S
24-02-2016,R
24-10-2016,X
24-03-2016,H
",sep=",",header=TRUE)
Hello,
You must run the code that creates x1 first (part 1), then part 2).
I've tested with your data and all went well; the result is the following.
> dput(dat)
structure(list(y1 = c(39958L, 40058L, 40105L, 40294L, 40332L,
40471L, 40493L, 40533L, 40718L, 40771L, 40829L, 40892L, 41056L,
41110L,
Rui,
Thank you!
The second one gave me NULL.
dat$z2 <- unlist(tapply(dat$y1, dat$x1, function(y) y - y[1]))
dat$z2
NULL
On Wed, Oct 12, 2016 at 3:34 PM, Rui Barradas wrote:
Hello,
Seems simple:
# 1)
dat$x1 <- cumsum(dat$flag == "S")
# 2)
dat$z2 <- unlist(tapply(dat$y1, dat$x1, function(y) y - y[1]))
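These two steps can be checked end-to-end on the first few rows of the sample data (a self-contained sketch; the rows are taken from Val's post, with the column names written without leading spaces):

```r
dat <- read.table(text = "y1,flag
39958,S
40058,R
40105,X
40294,H
40332,S
40471,R
", sep = ",", header = TRUE)

# 1) a new group starts every time flag == "S"
dat$x1 <- cumsum(dat$flag == "S")

# 2) offset of y1 from the first value of its group
dat$z2 <- unlist(tapply(dat$y1, dat$x1, function(y) y - y[1]))
```

Because the groups are contiguous, unlist() returns z2 in the original row order.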
Hope this helps,
Rui Barradas
On 12-10-2016 21:15, Val wrote:
Hi all,
I have a data set like
dat<-read.table(text=" y1, flag
39958,S
40058,R
40105,X
40294,H
40332,S
40471,R
40493,R
40533,X
40718,H
40771,S
40829,R
40892,X
41056,H
41110,S
41160,R
41222,R
41250,R
41289,R
41324,X
41355,R
41415,X
41562,X
41562,H
41586,S
",sep=",",header=TRUE)
I'm reading the paper "Predictive modeling with high-dimensional data
streams: an on-line variable selection approach" by McWilliams and
Montana, which introduces an interesting algorithm for dynamically selecting
variables in high-dimensional data streams.
Does anyone know if this (or a similar
To: William Dunlap
Cc: r-help@r-project.org
Subject: Re: [R] Incremental ReadLines
Hi Bill,
Thank you so much for your suggestions. I will try to alter my code.
Regarding the even shorter solution outside the loop: it looks
good, but my problem is that not all observations have
Date: Wed, 13 Apr 2011 10:57:58 -0700
From: frederikl...@gmail.com
To: r-help@r-project.org
Subject: Re: [R] Incremental ReadLines
Hi there,
I am having a similar problem with reading in a large text file with around
550.000 observations, each with 10 to 100 lines of description. I am trying
to parse it in R but I have trouble with the size of the file. It seems
like it is slowing down dramatically at some point. I would be
Date: Thu, 14 Apr 2011 11:57:40 -0400
Subject: Re: [R] Incremental ReadLines
From: frederikl...@gmail.com
To: marchy...@hotmail.com
CC: r-help@r-project.org
Hi Mike,
Thanks for your comment.
I must admit that I am very new to R and although
[see below]
From: Frederik Lang [mailto:frederikl...@gmail.com]
Sent: Thursday, April 14, 2011 12:56 PM
To: William Dunlap
Cc: r-help@r-project.org
Subject: Re: [R] Incremental ReadLines
Hi Bill,
Thank you so much for your suggestions. I will try and alter my
code
Hi All,
I am currently working on some time series data; I know I can use a
LOESS/ARIMA model.
The data is written to a vector of length 1000 that acts as a queue,
updating every 15 minutes:
the oldest value pops out while the new value is pushed into the vector.
I can rerun the
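The fixed-length queue described above can be sketched in a few lines of base R: drop the oldest element and append the newest, so the vector length stays constant.

```r
# Drop the oldest element, append the newest; length stays constant
update_queue <- function(q, new_val) c(q[-1], new_val)

q <- rnorm(1000)           # current window of observations
q <- update_queue(q, 0.5)  # one 15-minute update
```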
Gene,
You might want to look at function read.csv.ffdf from package ff, which can read
large csv files into an ffdf object. That's a kind of data.frame which is stored
on disk, resp. in the file-system cache. Once you subset part of it, you get
a regular data.frame.
Jens Oehlschlägel
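A minimal sketch of that workflow, assuming the third-party ff package is installed and a large file named big.csv exists (both are assumptions for illustration, not part of the original message):

```r
library(ff)  # third-party package; install.packages("ff") first

# Read the csv into an on-disk ffdf object instead of RAM
big <- read.csv.ffdf(file = "big.csv", header = TRUE)

# Subsetting an ffdf returns an ordinary in-memory data.frame
first_rows <- big[1:10, ]
```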
--
If the headers all start with the same letter, say "A", and the data
lines contain only numbers, then just use
read.table(..., comment.char = "A")
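A quick illustration of that trick (the inline data is made up for the example): comment.char makes read.table treat everything from the given character onward as a comment, so lines that start with it are skipped entirely.

```r
txt <- "ALPHA,BETA,GAMMA
1,2,3
4,5,6
ANOTHER HEADER
7,8,9"

# Lines beginning with "A" are discarded as comments
dat <- read.table(text = txt, sep = ",", comment.char = "A")
```

This is safe here only because the data lines contain nothing but numbers and commas, so the comment character can never appear inside a record.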
On Mon, Nov 2, 2009 at 2:03 PM, Gene Leynes gleyne...@gmail.com wrote:
I've been trying to figure out how to read in a large file for a few days
now, and after extensive research I'm still not sure what to do.
I have a large comma delimited text file that contains 59 fields in each
record.
There is also a header every 121 records
Hi Gene,
Rather than using R to parse this file, have you considered using either
grep or sed to pre-process the file and then read it in?
It looks like you just want lines starting with numbers, so something like
grep '^[0-9]' thefile.csv > otherfile.csv
should be much faster, and then
James,
I think those are Unix commands? I'm on Windows, so that's not an option
(for now).
Also, the suggestions posed by Duncan and Phil seem to be working. Thank you
so much; such a simple thing to add the "r" or "rt" to the file connection.
I read about blocking, but I didn't imagine that it
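For Windows users without grep, the same pre-filter can be done in base R with readLines and grepl (a sketch; the inline sample stands in for the real file, and the digit-at-start pattern mirrors James's example):

```r
# In practice: lines <- readLines("thefile.csv")
lines <- c("A HEADER LINE", "1,2,3", "4,5,6")

# Keep only lines that start with a digit, then parse the survivors
keep <- grepl("^[0-9]", lines)
dat  <- read.csv(text = paste(lines[keep], collapse = "\n"), header = FALSE)
```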