There's a package called "pivottabler" which exports PivotTable:
http://pivottabler.org.uk/reference/PivotTable.html .
Duncan Murdoch
On 30/09/2023 7:11 a.m., John Kane wrote:
To follow up on Rui Barradas's post, I do not think PivotTable is an R
command.
You may be thinking of the "pivot_longer" and "pivot_wider" functions in
the {tidyr} package.
-Original Message-
From: R-help On Behalf Of John Kane
Sent: Saturday, September 30, 2023 7:11 AM
To: Rui Barradas
Cc: Paul Bernal ; R
Subject: Re: [R] Grouping by Date and showing count of failures by date
To follow up on Rui Barradas's post, I do not think PivotTable is an R
command.
You may be thinking of the "pivot_longer" and "pivot_wider" functions in
the {tidyr} package which is part of {tidyverse}.
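For readers landing here from a search, a minimal {tidyr} sketch of the two verbs (the data frame and column names are invented for illustration):

```r
library(tidyr)

wide <- data.frame(id = 1:2, x = c(1, 2), y = c(3, 4))

# wide -> long: one row per (id, variable) pair
long <- pivot_longer(wide, cols = c(x, y),
                     names_to = "var", values_to = "value")

# long -> wide: back to the original columns
wide2 <- pivot_wider(long, names_from = var, values_from = value)
```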
On Sat, 30 Sept 2023 at 07:03, Rui Barradas wrote:
On 29/09/2023 at 21:29, Paul Bernal wrote:
Dear friends,
Hope you are doing great. I am attaching the dataset I am working with
because, when I tried to dput() it, I was not able to copy the entire
result from dput(), so I apologize in advance for that.
I am interested in creating a column named Failure_Date_Period that has the
FAILDATE b
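The attachment isn't available here, so as a hypothetical sketch (the `dat` data frame and its `FAILDATE` column are stand-ins for the poster's data), one base-R way to derive a period column and count failures per period:

```r
# Hypothetical stand-in for the poster's data: one row per failure
dat <- data.frame(FAILDATE = as.Date(c("2023-01-05", "2023-01-20", "2023-02-11")))

# Derive the period (here: calendar month) from the failure date
dat$Failure_Date_Period <- format(dat$FAILDATE, "%Y-%m")

# Count failures per period
counts <- as.data.frame(table(dat$Failure_Date_Period))
names(counts) <- c("Period", "Failures")
```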
4 2  8 pass pass
5 2 10 pass pass
6 3 19 fail fail
7 3 13 pass fail
It would be easier for us if you used dput() to share your data, but thanks
for the minimal example!
Chris
- Original Message -
On Sat, 21 Mar 2020 20:01:30 -0700
Thomas Subia via R-help wrote:
> Serial_test is a pass, when all of the Meas_test are pass for a given
> serial. Else Serial_test is a fail.
Use by/tapply in base R or dplyr::group_by if you prefer tidyverse
packages.
--
Best regards,
Ivan
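Ivan's base-R suggestion as a short sketch (data re-typed from the thread): a serial passes only when all of its measurements pass.

```r
df <- data.frame(
  Serial    = c(1, 1, 2, 2, 2),
  Meas_test = c("fail", "pass", "pass", "pass", "pass")
)

# ave() applies the function within each Serial and recycles the result
df$Serial_test <- ave(df$Meas_test, df$Serial,
                      FUN = function(x) if (all(x == "pass")) "pass" else "fail")
```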
Colleagues,
Here is my dataset.
Serial Measurement Meas_test Serial_test
     1          17      fail        fail
     1          16      pass        fail
     2          12      pass        pass
     2           8      pass        pass
     2          10      pass        pass
Rui
Your first code worked just fine.
Jeff
-Original Message-
From: Rui Barradas
Sent: Saturday, May 26, 2018 8:30 AM
To: reichm...@sbcglobal.net; 'R-help'
Subject: Re: [R] Grouping by 3 variable and renaming groups
Rui
That did it
Jeff
-Original Message-
From: Rui Barradas
Sent: Saturday, May 26, 2018 8:23 AM
To: reichm...@sbcglobal.net; 'R-help'
Subject: Re: [R] Grouping by 3 variable and renaming groups
Hello,
Sorry, but I think my first answer is wrong.
You probably want something along the lines of
sp <- split(priceStore_Grps, priceStore_Grps$StorePC)
res <- lapply(seq_along(sp), function(i){
  sp[[i]]$StoreID <- paste("Store", i, sep = "_")
  sp[[i]]
})
res <- do.call(rbind, res)
row.names(res) <- NULL
Hello,
See if this is it:
priceStore_Grps$StoreID <- paste("Store",
seq_len(nrow(priceStore_Grps)), sep = "_")
Hope this helps,
Rui Barradas
On 5/26/2018 2:03 PM, Jeff Reichman wrote:
ALCON
I'm trying to figure out how to rename groups in a data frame after grouping
by selected variables. I am using the dplyr library to group my data by 3
variables as follows:
# group by lat (StoreX)/long (StoreY)
priceStore <- LapTopSales[,c(4,5,15,16)]
priceStore <- priceStore[complete.cases(priceStore), ]
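Since the post mentions dplyr, here is one hedged way to number groups after `group_by()` using `cur_group_id()` (dplyr >= 1.0); the toy columns stand in for the real StoreX/StoreY/StorePC data:

```r
library(dplyr)

priceStore <- data.frame(
  StoreX  = c(1, 1, 2),
  StoreY  = c(5, 5, 6),
  StorePC = c("A", "A", "B")
)

# Each distinct (StoreX, StoreY, StorePC) combination gets one StoreID
priceStore <- priceStore %>%
  group_by(StoreX, StoreY, StorePC) %>%
  mutate(StoreID = paste("Store", cur_group_id(), sep = "_")) %>%
  ungroup()
```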
findInterval
Cheers
Petr
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Shivi82
> Sent: Thursday, June 18, 2015 9:22 AM
> To: r-help@r-project.org
> Subject: [R] Grouping in R
Hi All,
I am working on a data where the total row count is 25+ and I have approx.
20 variables. One of the variables on which I need to summarize the data is
Consignor, i.e. seller name.
Now the issue here is, after deleting all the duplicate names, I still have
55000 unique customer names and I am not
Reading the Intro, as Bert suggests, would likely solve some of your problems.
If you think about how many combinations it would take, using only one variable
from each group in any one model, you would see that the number of individual
models (12) is not so onerous that you couldn’t specify the
Unless there is reason to keep the conversation private, always reply
to the list. How will anyone else know that my answer wasn't
satisfactory?
1. I don't intend to go through your references. A minimal
reproducible example of what you wish to do and what you tried would
help.
2. Have you read "An Introduction to R"?
Have you read "An Introduction to R" (or other online tutorial)? If
not, please do so before posting further here. It sounds like you are
missing very basic knowledge -- on factors -- which you need to learn
about before proceeding.
?factor
gives you the answer you seek, I believe.
Cheers,
Bert
Dear all,
I am trying to run a GLMM following the procedure described by Rhodes et al.
(Ch. 21) in the Zuur book Mixed effects models and extensions in R . Like in
his example, I have four "sets" of explanatory variables:
1. Land use - 1 variable, factor (forest or agriculture)
2. Location - 1
You need to re-think. What you said is nonsense. Use an appropriate
clustering algorithm.
(a can be near b; b can be near c; but a is not near c, using "near" =
closer than threshold)
Cheers,
Bert
Bert Gunter
Genentech Nonclinical Biostatistics
(650) 467-7374
"Data is not information. Informati
Hello,
I'm looking for a function that groups elements below a certain distance
threshold, based on a distance matrix. In other words, I'd like to group
samples without using a standard clustering algorithm on the distance matrix.
For example, let the distance matrix be :
A B C
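Bert's suggestion, as a short base-R sketch: single-linkage `hclust()` cut at the threshold height groups "chained" neighbours (a near b, b near c puts a and c together). The toy points are invented; any precomputed `dist` object works the same way.

```r
pts <- c(0, 1, 2, 10)              # toy 1-D sample
hc  <- hclust(dist(pts), method = "single")
groups <- cutree(hc, h = 1.5)      # merge anything closer than 1.5
```

Here 0, 1, 2 chain into one group even though 0 and 2 are 2 apart, which is exactly the behaviour the threshold definition implies.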
On 01/20/14, 14:27 , jim holtman wrote:
> Check out the use of the 'local' function:
True - have completely forgotten the "local" function.
Thanks,
Rainer
Check out the use of the 'local' function:
> gc()
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 199420 10.7 407500 21.8 35 18.7
Vcells 308004 2.4 786432 6.0 786424 6.0
> result <- local({
+ a <- rnorm(100) # big objects
+ b <- rnorm(100)
+ mean(a + b)
+ })
Hi
I would like to group commands, so that after a group of commands has
been executed, the variables defined in that group are automatically
deleted.
My reasoning: I have a longer script which is used to load data, do
analysis and plot graphs, all pa
Hi Jake.
Sorry, I misunderstood about what you wanted.
Instead of this:
lapply(split(indx,(indx-1)%%n+1),function(i) mat1[,i])
If I use:
res1<- lapply(split(indx,(indx-1)%/%n+1),function(i) mat1[,i])
#or
lapply(split(indx, as.numeric(gl(ncol(mat1), n, ncol(mat1)))),
       function(i) mat1[,i])
Hi,
May be this helps:
set.seed(24)
mat1<- matrix(sample(1:60,30*24,replace=TRUE),ncol=24)
colnames(mat1)<- rep(c("O","H","L","C"),6)
indx<-seq_along(colnames(mat1))
n<- length(unique(colnames(mat1)))
res<- lapply(split(indx,(indx-1)%%n+1),function(i) mat1[,i])
lapply(res,head,2)
#$`1`
# O
Arun caught my attention that I made a mistake with the example data set.
I am sending now the correct one, with the same text explaining my problem.
Sorry, all of you, for the confusion.
I have a very large data frame (more than 5 million lines) as below (dput
example at the end of mail):
Station Antenna Tag
Hello all,
I have a very large data frame (more than 5 million lines) as below (dput
example at the end of mail):
Station Antenna Tag DateTime            Power Events
      1       2 999 22/07/2013 11:00:21    17      1
      1       2 999 22/07/2013 11:33:47    31      1
      1       2 999 22/0
Hello all,
I have a very large data frame (more than 5 million lines) as below (dput
example at the end of mail):
  Station Antenna Tag DateTime            Power Events
1       1       2 999 22/07/2013 11:00:21    17      1
2       1       2 999 22/07/2013 11:33:47    31      1
3       1       2
1. Please cc to the list, as I have here, unless your comments are off topic.
2. Use dput() (?dput) to include **small** amounts of data in your
message, as attachments are generally stripped from r-help.
3. I have no experience with itemsets or the arules package, but a
quick glance at the docs t
I **suggest** that you explain what you wish to accomplish using a
reproducible example rather than telling us what packages you think
you should use. I believe you are making things too complicated; e.g.
what do you mean by "frequent patterns"? Moreover, "basket format" is
rather unclear -- and m
I have a data in the following form :
CIN TRN_TYP
90799541
90799542
90799543
90799544
90799545
90799544
90799545
90799546
90799547
90799548
90799549
90799549
..
..
..
there are 100 types of C
# V2 V3
#1 c 0.9
#
#$b
# V2 V3
#2 x 0.8
#3 z 0.5
#
#$c
# V2 V3
#4 y 0.9
#5 x 0.7
#6 z 0.6
A.K.
- Original Message -
From: Nuri Alpay Temiz
To: R-help@r-project.org
Cc:
Sent: Tuesday, January 15, 2013 12:10 PM
Subject: [R] grouping elements of a data frame
On Jan 15, 2013, at 9:10 AM, Nuri Alpay Temiz wrote:
> Hi everyone,
>
> I have a question on selecting and grouping elements of a data frame. For
> example:
>
> A.df<- [ a c 0.9
> b x 0.8
> b z 0.5
> c y 0.9
> c x 0.7
> c z 0.6]
Tha
Hi everyone,
I have a question on selecting and grouping elements of a data frame. For
example:
A.df<- [ a c 0.9
b x 0.8
b z 0.5
c y 0.9
c x 0.7
c z 0.6]
I want to create a list of a data frame that gives me the unique values of
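Reconstructed from the thread (the V1/V2/V3 column names follow A.K.'s output earlier in this thread, since the original post's layout is garbled): `split()` by the first column gives a list with one data frame per unique value.

```r
A.df <- data.frame(V1 = c("a", "b", "b", "c", "c", "c"),
                   V2 = c("c", "x", "z", "y", "x", "z"),
                   V3 = c(0.9, 0.8, 0.5, 0.9, 0.7, 0.6))

# One data frame per level of V1: $a, $b, $c
parts <- split(A.df[, c("V2", "V3")], A.df$V1)
```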
Thank you Rui,
I am trying to create a column in the data file turtlehatch.csv
Saludos, Jean
--
View this message in context:
http://r.789695.n4.nabble.com/Grouping-distances-tp4632985p4632989.html
Sent from the R help mailing list archive at Nabble.com.
Hello,
It's easy to create a new column. Since you haven't said where nor the
type of data structure you are using, I'll try to answer both.
Suppose that 'x' is a matrix. Then
newcolumn <- newvalues
x2 <- cbind(x, newcolumn) # new column added to x, result in x2
Suppose that 'y' is a data.frame.
Hi R-listers,
I am trying to group my HTL data, this is a column of data of "Distances to
the HTL" data = turtlehatch. I would like to create an Index of distances
(0-5m, 6-10, 11-15, 16-20... up to 60). And then create a new file with this
HTLIndex in a column.
So far I have gotten this far:
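A hedged sketch (the turtlehatch.csv file isn't available, so the distances below are stand-ins): `cut()` bins the distances into 5 m classes, and the result can be written out as a new column.

```r
dist_htl <- c(2, 7, 14, 33, 59)            # stand-in distances to the HTL

# 5 m bins from 0 to 60; include.lowest keeps a 0 m distance in the first bin
HTLIndex <- cut(dist_htl, breaks = seq(0, 60, by = 5), include.lowest = TRUE)

out <- data.frame(dist_htl, HTLIndex)
# write.csv(out, "turtlehatch_indexed.csv", row.names = FALSE)
```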
0 0 0 3 0 0 0 0
> makegroup2(df$begin,df$end)
[1] 1 2 3 0 0 2 3 0 0 0 3 0 0 0 0
A. K.
- Original Message -
From: Sarah Goslee
To: g...@asu.edu
Cc: "r-help@r-project.org"
Sent: Tuesday, May 8, 2012 2:33 PM
Subject: Re: [R] grouping function
Hi,
On Tue, May 8, 2012 at 2:17 PM,
>> makegroup1(df$begin,df$end)
> [1] 3 3 3 0 0 3 3 0 0 0 3 0 0 0 0
>> makegroup2(df$begin,df$end)
> [1] 1 2 3 0 0 2 3 0 0 0 3 0 0 0 0
>
>
> A. K.
>
>
>
>
> - Original Message -
> From: Sarah Goslee
> To: g...@asu.edu
> Cc: "r-help@r-p
Hi,
On Tue, May 8, 2012 at 2:17 PM, Geoffrey Smith wrote:
> Hello, I would like to write a function that makes a grouping variable for
> some panel data . The grouping variable is made conditional on the begin
> year and the end year. Here is the code I have written so far.
>
> name <- c(rep('F
Hello, I would like to write a function that makes a grouping variable for
some panel data . The grouping variable is made conditional on the begin
year and the end year. Here is the code I have written so far.
name <- c(rep('Frank',5), rep('Tony',5), rep('Edward',5));
begin <- c(seq(1990,1994),
Thanks a ton!
It was weird because I expected the ordering to be preserved by default.
Anyway, your workaround along with Weidong's method are both good
solutions.
On Wed, Apr 4, 2012 at 12:10 PM, Berend Hasselman wrote:
>
> On 04-04-2012, at 07:15, Ashish Agarwal wrote:
>
> > Yes. I was missing th
On 04-04-2012, at 07:15, Ashish Agarwal wrote:
> Yes. I was missing the DROP argument.
> But now the problem is splitting is causing some weird ordering of groups.
Why weird?
> See below:
>
> DF <- read.table(text="
> Houseid,Personid,Tripid,taz
> 1,1,1,4
> 1,1,2,7
> 2,1,1,96
> 2,1,2,4
> 2,1,3
Yes. I was missing the DROP argument.
But now the problem is splitting is causing some weird ordering of groups.
See below:
DF <- read.table(text="
Houseid,Personid,Tripid,taz
1,1,1,4
1,1,2,7
2,1,1,96
2,1,2,4
2,1,3,2
2,2,1,58
3,1,5,7
", header=TRUE, sep=",")
aa <- split(DF, DF[, 1:2], drop=TRUE)
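Why the order looks "weird", sketched on a stripped-down version of the same data: `split()` on two columns orders the groups by the interaction of the factors (first key varying fastest), not by order of appearance, so group 2.2 sorts after all the Personid-1 groups.

```r
DF <- data.frame(Houseid  = c(1, 1, 2, 2, 2, 2, 3),
                 Personid = c(1, 1, 1, 1, 1, 2, 1))

aa <- split(DF, DF[, c("Houseid", "Personid")], drop = TRUE)
names(aa)
```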
Hello,
Ashish Agarwal wrote
>
> I have a dataframe imported from csv file below:
>
> Houseid,Personid,Tripid,taz
> 1,1,1,4
> 1,1,2,7
> 2,1,1,96
> 2,1,2,4
> 2,1,3,2
> 2,2,1,58
>
> There are three groups identified based on the combination of first and
> second columns. How do I split this data
how about
split(inpfil, paste(inpfil[,1],inpfil[,2],sep=','))
Weidong Gu
On Tue, Apr 3, 2012 at 6:42 PM, Ashish Agarwal
wrote:
> I have a dataframe imported from csv file below:
>
> Houseid,Personid,Tripid,taz
> 1,1,1,4
> 1,1,2,7
> 2,1,1,96
> 2,1,2,4
> 2,1,3,2
> 2,2,1,58
>
> There are three gro
I have a dataframe imported from csv file below:
Houseid,Personid,Tripid,taz
1,1,1,4
1,1,2,7
2,1,1,96
2,1,2,4
2,1,3,2
2,2,1,58
There are three groups identified based on the combination of first and
second columns. How do I split this data frame?
I tried
aa <- split(inpfil, inpfil[,1:2])
but it
Please take a look at my first reply to you:
ave(y, findInterval(y, quantile(y, c(0.33, 0.66))))
Then read ?ave for an explanation of the syntax. ave takes two
vectors, the first being the data to be averaged, the second being an
index to split by. You don't want to use split() here.
Michael
On 03-04-2012, at 20:21, Val wrote:
> Hi All,
>
> On the same data points
> x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
>
> I want to have have the following output as data frame
>
> x     group  group mean
> 46      1      42.3
> 125     2      89.6
> 36      1      42.3
>
On 03-04-2012, at 21:02, Val wrote:
>
>
> On Tue, Apr 3, 2012 at 2:53 PM, Berend Hasselman wrote:
>
> On 03-04-2012, at 20:21, Val wrote:
>
> > Hi All,
> >
> > On the same data points
> > x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
> >
> > I want to have have the following output as
On Tue, Apr 3, 2012 at 2:53 PM, Berend Hasselman wrote:
>
> On 03-04-2012, at 20:21, Val wrote:
>
> > Hi All,
> >
> > On the same data points
> > x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
> >
> > I want to have have the following output as data frame
> >
> > x group group mean
I did look at it; the result is below:
x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
#lapply( split(x, cut(x, quantile(x, prob=c(0, .333, .66 ,1)) ,
include.lowest=TRUE) ), mean)
ave( split(x, cut(x, quantile(x, prob=c(0, .333, .66 ,1)) ,
include.lowest=TRUE) ), mean)
> ave( split(x, cut(
On Tue, Apr 03, 2012 at 02:21:36PM -0400, Val wrote:
> Hi All,
>
> On the same data points
> x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
>
> I want to have have the following output as data frame
>
> x     group  group mean
> 46      1      42.3
> 125     2      89.6
> 36
Hi All,
On the same data points
x=c(46, 125 , 36 ,193, 209, 78, 66, 242 , 297,45 )
I want to have the following output as a data frame
x     group  group mean
46      1      42.3
125     2      89.6
36      1      42.3
193     3      235.25
209     3      235.25
78      2
On Apr 3, 2012, at 10:11 AM, Val wrote:
> David W and all,
>
> Thank you very much for your help.
>
> Here is the final output that I want in the form of data frame. The
> data frame should contain x, group and group_ mean in the following
> way
>
> x group group mean
> 46 1
David W and all,
Thank you very much for your help.
Here is the final output that I want in the form of data frame. The data
frame should contain x, group and group_ mean in the following way
x     group  group mean
46      1      42.3
125     2      89.6
36      1      42.3
193
versity
College Station, TX 77843-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of R. Michael Weylandt
Sent: Tuesday, April 03, 2012 8:32 AM
To: Val
Cc: r-help@r-project.org
Subject: Re: [R] grouping
On Apr 3, 2012, at 9:32 AM, R. Michael Weylandt wrote:
Use cut2 as I suggested and David demonstrated.
Agree that Hmisc::cut2 is extremely handy and I also like the fact
that the closed ends of intervals are on the left side (which is not
the same behavior as cut()), which has the other
On Tue, Apr 03, 2012 at 09:31:29AM -0400, Val wrote:
> Thank you all (David, Michael, Giovanni) for your prompt response.
>
> First there was a typo error for the group mean it was 89.6 not 87.
>
> For a small data set and few groupings I can use prob=c(0, .333, .66 ,1)
> to group in to three g
Use cut2 as I suggested and David demonstrated.
Michael
On Tue, Apr 3, 2012 at 9:31 AM, Val wrote:
> Thank you all (David, Michael, Giovanni) for your prompt response.
>
> First there was a typo error for the group mean it was 89.6 not 87.
>
> For a small data set and few groupings I can use p
Thank you all (David, Michael, Giovanni) for your prompt response.
First there was a typo error for the group mean it was 89.6 not 87.
For a small data set and few groupings I can use prob=c(0, .333, .66 ,1)
to group in to three groups in this case. However, if I want to extend the
number of g
Hi!
Maybe not the most elegant solution, but works:
for(i in seq(1, length(data) - (length(data) %% 3), 3)) {
  ifelse((length(data) - i) > 3,
         { print(sort(data)[c(i:(i+2))])
           print(mean(sort(data)[c(i:(i+2))])) },
         { print(sort(data)[c(i:length(data))])
           print(mean(sort(data)[c(i:length(data))])) })
}
Probably something along the following lines:
> x <- c( 46, 125 , 36 ,193, 209, 78, 66, 242 , 297 , 45)
> sorted <- c(36 , 45 , 46, 66, 78, 125,193, 209, 242, 297)
> tapply(sorted, INDEX = (seq_along(sorted) - 1) %/% 3, FUN = mean)
    0     1     2     3
 42.3  89.7 214.7 297.0
Ignoring the fact your desired answers are wrong, I'd split the
separating part and the group means parts into three steps:
i) quantile() can help you get the split points,
ii) findInterval() can assign each y to a group
iii) then ave() or tapply() will do group-wise means
Something like:
y <-
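Michael's three steps, spelled out on Val's data (no sorting needed, since `findInterval()` and `ave()` keep everything aligned with the original x):

```r
x <- c(46, 125, 36, 193, 209, 78, 66, 242, 297, 45)

breaks <- quantile(x, probs = c(1/3, 2/3))      # i)  split points
grp    <- findInterval(x, breaks) + 1           # ii) group of each x (1..3)
res    <- data.frame(x, group = grp,
                     group_mean = ave(x, grp))  # iii) group-wise means
```

This reproduces the means Val asked for: 42.3 for the low group, 89.6 for the middle one, 235.25 for the top one.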
On Apr 3, 2012, at 8:47 AM, Val wrote:
Hi all,
Assume that I have the following 10 data points.
x=c( 46, 125 , 36 ,193, 209, 78, 66, 242 , 297 , 45)
sort x and get the following
y= (36 , 45 , 46, 66, 78, 125,193, 209, 242, 297)
The methods below do not require a sorting step.
I want
Hi all,
Assume that I have the following 10 data points.
x=c( 46, 125 , 36 ,193, 209, 78, 66, 242 , 297 , 45)
sort x and get the following
y= (36 , 45 , 46, 66, 78, 125,193, 209, 242, 297)
I want to group the sorted data point (y) into equal number of
observation per group. In this ca
Perhaps cut.POSIXt (which is a generic so you can just call cut)
depending on the unstated form of your time object.
Michael
On Thu, Feb 9, 2012 at 12:15 PM, Abraham Mathew wrote:
> I have the following variable, time, which is a character variable and it's
> structured as follows.
>
>> head(as.
I have the following variable, time, which is a character variable and it's
structured as follows.
> head(as.character(dat$time), 30)
 [1] "00:00:01" "00:00:16" "00:00:24" "00:00:25" "00:00:25" "00:00:40"
 [7] "00:01:50" "00:01:54" "00:02:33" "00:02:43" "00:03:22"
[12] "00:03:31" "00:03:41" "00:03
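One hedged possibility along Michael's cut.POSIXt suggestion: paste the clock strings onto a dummy date, parse with `as.POSIXct()`, and let `cut()` do the grouping (here: 1-minute bins on the first three times).

```r
tm <- as.POSIXct(paste("1970-01-01", c("00:00:01", "00:00:16", "00:01:50")),
                 tz = "UTC")

# cut.POSIXt accepts interval strings like "1 min", "1 hour", "1 day"
bins <- cut(tm, breaks = "1 min")
```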
On Feb 5, 2012, at 9:54 AM, jim holtman wrote:
Is this what you are after:
x <- c(1327211358, 1327221999, 1327527296, 1327555433, 1327701042,
+ 1327761389, 1327780993, 1327815670, 1327822964, 1327897497,
1327897527,
+ 1327937072, 1327938300, 1327957589, 1328044466, 1328127921,
1328157588,
Is this what you are after:
> x <- c(1327211358, 1327221999, 1327527296, 1327555433, 1327701042,
+ 1327761389, 1327780993, 1327815670, 1327822964, 1327897497, 1327897527,
+ 1327937072, 1327938300, 1327957589, 1328044466, 1328127921, 1328157588,
+ 1328213951, 1328236836, 1328300276, 1328335936, 132
I have a list of numbers corresponding to timestamps, a sample of which follows:
c(1327211358, 1327221999, 1327527296, 1327555433, 1327701042,
1327761389, 1327780993, 1327815670, 1327822964, 1327897497, 1327897527,
1327937072, 1327938300, 1327957589, 1328044466, 1328127921, 1328157588,
1328213951,
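A sketch of one common grouping for such epoch timestamps (not necessarily what jim posted, since his reply is cut off above): convert the seconds to dates, then tabulate per day.

```r
x <- c(1327211358, 1327221999, 1327527296)   # first three values from the post

days <- as.Date(as.POSIXct(x, origin = "1970-01-01", tz = "UTC"))
counts <- table(days)
```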
Hi Julia,
sorry for the very late reply, your original email was posted while I was on
hiatus from R-help. I'm the author of the dynamicTreeCut package. I
recommend that you try using the "hybrid" method using the cutreeDynamic
function. What you observed is a known problem of the tree method (wh
On Sat, Aug 27, 2011 at 7:26 AM, Andra Isan wrote:
> Hi All,
>
> I have a data frame as follow:
>
> user_id time age location gender
> .
>
> and I learn a logistic regression to learn the weights (glm with family=
> (link = logit))), my response value is either zero or one. I would like to
>
Hi All,
I have a data frame as follow:
user_id time age location gender
.
and I fit a logistic regression to learn the weights (glm with family =
binomial(link = "logit")); my response value is either zero or one. I would
like to group the users based on user_id and time and see the y values and
Hi @ all,
both possibilities are working very fine.
Thanks a lot for the fast help!
Best Greetinx from the "Earth Eater" Geophagus
On Jul 20, 2011, at 10:42 AM, Geophagus wrote:
Hi @ all,
I have a question concerning the possibility of grouping the columns of a
matrix.
R groups the columns alphabetically.
What can I do to group the columns in my specifications?
Dear Earth Eater;
You can create a factor whose levels a
Hi @ all,
I have a question concerning the possibility of grouping the columns of a
matrix.
R groups the columns alphabetically.
What can I do to group the columns in my specifications?
The script is the following:
> #R-Skript: Anzahl xyz
>
> #Quelldatei einlesen
> b<-read.csv2("Z:/int/xyz.csv
untested because I don't have access to your data, but this should work.
b13.NEW <- b13[, c("Gesamt", "Wasser", "Boden", "Luft", "Abwasser",
"Gefährliche Abfälle", "nicht gefährliche Abfälle")]
Geophagus wrote:
>
> *Hi @ all,
> I have a question concerning the possibilty of grouping the
All the examples in 'nlme' are in "Grouped Data: distance ~ age | Subject"
format.
How do I "group" my data in "dolf" the same way the data "Orthodont" are
grouped.
> show(dolf)
  distance   age Subjectt Sex
1  6.83679 22.01       F1   F
2  6.63245 23.04       F1   F
3 11.58730 39.26       M2   M
adolfpf wrote:
>
> How do I "group" my data in "dolf" the same way the data "Orthodont" are
> grouped.
>
>> show(dolf)
>   distance   age Subjectt Sex
> 1  6.83679 22.01       F1   F
> 2  6.63245 23.04       F1   F
> 3 11.58730 39.26       M2   M
>
>
I know that many sample in that exc
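An untested sketch of the usual answer to this question: `nlme::groupedData()` attaches the grouping formula to the data, mirroring how Orthodont is stored (the values below are re-typed from the post; "Subjectt" is the column name as given).

```r
library(nlme)

dolf <- data.frame(distance = c(6.83679, 6.63245, 11.58730),
                   age      = c(22.01, 23.04, 39.26),
                   Subjectt = c("F1", "F1", "M2"),
                   Sex      = c("F", "F", "M"))

# Same "response ~ primary | grouping" form as "distance ~ age | Subject"
dolfGD <- groupedData(distance ~ age | Subjectt, data = dolf)
```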
Hi Jason,
Something along the lines of
with(Orange, table(cut(age, breaks = c(118, 664, 1004, 1372, 1582, Inf)),
                   cut(circumference, breaks = c(30, 58, 62, 115,
                                                 145, 179, 214))))
should get you started.
HTH,
Jorge
On Sat, Mar 5, 2011 at 5:38 PM, Jason Rupert wrote:
2011 3:38 PM
> To: R Project Help
> Subject: [R] Grouping data in ranges in table
>
> Working with the built in R data set Orange, e.g. with(Orange,
> table(age,
> circumference)).
>
>
> How should I go about about grouping the ages and circumferences in the
> follow
Working with the built in R data set Orange, e.g. with(Orange, table(age,
circumference)).
How should I go about about grouping the ages and circumferences in the
following ranges and having them display as such in a table?
age range:
118 - 664
1004 - 1372
1582
circumference range:
30-58
62-
Hi Steve,
Just test whether y is greater than the predicted y (i.e., your line).
## function using the model coefficients
f <- function(x) {82.9996 + (.5589 * x)}
## Find group membership
group <- ifelse(y > f(x), "A", "B")
Note that, depending on how accurate this needs to be, you will probably
Hi R-list,
I have a data set with plot locations and observations and want to label
them based on locations. For example, I have GPS information (x and y) as
follows:
> x
[1] -87.85092 -87.85092 -87.85092 -87.85093 -87.85093 -87.85093 -87.85094
[8] -87.85094 -87.85094 -87.85096 -87.85095 -87
Here is one solution; mine differs since there should be at least one
item in the range which would be itself:
tm gr
1 12345 1
2 42352 3
3 12435 1
4 67546 2
5 24234 2
6 76543 4
7 31243 2
8 13334 3
9 64562 3
10 64123 3
> d$ct <- ave(d$tm, d$gr, FUN = function(x){
+ # de
Does nobody have any idea?
I have already tried with tapply(d, gr, ...) but I have problems with the
choice of the function ... also I am not really sure if that is the right
direction with tapply ...
It'll be really great if somebody comes up with a new suggestion.
10x
sry,
new try:
tm <- c(12345,42352,12435,67546,24234,76543,31243,13334,64562,64123)
gr <- c(1,3,1,2,2,4,2,3,3,3)
d <- data.frame(tm, gr)
where tm are unix times and gr the grouping factor
i have a scalar, for example k=500
now i need to calculate for every row how many examples in the g
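My reading of the question, as a hedged sketch (jim's actual `ave()` answer is cut off further up, so this is a reconstruction, not his code): for each row, count how many tm values in the same gr group lie within k of that row's tm.

```r
tm <- c(12345, 42352, 12435, 67546, 24234, 76543, 31243, 13334, 64562, 64123)
gr <- c(1, 3, 1, 2, 2, 4, 2, 3, 3, 3)
k  <- 500

# Within each group, count neighbours within k (each row counts itself)
ct <- ave(tm, gr, FUN = function(x) sapply(x, function(z) sum(abs(x - z) <= k)))
```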
On Feb 25, 2011, at 8:28 PM, zem wrote:
hi all,
i have a little problem, i have some code writen, but it is to slow...
i have a dataframe with a column of time series and a grouping column,
really there is no metter if in the first col what kind of data is,
it can
be a random number like
hi all,
i have a little problem, i have some code written, but it is too slow...
i have a dataframe with a column of time series and a grouping column;
really it does not matter what kind of data is in the first col, it can
be a random number like this
x<-rnorm(10)
gr<-c(1,3,1,2,2,4,2,3,3,3)
x
I'm working on getting this to work - need to figure out how to extract
pieces properly.
In the meantime, I may have figured out an alternate method to group
the factors by the following:
> stems139$SpeciesF <- factor(stems139$Species)
> stems139GLM <- glm(Stems ~ Time*SizeClassF*Species, f
Hi:
One approach would be to use dlply() from the plyr package to generate the
models and assign the results to a list, something like the following:
library(plyr)
# function to run the GLM in each data subset - the argument is a generic
data subset d
gfun <- function(d) glm(Stems ~ Time, data =
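A base-R equivalent of the dlply idea (toy data; the real model formula is cut off above, so `Stems ~ Time` stands in): split the data by a factor, fit one GLM per piece, and keep the fitted models in a named list.

```r
set.seed(1)
stems <- data.frame(Time    = rep(1:10, 2),
                    Species = rep(c("A", "B"), each = 10),
                    Stems   = rpois(20, 3))

# One Poisson GLM per Species level
models <- lapply(split(stems, stems$Species),
                 function(d) glm(Stems ~ Time, data = d, family = poisson))
```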
I'm having a hard time figuring out how to group results by certain
factors in R. I have data with the following headings:
[1] "Time" "Plot" "LatCat""Elevation" "ElevCat"
"Aspect""AspCat""Slope"
[9] "SlopeCat" "Species" "SizeClass" "Stems"
and I'm trying to use a G
Hello Jorge,
Thank you for the reply. I tried a few different things with if/else but
couldn't get them to go. I really appreciate your feedback. I learned
something new from this
Will
Hello Jim,
Wow. I tried cut but I see you have an interim step with labels a,b,c but
levels night and day. I was really close to this. I had labels
night,day,night and it wouldn't let me duplicate labels. I am very grateful
for your input.
Will
Hi Will,
One way would be:
> x
 [1]  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> factor(ifelse(x>6 & x<18, 'day', 'night'))
 [1] night night night night night night night day   day   day   day   day
     day   day   day
[16] day   day   day   night night night night night night night
try this:
> x
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> y <- cut(x, breaks=c(-Inf,6,18, Inf), labels=c('a','b','c'))
> levels(y) <- c('night','day','night')
> y
[1] night night night night night night night day day day day
day day day day day
Hello
I have what is probably a very simple grouping question however, given my
limited exposure to R, I have not found a solution yet despite my research
efforts and wild attempts at what I thought "might" produce some sort of
result.
I have a very simple list of integers that range between 1 a