Hello,
I have sample data set that looks like:

YEAR MONTH DAY CONTINUE SPL       TIMEFISH TIMEUNIT AREA COUNTY DEPTH DEPUNIT GEAR    TRIPID      CONVUNIT
1992 1     26  1        SP0073928 8        H        7    25     4     NA      1000000 02163399054 161
1992 1     26  1        SP0073928 8        H        7    25     4     NA      1000000 02163399054 8
1992 1     26  2        SP0004228 8        H        7    25     4     NA      1000000 02163399054 161
1992 1     26  2        SP0004228 8        H        7    25     4     NA      1000000 02163399054 8
1992 1     25  NA       SP0052652 8        H        7    25     4     NA      1000000 02163399057 85
1992 1     26  NA       SP0037940 8        H        7    25     4     NA      1000000 02163399058 70
1992 1     27  NA       SP0072357 8        H        7    25     4     NA      1000000 02163399059 15
1992 1     27  NA       SP0072357 8        H        7    25     4     NA      1000000 02163399059 20
1992 1     27  NA       SP0026324 8        H        7    25     4     NA      1000000 02163399060 8
1992 1     28  1        SP0072357 8        H        7    25     4     NA      1000000 02163399062 200

How can I use unique to extract the rows that have repeated TRIPIDs,
meaning rows that are unique not in every variable but only in TRIPID?
I then want to condense those rows by summing CONVUNIT within each
unique value of TRIPID.  I posted a similar question last week and
received a sufficient answer for doing this without using unique.  The
solution below worked just fine on this sample data set, but the full
data set has 446,000 rows, and my computer and R simply cannot handle
the following code on data that large.

conds <- by(Step4, Step4$TRIPID, function(x)
  replace(x[1, ], "CONVUNIT", sum(x$CONVUNIT)))
Step5 <- do.call(rbind, conds)
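For comparison, here is a vectorized sketch of the same two steps that
avoids the per-group function calls that make by() slow on large data.
It assumes Step4 is a data.frame with (at least) character TRIPID and
numeric CONVUNIT columns; the toy Step4 below is made up to keep the
example self-contained.

```r
# Toy stand-in for Step4 (assumed columns: TRIPID character, CONVUNIT numeric)
Step4 <- data.frame(
  TRIPID   = c("02163399054", "02163399054", "02163399057",
               "02163399059", "02163399059"),
  CONVUNIT = c(161, 8, 85, 15, 20),
  stringsAsFactors = FALSE
)

# Step A: rows whose TRIPID occurs more than once (all copies kept)
dup.ids <- Step4$TRIPID[duplicated(Step4$TRIPID)]
repeats <- Step4[Step4$TRIPID %in% dup.ids, ]

# Step B: one row per TRIPID, keeping the first row's other columns
# and replacing CONVUNIT with the per-trip sum; rowsum() does the
# summing in C, with no R-level function call per group
sums  <- rowsum(Step4$CONVUNIT, Step4$TRIPID)
Step5 <- Step4[!duplicated(Step4$TRIPID), ]
Step5$CONVUNIT <- sums[Step5$TRIPID, 1]
```

On this toy input, Step5 has one row per TRIPID with CONVUNIT 169, 85,
and 35; tapply() or aggregate() would work here too, but rowsum() is
usually the fastest of the base-R options on a few hundred thousand rows.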

Thank you,

Cameron Guenther, Ph.D. 
Associate Research Scientist
FWC/FWRI, Marine Fisheries Research
100 8th Avenue S.E.
St. Petersburg, FL 33701
(727)896-8626 Ext. 4305
[EMAIL PROTECTED]

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
