Matthew, Joe,

Thanks very much for the quick response, and for flagging the performance pitfalls!

In the general case I won't know which columns will be filtered on, and we're working with files in the 1 to 10 million row range. It isn't much extra work to parse the first parameter and re-key the data, so I'm testing both approaches (rough sketch of the keyed version below).
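
Roughly what I have in mind for the keyed version -- untested, and the column name, the single equality filter, and the as.numeric() cast are just placeholders:

library(data.table)
library(stringr)

## split one equality filter such as "colA = 5" into column and value,
## key the table on that column, then do a keyed subset instead of a vector scan
filt <- str_trim(unlist(str_split("colA = 5", "=")))
setkeyv(dt, filt[1])
dt <- dt[J(as.numeric(filt[2]))]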

Right now it's working with eval(parse(text=)) and lots of paste(), as Matt suggested -- not pretty, but it works. I didn't know about paste0(); I'm checking that out as well.
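
(In case it helps anyone else reading: paste0() is just paste() with sep = "" built in.)

paste0("dt[", "x > 2", "]")            # "dt[x > 2]"
paste("dt[", "x > 2", "]", sep = "")   # same result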

Kudos for your incredible work with data.table.

-Mel.

library(data.table)
library(stringr)

## param1: comma-separated filter expressions, e.g. "colA = 1, colB = 2"
## (assumes each filter uses a single "=", which is turned into "==")
param1 <- unlist(str_split(param1, ","))
param1 <- str_trim(param1)
param1 <- str_replace(param1, "=", "==")

## param2: comma-separated names of the columns to summarise
param2 <- unlist(str_split(param2, ","))
param2 <- str_trim(param2)

dt <- data.table(dt)
## build the whole query as text; by and .SDcols take the character
## vectors param3 and param2 directly
dt <- eval(parse(text=paste("dt[", paste(param1, collapse="&"), ", lapply(.SD, flist), by = param3, .SDcols = param2]", sep="")))
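
For context, here is roughly what the parameters look like on a toy table -- the column names, values, and the summary function are only examples:

library(data.table)
library(stringr)

dt     <- data.table(grp = c("a", "a", "b"), x = 1:3, y = 4:6)
param1 <- "x > 1"   # comma-separated filter expressions
param2 <- "x, y"    # comma-separated columns to summarise
param3 <- "grp"     # grouping column(s)
flist  <- sum       # function applied to each column of .SD

## after the parsing above, the constructed call is equivalent to:
dt[x > 1, lapply(.SD, flist), by = param3, .SDcols = c("x", "y")]
##    grp x y
## 1:   a 2 5
## 2:   b 3 6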



On 2012-04-13 09:16, Steve Lianoglou wrote:

Hmmm...

On Fri, Apr 13, 2012 at 10:50 AM, Joseph Voelkel <[email protected]> wrote:
[snip]

Please do not burn this heretic at the stake!

... well do you sink in water?

--
Steve Lianoglou
Graduate Student: Computational Systems Biology
Memorial Sloan-Kettering Cancer Center | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact