Hello all,
The test I provided was just to show that, for loading a big csv file once, read.csv was quicker than read.csv.sql... I have already "optimized" my calls to read.csv for my particular problem, but if a simple call to read.csv was already quicker than read.csv.sql, I doubt that specifying extra arguments would reverse the result by much...
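For example, the kind of argument tuning I mean looks roughly like this (the file name, row count and column types here are just an illustration, not my real data):

dat <- read.csv("big_file.csv",
                colClasses = c(rep("character", 5), rep("numeric", 10)),
                nrows = 1100000)   # slight over-estimate of the row count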

Maybe I should outline my problem:

I am working on a powerful machine with 32 GB or 64 GB of RAM, so loading the files and keeping them in memory is not really an issue.
The files (let's say 100 of them) are shared by many people and are flat csv files (which is to say that modifying them is out of the question).
They have lots of rows and between 10 and 20 columns, both string and numeric...

I basically need to be able to load these files as quickly as possible, and then I will keep the resulting data frames in memory...
So:
Should I write my own C++ function and call it from R?
Or is there an R way of drastically improving read.csv?
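In case it helps, here is roughly the sort of loop I have in mind (the path and the 15-column layout are made up for the example):

files <- list.files("/path/to/shared/csv", pattern = "\\.csv$", full.names = TRUE)
col_types <- c(rep("character", 5), rep("numeric", 10))        # assumed column layout
all_data <- lapply(files, read.csv, colClasses = col_types)    # one data frame per file
names(all_data) <- basename(files)                             # kept in memory, indexed by file name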

Thanks a lot

