Note that read.csv.sql in the sqldf package could be used to avoid
most of the setup:

library(sqldf)
DF <- read.csv.sql("myfile.csv", sql = "select ...")

It will set up the database, read the file into it, apply the select
statement, place the result into the data frame DF, and destroy the
database, all in one call.
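A slightly fuller sketch of that one-liner (the file name, column names, and filter are made up for illustration; sqldf must be installed):

library(sqldf)
# pull only two hypothetical columns, filtering rows in SQL, so the
# full file never becomes an R data frame; read.csv.sql exposes the
# CSV to the query under the table name "file"
DF <- read.csv.sql("myfile.csv",
                   sql = "select age, income from file where age > 18")

The temporary SQLite database is created and torn down behind the scenes, which is what makes this attractive for files too large to read.table comfortably.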
On Sat, 24 Oct 2009, Carlos J. Gil Bellosta wrote:
Hello,
Adding to Thomas' email, you could also use package colbycol which
allows you to load into R files that a simple read.table cannot cope
with, study columns independently, select those you are more interested
in and, finally, set up a dataframe with just the columns you are
interested in.
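A minimal sketch of that column-by-column workflow (the function names below are recalled from the colbycol package and the column names are invented; treat all of them as assumptions and check the package documentation):

library(colbycol)
# parse the file once, storing each column separately on disk
# rather than holding the whole table in memory
cbc <- cbc.read.table("myfile.csv", sep = ",")
# retrieve individual columns of interest, one at a time
age <- cbc.get.col(cbc, "age")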
Yes, a 350Mb data frame is a bit big for 32-bit R to handle conveniently.
As you note, the survey package doesn't yet do database-backed replicate-weight
designs. You can get the same effect yourself without too much work.
First, put the data into a database, such as SQLite.
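A sketch of that first step using RSQLite, loading the CSV in chunks so the whole file never sits in RAM at once (the file name, table name, and column names such as wt1 and wt2 are illustrative, not from the original post):

library(RSQLite)

con <- dbConnect(SQLite(), "survey.db")
input <- file("myfile.csv", "r")
header <- strsplit(readLines(input, n = 1), ",")[[1]]
repeat {
  # read a modest batch of lines; the connection keeps its position
  lines <- readLines(input, n = 10000)
  if (length(lines) == 0) break
  chunk <- read.csv(textConnection(lines), header = FALSE,
                    col.names = header)
  dbWriteTable(con, "mydata", chunk, append = TRUE)
}
close(input)

# later, fetch only the columns a given step of the analysis needs
wts <- dbGetQuery(con, "select wt1, wt2 from mydata")
dbDisconnect(con)

With the data in SQLite, each piece of the replicate-weight computation can query just the weight columns it needs instead of materialising the full data frame.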
I'm working with a 350MB CSV file on a server that has 3GB of RAM, yet I'm
hitting a memory error when I try to store the data frame into a survey
design object, the R object that stores data for complex sample survey data.
When I launch R, I execute the following line from Windows:
"C:\Program Fi