Does it solve part of your problem if you use read.table() instead of
scan(), since it imports the data directly into a data.frame?
Let me know if it helps.
Nawaaz Ahmed wrote:
I'm trying to read in datasets with roughly 150,000 rows and 600
features. I wrote a function using scan() to read it in (I have a 4GB
linux machine) and it works like a charm. Unfortunately, converting the
scanned list into a data.frame using as.data.frame() causes the memory
usage to explode.
I can usually read in large tables by very careful usage of
read.table() without having to resort to scan(). In particular, using
the `colClasses', `nrows', and `comment.char' arguments correctly can
greatly reduce memory usage (and increase speed) when reading in data.
Converting from a list to a data.frame, on the other hand, makes a copy of
the data, which is why as.data.frame() sends your memory usage through the
roof.
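For example, something along these lines (the file name, separator, and
column types are assumptions about the data, not taken from the post):

    ## giving read.table the column classes and row count up front avoids
    ## the type-guessing pass and over-allocation; comment.char = "" turns
    ## off comment scanning, which also speeds up the read
    df <- read.table("bigdata.txt", header = TRUE, sep = "\t",
                     colClasses = rep("numeric", 600),
                     nrows = 150000,
                     comment.char = "")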