Hi!
I'm trying to read a large CSV file (some 40000 records) with
DBD::CSV version 0.2002 and perl v5.8.1 built for i586-linux-thread-multi.
Reading smaller CSV files (e.g. around 5000 records) works like a
charm; only the large ones fail.
After searching the archive I found a hint suggesting a LIMIT clause
in the SELECT, but that doesn't seem to help.
Here is a code fragment:
print "Providing ".scalar(@cols)." colnames to netting...\n";
$dbh->{'csv_tables'}->{'netting'} = {'col_names' => \@cols};
my $sth;
# test LIMIT clause
print "Preparing test LIMIT clause...\n";
$sth = $dbh->prepare("select count(*) from netting limit 1000") or die $DBI::errstr;
print "Executing test LIMIT clause...\n";
$sth->execute() or die $DBI::errstr;
print "Retrieving result for test LIMIT clause...\n";
while (my ($count) = $sth->fetchrow_array()) {
    print("LIMIT 1000: found $count records in netting\n");
}
$sth->finish();
and here is the output:
Providing 36 colnames to netting...
Preparing test LIMIT clause...
Executing test LIMIT clause...
DBD::CSV::st execute failed: Error while reading file ./netting:
illegal filedescriptor at /usr/lib/perl5/site_perl/5.8.1/DBD/CSV.pm line 216, <GEN8>
chunk 27617.
$ wc netting
42123 286032 16561068 netting
When I reduce the size of the CSV file to e.g. 20000 records, the
above code works as it should.
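In case the failure is caused by a malformed record somewhere around
that point, one thing I can try is to scan the file directly with
Text::CSV_XS (which DBD::CSV uses underneath). A minimal sketch,
assuming one record per line (no embedded newlines in quoted fields):

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV_XS;

# Scan netting record by record and report any line that
# Text::CSV_XS refuses to parse.
my $csv = Text::CSV_XS->new({'binary' => 1});
open(my $fh, '<', 'netting') or die "netting: $!";
while (my $line = <$fh>) {
    chomp $line;
    unless ($csv->parse($line)) {
        print "parse failure at line $.: ", $csv->error_input(), "\n";
    }
}
close($fh);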
What is the proposed way of reading large CSV files, other than
splitting the file into smaller chunks and reading each chunk
separately?
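For completeness, here is roughly what I mean by splitting into
chunks (a minimal sketch, assuming one record per line; the
netting_NNN output names are made up):

#!/usr/bin/perl
use strict;
use warnings;

# Split netting into files of 20000 records each, since files
# of that size are known to work with DBD::CSV here.
my $chunk_size = 20000;
my $chunk = 0;
my $out;
open(my $in, '<', 'netting') or die "netting: $!";
while (my $line = <$in>) {
    # start a new output file every $chunk_size lines
    if (($. - 1) % $chunk_size == 0) {
        close($out) if defined $out;
        open($out, '>', sprintf('netting_%03d', $chunk++))
            or die "cannot write chunk: $!";
    }
    print $out $line;
}
close($out) if defined $out;
close($in);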
Any hint appreciated, best wishes,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
Michael Gerdau email: [EMAIL PROTECTED]
GPG-keys available on request or at public keyserver