Kee Hinckley wrote:
> 
> >
> > At 17:18 28.04.2002, Ernest Lergon wrote:
> > >Now I'm scared about the memory consumption:
> > >
> > >The CSV file has 14.000 records with 18 fields and a size of 2 MB
> > >(approx. 150 Bytes per record).
> >
> > Now a question I would like to ask: do you *need* to read the whole CSV
> > info into memory? There are ways to overcome this. For example, looking at
> 
> When I have a CSV to play with and it's not worth transferring to a real
> database, I use the DBD::CSV module, which puts a nice SQL wrapper around
> it.
>
I've installed DBD::CSV and tested it with my data:

use DBI;

my $dbh = DBI->connect("DBI:CSV:csv_sep_char=\t;csv_eol=\n;csv_escape_char=");
$dbh->{'csv_tables'}->{'foo'} = { 'file' => 'foo.data' };

3 MB memory used.

my $sth = $dbh->prepare("SELECT * FROM foo");

3 MB memory used.

$sth->execute();

16 MB memory used!
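
Just for reference, fetching the rows afterwards looks roughly like this
(only a sketch; the 16 MB are already allocated at execute(), before a
single row is fetched):

while ( my $row = $sth->fetchrow_hashref ) {
    # process one record here, e.g. look at $row->{'id'}
}
$sth->finish;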

If I do it record by record like

my $sth = $dbh->prepare("SELECT * FROM foo WHERE id=?");

then memory usage will still grow query by query due to caching.

Moreover, it becomes VERY slow, because the whole file is read again on
every query; an index can't be created or used.
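
The full record-by-record loop then looks roughly like this (only a
sketch; @wanted_ids stands for whatever keys I actually look up):

for my $id (@wanted_ids) {
    $sth->execute($id);               # rereads the whole CSV file each time
    my $row = $sth->fetchrow_hashref;
    # ... use $row ...
}
$sth->finish;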

No win :-(

Ernest


-- 

*********************************************************************
* VIRTUALITAS Inc.               *                                  *
*                                *                                  *
* European Consultant Office     *      http://www.virtualitas.net  *
* Internationales Handelszentrum *   contact:Ernest Lergon          *
* Friedrichstraße 95             *    mailto:[EMAIL PROTECTED] *
* 10117 Berlin / Germany         *       ums:+49180528132130266     *
*********************************************************************
       PGP-Key http://www.virtualitas.net/Ernest_Lergon.asc
