On Thu, 20 Dec 2001, Hardy Merrill wrote:
> I looked back at Groetjes's message where he described the
> memory problem:
>
> while( $hash_ref = $query->fetchrow_hashref ) { ... }
>
> It didn't matter if the while contained something or
> nothing... the memory usage kept growing with each row
> it fetched. He had to fetch about 300.000 rows ... It
> began at 5MB and quit at +/- 80MB.
>
> Since his memory usage keeps growing with each fetchrow_hashref,
> it would seem that the problem may be with fetchrow_hashref
> in general, or ??? We use DBD::Oracle, but I'm not in a
> position to test this to see if each fetchrow_hashref eats
> more memory.
Here is a quick test script I wrote to check this, using perl 5.6.1,
DBD::mysql 2.1004, and mysql 3.23.46:
#!/usr/local/perl_5.6.1/perl
use strict;
use DBI;

# Connect to the test database (no user/password).
my $dbh = DBI->connect("dbi:mysql:dbname=poarch", "", "");

my $sth = $dbh->prepare(q{SELECT * FROM author_access LIMIT 1200000});
$sth->execute();
print "HERE\n";
sleep(10);    # pause here to check memory after the execute
print "Done Sleep\n";

# Fetch every row and discard it; the loop body is intentionally empty.
while (my $ref = $sth->fetchrow_hashref('NAME_lc')) {
}
sleep(10);    # pause again to check memory after the fetch loop
$dbh->disconnect();
-------
Memory grows to about 48M during the execute and then does not
move during the fetch loop. I tried it both with and without the
'NAME_lc' attribute, and that made no real difference. What changed
memory usage the most was the LIMIT clause: 1.2M records resulted in
about 45 megs of memory usage, and 800K records in about 33 megs.
(The table is just an int and a timestamp.) So from this we can say
that DBD::mysql does not appear to leak memory in fetchrow_hashref,
at least for tables with only int/timestamp columns under the above
configuration.
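One hedged note, which I have not verified on this exact setup: growth
during execute() rather than during the fetch loop is what you would
expect if DBD::mysql is buffering the whole result set on the client
(mysql_store_result, the default) before the first fetch. DBD::mysql
documents a mysql_use_result attribute that streams rows from the
server one at a time instead, which should keep memory roughly flat
even for huge result sets. A sketch, reusing the same hypothetical
"poarch" database and author_access table as the script above:

```perl
#!/usr/local/perl_5.6.1/perl
# Sketch only -- assumes the "poarch" database and author_access table
# from the test script above, and a DBD::mysql that supports the
# mysql_use_result prepare attribute.
use strict;
use DBI;

my $dbh = DBI->connect("dbi:mysql:dbname=poarch", "", "",
                       { RaiseError => 1 });

# mysql_use_result => 1 asks the driver to stream rows from the server
# as they are fetched, instead of buffering the entire result set
# client-side during execute().
my $sth = $dbh->prepare(q{SELECT * FROM author_access LIMIT 1200000},
                        { mysql_use_result => 1 });
$sth->execute();

my $rows = 0;
while (my $ref = $sth->fetchrow_hashref('NAME_lc')) {
    $rows++;    # memory should stay roughly flat per row fetched
}
$sth->finish();
$dbh->disconnect();
print "fetched $rows rows\n";
```

The trade-off, per the MySQL C API docs for mysql_use_result, is that
the connection is tied up until the result set is fully drained, so it
is not a free win for every workload.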
-Rudy