Hi,

I am running an Apache 2.0.59 server on a Win32 platform (Perl 5.8 and mod_perl
2.0.3).

During the lifetime of the server, I can see that the memory footprint of the
Apache process is getting bigger and bigger, so I would like to understand why.

I can't say this is really a problem, but I would like to foresee any potential 
problem during production.

I suspect that this is coming from data retrieved from my DB not being freed
properly.

In order to isolate the problem, I tried running a Perl script from the
command line and watching how the memory footprint evolves.

Here is a simple script:


use strict;
use DBI;
print "Before anything\n";
<STDIN>; #step1
my $dbh = DBI->connect('DBI:mysql:mydb;host=localhost;', 'root', '');
my $sth = $dbh->prepare('SELECT * FROM mytable');
print "After prepare\n";
<STDIN>; #step2
$sth->execute;
print "After execute\n";
<STDIN>; #step3
$sth->finish();
print "After finish\n";
<STDIN>; #step4

As you can see, I am using some <STDIN> reads in order to pause the program.
In order to make things stand out more, I am running a query that retrieves
over 100,000 rows with 8 fields each.

The memory taken by the Perl.exe process is, depending on the step:

-step1: 4196k (here, nothing has run yet)
-step2: 6008k (here, the statement has been prepared)
-step3: 23888k (here, the statement has been executed, so the data has been
retrieved from the MySQL server; I suppose DBI holds it in a raw format)
-step4: 6092k (I am calling finish to tell DBI I don't need the data anymore,
and we are back to a fair memory level, so the data is freed properly)

So, as you can see, in the situation described above, a lot of data is
retrieved from the DB but is then freed properly, so I can be confident that
this wouldn't cause any problem in a mod_perl environment.


Now, let's tweak the script a bit and try to keep a grip on the retrieved
data this way:

use strict;
use DBI;
print "Before anything\n";
<STDIN>; #step1
my $dbh = DBI->connect('DBI:mysql:mydb;host=localhost;', 'root', '');
my $sth = $dbh->prepare('SELECT * FROM mytable');
print "After prepare\n";
<STDIN>; #step2
$sth->execute;
print "After execute\n";
<STDIN>; #step3
my $ary_ref = $sth->fetchall_arrayref();
$#$ary_ref = -1;
print "After fetch\n";
<STDIN>; #step4

Here, the beginning is the same, and the memory footprint as well. Things
start to differ in step4:
-step1: 4196k (here, nothing has run yet)
-step2: 6008k (here, the statement has been prepared)
-step3: 23888k (here, the statement has been executed, so the data has been
retrieved from the MySQL server; I suppose DBI holds it in a raw format)
-step4: 71940k (!!)

As you can see, in step4 I am fetching all the data from the statement handle
into a reference to a big array, but then ask the array to free itself.
Nevertheless, the memory footprint doesn't go back to normal.
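To check whether that memory is really lost or only kept in Perl's internal pool, here is a pure-Perl sketch (no DBI involved; the 100,000 x 8 shape is just borrowed from the query above to mimic the result set). Clearing the array with `$#$ary_ref = -1` frees the element SVs back to the interpreter, not to the OS, so the process footprint stays high; but refilling the array should reuse that storage rather than doubling the footprint, which would suggest reuse rather than a true leak:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build a large array of rows, mimicking fetchall_arrayref's result
# (100_000 rows of 8 fields, as in the query above).
my $ary_ref = [ map { [ ($_) x 8 ] } 1 .. 100_000 ];
my $count1 = scalar @$ary_ref;

# Clear the array the same way as in the script above. Perl frees the
# element SVs into its own memory pool; the OS-level footprint of
# perl.exe does not shrink.
$#$ary_ref = -1;

# Refill the array: the interpreter reuses the storage it already
# holds, so the footprint should stay roughly where it was instead of
# doubling -- memory is retained for reuse, not leaked.
push @$ary_ref, [ ($_) x 8 ] for 1 .. 100_000;
my $count2 = scalar @$ary_ref;

print "first fill: $count1 rows, refill: $count2 rows\n";
```

Watching perl.exe in the Task Manager while this runs (pausing with <STDIN> as above, if desired) should show the footprint plateauing after the first fill rather than growing again on the second.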

And when run under mod_perl, the same thing happens: in the first situation,
the Apache.exe memory footprint goes back to normal, while it remains high in
the second situation (which could cause problems for further requests).

So, I would like to know whether this really matters and I should worry, or
whether I am perhaps missing something.
I am really afraid of memory leaks.

Of course, in a real scenario I should never retrieve that much data, but I
did it this way to try to unveil potential problems.

Lionel.
