ID:               48525
 Updated by:       u...@php.net
 Reported By:      i-devolder at jm-bruneau dot be
 Status:           Assigned
 Bug Type:         PDO related
 Operating System: Linux 2.6.26 ( Debian )
 PHP Version:      5.3.0RC2
 Assigned To:      mysql
 New Comment:

There is no leak here. Memory usage does not increase when running the
same query again and again. Put a loop around SHOW VARIABLES and run it
100 times - memory usage remains the same with PHP 5.3 from SVN.

You seem to be comparing PHP 5.3 using mysqlnd against PHP 5.2.6 using
libmysql by means of memory_get_usage(). That comparison is invalid.

mysqlnd allocates its memory through the PHP internal memory allocation
functions - emalloc(), ecalloc() and so on. memory_get_usage() shows how
much memory has been allocated this way.

libmysql does NOT use the PHP internal memory allocation functions.
Therefore memory_get_usage() does NOT show the memory allocated by
libmysql.
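
For what it is worth: if you want a figure that also covers the memory
libmysql gets from malloc(), you have to look at the process itself
rather than at memory_get_usage(). A rough sketch (Linux only, reads
VmRSS from /proc/self/status; the helper name process_rss_kb is just
made up for illustration):

<?php
// Hypothetical helper: read the resident set size of the current
// process from /proc/self/status (Linux only). This also covers
// memory allocated with libc's malloc(), which memory_get_usage()
// does not see.
function process_rss_kb()
{
        $status = file_get_contents('/proc/self/status');
        if (preg_match('/^VmRSS:\s+(\d+) kB/m', $status, $m)) {
                return (int) $m[1];
        }
        return -1;
}

// memory_get_usage() only counts emalloc()/ecalloc() allocations,
// so print both numbers before and after running your queries.
printf("emalloc'd: %d bytes, RSS: %d kB\n",
       memory_get_usage(), process_rss_kb());
?>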



Previous Comments:
------------------------------------------------------------------------

[2009-09-30 18:49:27] hp at coutosolutions dot com

This bug is _NOT_ bogus. A PHP developer should not have to care about
malloc. Memory use increases when fetching, and there is no way in PHP
to free it. I'm not sure how else to define a memory leak bug in a PHP
extension.

Here is another test case that doesn't require any tables:

<?php
$host = '';
$db = '';
$usr = '';
$pwd = '';
$dsn = "mysql:host={$host}; dbname={$db}; username={$usr}; 
password={$pwd}";

print "PHP Version: " . PHP_VERSION . "\n\n";

function run_query($query)
{
        global $dsn, $usr, $pwd;
        
        $pdo = new PDO($dsn, $usr, $pwd);
        $stmnt = $pdo->query($query);
        
        while (($stmnt->fetch(PDO::FETCH_ASSOC)))
                ;
                
        unset($pdo);
        unset($stmnt);
}

// This function demonstrates that the same query run twice will not
// increase memory usage, but new queries will.
function test_query($query)
{
        print "{$query}\n";
        print "Before: " . memory_get_usage() . "\n";
        run_query($query);
        print "Between: " . memory_get_usage() . "\n";
        run_query($query);
        print "After: " . memory_get_usage() . "\n";
}

test_query("SHOW VARIABLES;");
print "\n";
test_query("SHOW MASTER STATUS;");

?>

Results (Linux fresh compile):

PHP Version: 5.3.0

SHOW VARIABLES;
Before: 325980
Between: 326156
After: 326156

SHOW MASTER STATUS;
Before: 326160
Between: 326160
After: 326160

Results (Windows Zend Core):
PHP Version: 5.2.6

SHOW VARIABLES;
Before: 75584
Between: 77640
After: 77640

SHOW MASTER STATUS;
Before: 77672
Between: 77768
After: 77768

------------------------------------------------------------------------

[2009-06-20 13:59:52] paj...@php.net

This change has been disabled in RC4 as it introduced a BC break.

However, I have to say that it makes no sense to introduce it in the
first place only to give the user the feeling that he is doing it
right.

As Andrey said, libmysql memory usage was never part of the results of
memory_get_usage(), just like any other external library's memory
usage. There is no bug per se. I would suggest marking this report as
bogus :)

------------------------------------------------------------------------

[2009-06-16 13:13:08] and...@php.net

I committed a change to the 5.3 branch with the following comment:
----------
  Log:
  Memory usage optimisation. mysqlnd is not libmysql. mysqlnd does use
the Zend allocator, which means that it is easier to hit memory_limit if
you have big stored (buffered) result sets. Before, with libmysql, you
wouldn't hit memory_limit because libmysql uses libc's allocator and
nothing is checked. Now, with mysqlnd, the situation is stricter and it
is easier to hit memory_limit. We try to optimise for big result sets.
If a result set is larger than 10 rows we start freeing some data to
keep memory usage after 10 rows constant. This helps in the cases where
a buffered result set is scrolled forward only and only once; otherwise
mysqlnd will need to decode data from the network buffers again - yes,
it is a trade-off between CPU time and memory size. The best for big
result sets is of course using unbuffered queries - for comparison:
3 million rows buffered take at least 180MB, unbuffered you stay at
3MB, and unbuffered is just 7-8% slower.
-----------
According to Johannes it doesn't help PDO as much as it helps ext/mysql
and ext/mysqli, because the architecture of PDO forces mysqlnd to do a
lot of extra work for a single call. And because 5.3.0 is just ante
portas, no change will be made to PDO before releasing 5.3.0. There is
a ToDo item for PDO in PHP 5.3.1.

------------------------------------------------------------------------

[2009-06-15 11:06:40] and...@php.net

If you build without mysqlnd then you use libmysql, and
memory_get_usage() doesn't count the memory allocated by libmysql. So
memory_get_usage() can give you a hint about how much memory the script
uses, but it is not the ultimate source of information.
I checked mysqlnd last week with debug enabled, and in some cases where
mysqlnd wants 52 bytes to be allocated Zend allocates 102, which means
the memory usage is doubled. Can't say exactly for optimized builds; it
could be different.
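
A rough way to glimpse that overhead from a script is to compare
memory_get_usage() (what was requested through the Zend allocator) with
memory_get_usage(true) (what the Zend allocator actually took from the
system). The gap also contains unused space in the allocator's chunks,
so treat it only as an approximation; the 52-byte size below is just
taken from the example above:

<?php
// Compare the two views of memory before and after allocating many
// small strings:
//   memory_get_usage()     - bytes requested through emalloc()
//   memory_get_usage(true) - bytes the Zend allocator took from the OS
$before_used = memory_get_usage();
$before_real = memory_get_usage(true);

$data = array();
for ($i = 0; $i < 10000; $i++) {
        // 52-byte string payloads (plus zval and hashtable overhead)
        $data[] = str_repeat('x', 52);
}

printf("requested: %d bytes, real: %d bytes\n",
       memory_get_usage() - $before_used,
       memory_get_usage(true) - $before_real);
?>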

------------------------------------------------------------------------

[2009-06-11 10:10:51] i-devolder at jm-bruneau dot be

The problem also occurs with mysqli.

My colleague rebuilt the latest trunk without mysqlnd, and there memory
usage is consistent, so the problem is in mysqlnd.

------------------------------------------------------------------------

The remainder of the comments for this report are too long. To view
the rest of the comments, please view the bug report online at
    http://bugs.php.net/48525

-- 
Edit this bug report at http://bugs.php.net/?id=48525&edit=1
