Edit report at http://bugs.php.net/bug.php?id=53669&edit=1

 ID:                 53669
 Comment by:         jille at hexon dot cx
 Reported by:        jille at hexon dot cx
 Summary:            PHP does not return memory to system
 Status:             Wont fix
 Type:               Bug
 Package:            Scripting Engine problem
 Operating System:   Linux 2.6.29.2-smp
 PHP Version:        5.3.4
 Block user comment: N
 Private report:     N

 New Comment:

I understand it won't be possible to free all of the used memory, mostly
due to fragmentation. Our scripts use over 100MB of memory and I don't
believe every page is used.



When looking at zend_alloc.c in _zend_mm_free_int() (5.3.6):

  if (ZEND_MM_IS_FIRST_BLOCK(mm_block) &&
      ZEND_MM_IS_GUARD_BLOCK(ZEND_MM_BLOCK_AT(mm_block, size))) {
    zend_mm_del_segment(heap, (zend_mm_segment *) ((char *)mm_block -
        ZEND_MM_ALIGNED_SEGMENT_SIZE));
  } else [...]



Shouldn't that free every segment that is no longer used? Segments are
2MB by default, so it is possible that every segment still has some part
in use, but I don't think that is very likely when a script runs over
hundreds of megabytes.



If the above isn't supposed to "fix my problem", would it be possible to
create a function that checks whether it can remove any segments? That
way the performance hit can be controlled.


Previous Comments:
------------------------------------------------------------------------
[2011-04-12 22:17:07] ras...@php.net

There are plenty of random things that stay on the heap across requests:
persistent connections, the stat cache and a number of other things, so
it is pretty much impossible to do this in a heap-based allocator. For
the mmap'd stuff it would be technically possible, but again, the
performance hit for doing so would be pretty nasty.

------------------------------------------------------------------------
[2011-04-12 21:42:10] jille at hexon dot cx

When looking at zend_alloc.c, it seems to support several memory
allocators. As far as I know, when you munmap() pages they are returned
to the system. Am I looking at the wrong place, or is the problem
somewhere around the munmap()?

We are using the CLI SAPI, not the Apache SAPI that jeffwhiting uses.

------------------------------------------------------------------------
[2011-04-12 19:58:56] jeffwhiting at hotmail dot com

Thanks for the reply.  What you're saying makes sense, and I understand
how difficult it would be to do this across operating systems, as they
handle things very differently.  However, the one thing I don't
understand (sorry about my ignorance) is why it can't just free
everything when the request ends.  That way you don't have to worry
about the 1MB sitting at the top of the heap: the request is done, so we
don't need to keep anything around.



I also tried playing around with Apache's MaxMemFree
(http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxmemfree) as it
seemed applicable, but to no avail.



We also have a hard time farming off the requests, as the entire
application is heavily object oriented, so the circular-reference heap
allocation issue (as shown in the bug) ends up being a big deal for us.
We are actively working on reducing our memory footprint, which should
help some.

------------------------------------------------------------------------
[2011-04-12 18:48:16] ras...@php.net

There is no clean and portable way to do this across the various
operating systems. Even on a single operating system like Linux,
returning heap memory back to the system after a process has touched it
is extremely tricky. You can only adjust the size of the heap, so if
your heap grows to 60M and you free up the lower 59M, you still can't
return it because of that 1M sitting at the top. I don't know of any
memory allocators that will try to defrag the heap and move everything
down in order to shrink it. Even if one existed, it would be an
extremely slow process.



Your best bet is to fix your scripts to not use up 60M to begin with.
Once your heap grows to that size, it will remain that size until the
process exits. If you have some requests that really do need that much
memory, consider doing a proxy-pass to a different pool of
mod_php/PHP-FPM processes that are dedicated to just running those
requests. That way you can have fewer of them.

------------------------------------------------------------------------
[2011-04-12 18:38:57] jeffwhiting at hotmail dot com

This seems like a big problem.  We are running into the same thing in
our production environment.  We have multiple Apache servers and the
memory usage continues to go up, just like in the example script.  We
are forced to set MaxRequestsPerChild to 10 to prevent out-of-memory
conditions.  Running top before the script, the apache/php process is
taking up 13m; after running the script it says 60m.  Assume you are
running Apache with 100 child workers: PHP is now taking up 6GB.  I
understand that for performance reasons it may be nice to keep the 60m
allocated for future use, but it would be nice to be able to tune this.
We would gladly pay the performance penalty of allocating/deallocating
the memory rather than sit on large amounts of allocated but unused
memory.



However, doing something like this (without circular references) works
great and always frees up the memory:



<?php

for ($i = 0; $i < 20; $i++)
        $s = str_pad("", 1024 * 1024 * 60);

?>

------------------------------------------------------------------------


The remainder of the comments for this report are too long. To view
the rest of the comments, please view the bug report online at

    http://bugs.php.net/bug.php?id=53669

