Edit report at http://bugs.php.net/bug.php?id=53669&edit=1
ID: 53669
Comment by: jeffwhiting at hotmail dot com
Reported by: jille at hexon dot cx
Summary: PHP does not return memory to system
Status: Wont fix
Type: Bug
Package: Scripting Engine problem
Operating System: Linux 2.6.29.2-smp
PHP Version: 5.3.4
Block user comment: N
Private report: N

New Comment:

Is allocating memory really that much of a performance hit? Allocating memory seems pretty cheap; I could see a large hit if you were defragmenting the heap. Something like that would also be easily tunable via php.ini, so users could choose whether they want the performance penalty.

We are currently working around the situation with the following prepend file. It monitors memory usage and tells the Apache child to terminate gracefully once usage rises above a threshold. Sorry jille, it is only useful with the Apache sapi. Honestly it is a pretty ugly hack, but it works...

<?php
function apacheMemoryUsage() {
    $result = 0;
    exec('ps -orss -p ' . getmypid(), $output);
    $result = trim($output[1]);
    return $result / 1024;
}

$memUseMB = apacheMemoryUsage();
$maxMem = get_cfg_var("apache_memory_limit_mb");
if (!$maxMem)
    $maxMem = 128;

//error_log(getmypid()."> apache memory monitor: ".
//    "using $memUseMB MB of $maxMem MB.");

if ($memUseMB > $maxMem && function_exists('posix_kill')) {
    error_log(getmypid()."> apache memory monitor: ".
        "$memUseMB MB > $maxMem MB. Sending graceful stop.");

    // Terminate the Apache 2 child process after the request has been
    // served by sending it SIGUSR1 (signal 10), which is a graceful stop.
    function killApacheChildOnExit() {
        error_log('posix_kill: ' . getmypid());
        posix_kill(getmypid(), 10);
    }
    register_shutdown_function('killApacheChildOnExit');
}
?>

Previous Comments:
------------------------------------------------------------------------
[2011-04-12 22:40:42] jille at hexon dot cx

I understand it won't be possible to free all of the used memory, mostly due to fragmentation.
Our scripts use over 100 MB of memory and I don't believe every page is still in use. Looking at _zend_mm_free_int in zend_alloc.c (5.3.6):

if (ZEND_MM_IS_FIRST_BLOCK(mm_block) &&
    ZEND_MM_IS_GUARD_BLOCK(ZEND_MM_BLOCK_AT(mm_block, size))) {
    zend_mm_del_segment(heap, (zend_mm_segment *)
        ((char *)mm_block - ZEND_MM_ALIGNED_SEGMENT_SIZE));
} else [...]

Shouldn't that free every segment that is no longer used? As segments are 2 MB by default, it is possible that some part of every segment is still in use, but I don't think that is very likely when running over hundreds of megabytes.

If the above isn't supposed to "fix my problem", would it be possible to create a function that checks whether any segments can be removed? That way the performance hit can be controlled.

------------------------------------------------------------------------
[2011-04-12 22:17:07] ras...@php.net

There are plenty of random things that stay on the heap across requests: persistent connections, the stat cache and a number of other things, so it is pretty much impossible to do this in a heap-based allocator. For mmap'd memory it would be technically possible, but again, the performance hit for doing so would be pretty nasty.

------------------------------------------------------------------------
[2011-04-12 21:42:10] jille at hexon dot cx

Looking at zend_alloc.c, it seems to support several memory allocators. As far as I know, when you munmap() pages they are returned to the system. Am I looking in the wrong place, or is the problem somewhere other than the munmap()? We are using the CLI sapi rather than the Apache sapi that jeffwhiting uses.

------------------------------------------------------------------------
[2011-04-12 19:58:56] jeffwhiting at hotmail dot com

Thanks for the reply. What you're saying makes sense, and I understand how difficult it would be to do this across operating systems, as they handle things very differently.
However, the one thing I don't understand (sorry about my ignorance) is why it can't just free everything when the request ends. That way you don't have to worry about the 1 MB sitting at the top of the heap: the request is done, so we don't need to keep anything around.

I also tried playing with Apache's MaxMemFree (http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxmemfree) as it seemed applicable, but to no avail. We also have a hard time farming off the requests, as the entire application is heavily object oriented, so the circular-reference heap-allocation issue (as shown in the bug) ends up being a big deal for us. We are actively working on reducing our memory footprint, which should help some.

------------------------------------------------------------------------
[2011-04-12 18:48:16] ras...@php.net

There is no clean and portable way to do this across the various operating systems. Even on a single operating system like Linux, returning heap memory to the system after a process has touched it is extremely tricky. You can only adjust the size of the heap, so if your heap grows to 60M and you free up the lower 59M, you still can't return it because of that 1M sitting at the top. I don't know of any memory allocator that will try to defrag the heap and move everything down in order to shrink it; even if one existed, it would be an extremely slow process.

Your best bet is to fix your scripts to not use 60M to begin with. Once your heap grows to that size, it will remain that size until the process exits. If you have some requests that really do need that much memory, consider doing a proxy-pass to a different pool of mod_php/PHP-FPM processes dedicated to just running those requests. That way you can have fewer of them.

------------------------------------------------------------------------
The remainder of the comments for this report are too long.
To view the rest of the comments, please view the bug report online at
http://bugs.php.net/bug.php?id=53669

--
Edit this bug report at http://bugs.php.net/bug.php?id=53669&edit=1