E R wrote:
Hi,
I have a problem where a mod_perl handler will allocate a lot of
memory when processing a request, and this causes Apache to kill the
child due to exceeding the configured child size limit.
However, the memory allocated will get freed up or re-used by the next
request - I think the memory is just fragmented enough that it cannot
be returned to the OS by the memory allocator (I've heard that some
mallocs can return memory to the OS in 1 MB chunks.)
Are there any special techniques people use to avoid this situation?
Does SizeLimit count actual memory used or does it just look at the
process size?
This is not a direct answer to your question, so it still begs for a
more authoritative answer.
I have had problems similar to yours, which I solved by turning what was
originally part of my mod_perl handler (or script) into a separate
process. I know it is not elegant, but it seems to work well in my case.
One of the problems is that, as far as I know, once perl has obtained
some memory from the OS, it will never give it back until perl itself
exits. And since with mod_perl the perl interpreter normally lives as
long as the Apache child process it is embedded in, that means almost
never.
But if perl runs as an external process for the request, then of course
it exits when that request is done, and its memory is returned to the OS.
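Something like this in the handler, just to sketch the idea (the script
name is made up, and real code would want better error handling):

    # inside the mod_perl handler: push the memory-hungry work into a
    # child process; whatever it allocates goes back to the OS when it
    # exits, so the size of this Apache child does not grow
    system('/usr/local/bin/heavy_job.pl') == 0
        or die "heavy_job.pl failed: $?";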
The exact problem I had was that some of the processing involved
parsing XML, and some XML parsing module in the chain (XML::Twig?) was
leaking a chunk of memory on each request. I ended up with
multi-megabyte Apache children all the time.
So I off-loaded this parsing into a separate process, which wrote its
results to disk in the form of a Storable structure. When the external
process was done, the main mod_perl handler sucked the data back in
from the Storable file and deleted the file.
Not elegant, but it's been working flawlessly for several years now.
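In case it helps, here is roughly what the two halves look like. The
file names are invented and the error handling is stripped down. The
external script does the parsing and leaves a Storable file behind:

    #!/usr/bin/perl
    # parse_xml.pl - runs once per request as its own process, so all
    # the memory it uses is returned to the OS the moment it exits
    use strict;
    use warnings;
    use Storable qw(nstore);
    use XML::Twig;

    my ($xml_file, $out_file) = @ARGV;

    my %result;
    XML::Twig->new(
        twig_handlers => {
            # ... fill %result from the elements you care about ...
        },
    )->parsefile($xml_file);

    nstore(\%result, $out_file);    # hand the data back through a file

and the mod_perl handler picks the results up afterwards:

    # back in the mod_perl handler ($xml_file is whatever the request
    # gave you)
    use Storable qw(retrieve);

    my $out_file = "/tmp/parse_result.$$";
    system('/usr/local/bin/parse_xml.pl', $xml_file, $out_file) == 0
        or die "parse_xml.pl exited with status $?";

    my $data = retrieve($out_file);    # the parsed structure, back in perl
    unlink $out_file;

(File::Temp would be a safer way to pick the temporary file name.)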