I have some very large httpd processes (35 MB) running our 
application software.  Every so often, one of the processes will grow 
infinitely large, consuming all available system resources.  After 300
seconds the process dies (as specified in the config file), and the 
system usually returns to normal.  Is there any way to determine what 
is eating up all the memory?  I need to pinpoint this to a particular 
module.  I've tried coredumping during the incident, but gdb has yet 
to tell me anything useful.
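
One thing I've been meaning to try is a cleanup handler that logs the
child's size after every request, so a sudden jump can be tied to the
URI (and from there to a module).  A rough, untested sketch -- the
package name is made up, and it assumes the GTop module (libgtop
bindings) is installed:

    package Apache::LogSize;   # made-up name, for illustration only

    use strict;
    use GTop ();
    use Apache::Constants qw(OK);

    # PerlCleanupHandler: after each request, log the process size and
    # the URI just served, so growth can be matched to a request.
    sub handler {
        my $r = shift;
        my $size = GTop->new->proc_mem($$)->size;
        $r->log_error(sprintf "pid %d is %d bytes after %s",
                      $$, $size, $r->uri);
        return OK;
    }

    1;

Wired in with "PerlCleanupHandler Apache::LogSize" in httpd.conf, it
should at least narrow things down to the requests that trigger the
growth.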

I was actually playing around with the idea of hacking the perl 
source so that it will change $0 to whatever the current package 
name is, but I don't know whether this will translate back to 
mod_perl correctly, since from within mod_perl $0 is the name of the 
configuration file.
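
A simpler variant of the same idea might be to set $0 from a handler
instead of patching perl itself, so that ps shows what each child is
working on.  Another untested sketch (again, the package name is made
up):

    package Apache::NameTag;   # made-up name, for illustration only

    use strict;
    use Apache::Constants qw(OK);

    # PerlFixupHandler: stamp the URI being served into $0 so that
    # `ps` shows which request a runaway child is stuck in.
    sub handler {
        my $r = shift;
        $0 = 'httpd: ' . $r->uri;   # assigning to $0 changes what ps reports
        return OK;
    }

    1;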

Has anyone had to deal with this sort of problem in the past?

Robert Landrum
