On Mon, 29 Jan 2001, Robert Landrum wrote:

> I have some very large httpd processes (35 MB) running our 

mod_perl is not freeing memory when httpd runs the cleanup phase.


Me too :). 

Use the MaxRequestsPerChild directive in httpd.conf.
After my investigation it seems to be the only way to
keep the system healthy.
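For example, a minimal httpd.conf fragment (the value 500 is only an illustration -- tune it to how fast your children grow):

```apache
# Recycle each child after it has served this many requests, so any
# leaked memory is returned to the OS when the child exits.
# The default of 0 means "never recycle", which is what you want to avoid.
MaxRequestsPerChild 500
```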

There is no 100% reliable mechanism supplied with Apache.
mod_status can give you some information, but...

On Solaris 2.5.1, 7 and 8 you can use /usr/proc/bin/pmap to 
print the address-space map of an httpd process.
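A rough sketch of how you might use it, assuming a Solaris box where pmap lives in /usr/proc/bin and the server processes show up as "httpd" in ps output:

```shell
#!/bin/sh
# Dump the address-space map of every running httpd child.
# Each pmap report ends with a total, so a child whose total keeps
# growing between runs is your leaker.
for pid in `ps -e -o pid= -o comm= | awk '$2 ~ /httpd/ {print $1}'`
do
    echo "=== httpd pid $pid ==="
    /usr/proc/bin/pmap $pid
done
```

Run it periodically (e.g. from cron) and diff the outputs to see which mappings grow.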


> application software.  Every so often, one of the processes will grow 
> infinitely large, consuming all available system resources.  After 300 
> seconds the process dies (as specified in the config file), and the 
> system usually returns to normal.  Is there any way to determine what 
> is eating up all the memory?  I need to pinpoint this to a particular 
> module.  I've tried coredumping during the incident, but gdb has yet 
> to tell me anything useful.
> 
> I was actually playing around with the idea of hacking the perl 
> source so that it will change $0 to whatever the current package 
> name is, but I don't know that this will translate back to mod_perl 
> correctly, as $0 is the name of the configuration file from within 
> mod_perl.
> 
> Has anyone had to deal with this sort of problem in the past?
> 
> Robert Landrum
> 
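On the $0 idea: you should not need to hack the perl source. A sketch of the same trick done inside a handler, assuming a hypothetical handler package name (under mod_perl, assigning to $0 updates the process title that ps shows on most Unixes):

```perl
package My::Handler;     # hypothetical package name for illustration
use strict;

sub handler {
    my $r = shift;
    # local() restores the old $0 (the config file name) on scope exit,
    # so while this handler runs, ps shows which module the child is in.
    local $0 = __PACKAGE__;
    # ... real work here ...
    return 0;            # Apache::Constants::OK
}
1;
```

When a child starts eating memory, ps will then tell you which package it was executing at the time.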

Vasily Petrushin
+7 (095) 2508363
http://www.interfax.ru
mailto:[EMAIL PROTECTED]
