Hi!
On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> -l and it is close to 1024, I have a problem?
Only if you run Apache with the -X flag (single-process mode, a kind of
debugging state), because 'lsof | fgrep httpd' matches all httpd
processes. And even when I grepped for the PID of a single httpd
process, wc -l did not always get near the ulimit. My guess is that you
need the right timing for the lsof snapshot.
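A way to sidestep both problems is to count the descriptors of one
process directly instead of grepping lsof output. A minimal sketch,
assuming a Linux /proc filesystem; picking the oldest httpd child via
pgrep is just one possibility, here the shell's own PID is used so the
snippet runs standalone:

```shell
# Count open file descriptors of ONE process and compare to the
# per-process soft limit. Replace $$ with an httpd child's PID,
# e.g. $(pgrep -o httpd).
pid=$$
fds=$(ls /proc/$pid/fd | wc -l)
limit=$(ulimit -Sn)
echo "$fds of $limit descriptors in use"
```

If $fds creeps toward $limit between requests, the process is leaking
descriptors.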
I tried the following:
lsof | fgrep httpd | sort -k9
(you may need a field number other than 9, depending on the options
passed to lsof) to sort by the path of the open files. If one file
shows up very often (tens of times per httpd process), that is usually
the one causing the trouble. In my case it was the magic file, so I
knew I had to search in or around File::MMagic for the problem.
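To make the repeated file jump out instead of eyeballing the sorted
list, you can count the duplicates with uniq -c. The sample lsof lines
below are made up for illustration (field 9 is the NAME column in
lsof's default output):

```shell
# Hypothetical lsof output: COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
lsof_sample() {
cat <<'EOF'
httpd 1234 www 5r REG 8,1 512 99 /etc/magic
httpd 1234 www 6r REG 8,1 512 99 /etc/magic
httpd 1235 www 5r REG 8,1 100 42 /var/log/access_log
EOF
}

# Extract the file name, count how often each one is open,
# most frequent first:
lsof_sample | awk '{print $9}' | sort | uniq -c | sort -rn
# -> the leaked file (here /etc/magic, count 2) ends up on top
```

In real use, replace lsof_sample with 'lsof | fgrep httpd'.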
But since each Apache (1.x) child handles only one request at a time,
something must go really wrong to reach that limit within a single
request. (The Solaris limit of 64 was easier to reach... ;-)
Regards, Axel
--
Axel Beckert - [EMAIL PROTECTED] - http://abe.home.pages.de/
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]