The following reply was made to PR mod_cern_meta/3366; it has been noted by 
GNATS.

From: Marc Slemko <[EMAIL PROTECTED]>
To: Kevin Goddard <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]
Subject: Re: mod_cern_meta/3366: I am getting this - (11)Resource temporarily
 unavailable: couldn't spawn child process - on very busy servers
Date: Sun, 8 Nov 1998 16:16:13 -0800 (PST)

 On 9 Nov 1998, Kevin Goddard wrote:
 
 > Here is the output from uname -a:
 > Linux server.domain.com 2.0.34 #9 Wed Nov 4 15:46:51 EST 1998 i686 unknown
 > The server is a Dell Poweredge 2300 with 256 MB of RAM and a Pentium II 400
 > processor.  I am having the exact same problem on three identical machines,
 > all running the same version of Linux with identical hardware configurations.
 > >Description:
 > Okay I have seen this problem listed before, and I have applied every single
 > fix listed.  The server is not listening to any port but 80.  Here is the
 > output of ulimit -a:
 > core file size (blocks)  1000000
 > data seg size (kbytes)   unlimited
 > file size (blocks)       unlimited
 > max memory size (kbytes) unlimited
 > stack size (kbytes)      8192
 > cpu time (seconds)       unlimited
 > max user processes       unlimited
 > pipe size (512 bytes)    8
 > open files               1024
 > virtual memory (kbytes)  2105343
 > 
 > As you can see, both the user processes and the open files limits are very
 > high.  I am currently getting this error message on one machine which has
 > 227 httpd processes running and 600 tcp connections to it.  I have the
 > HARD_SERVER_LIMIT set to 1000.  The following info is in my config:
 
 How many processes are running at the time, both total and for the user
 Apache runs as?
 
 What does cat /proc/sys/kernel/file-max show?
 
 Do you have any reason to think this is anything other than Linux simply
 running into file or process number restrictions?  Linux has some very
 annoying restrictions of this sort that can be a pain to remove.
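 
 For what it's worth, on a 2.0-series kernel the kernel-wide file and inode
 table sizes can usually be bumped at runtime through /proc.  A rough sketch,
 run as root (the inode-max path and the 3-4x ratio are from memory, so
 verify them on your box):
 
   # check the current kernel-wide limits
   cat /proc/sys/kernel/file-max
   cat /proc/sys/kernel/inode-max
 
   # raise them; inode-max is usually kept at 3-4x file-max
   echo 4096  > /proc/sys/kernel/file-max
   echo 16384 > /proc/sys/kernel/inode-max
 
 The system-wide process limit on 2.0 (NR_TASKS in include/linux/tasks.h) is a
 compile-time constant, so raising that one means rebuilding the kernel.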
 
 What ulimit returns means nothing if the kernel limits are lower.
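 
 A quick way to put the two side by side (nothing Apache-specific here, just
 the shell builtin and /proc):
 
   ulimit -n                        # per-process limit the shell/Apache sees
   cat /proc/sys/kernel/file-max    # the kernel's global file table size
 
 If the global number is small relative to what a few hundred httpd children
 need, raising the ulimit figure gets you nothing.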
 
