Good morning --

We're using supervisord to monitor 197 processes.  We just added 10 more, for a
total of 207, and started getting Server Error 500 when trying to view the
status webpage.  I traced the 500 to Python hitting an "out of file
descriptors" error (in my experience that limit is normally 1024).  This seems
legitimate: I see 5 fds in use for each process, so not counting basic
overhead and the rpc/http listeners, I needed 985 (197*5) before, and now I
need 1035 (207*5).
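
The arithmetic above, as a quick sketch (the 5-fds-per-process figure is just
what I observed on our boxes, not something supervisord guarantees):

```python
# Observed in our setup: each supervised child costs about 5 descriptors
# (stdin/stdout/stderr pipes plus logging).  The usual default soft limit
# for 'nofile' is 1024.
FDS_PER_PROCESS = 5
DEFAULT_NOFILE_LIMIT = 1024

def fds_needed(num_processes):
    """Descriptors needed for the children alone, ignoring supervisord's
    own overhead and the rpc/http listener sockets."""
    return num_processes * FDS_PER_PROCESS

print(fds_needed(197))  # 985  -- just under the default limit
print(fds_needed(207))  # 1035 -- over it, hence the error 500
```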

So I raised the soft limit on file descriptors to 1536 (a 50% increase) and
the hard limit to 65,000, by creating the file
/etc/security/limits.d/60-nofile-limit.conf containing these lines:

        fs.file-max = 65000
        * soft nofile 1536
        * hard nofile 65000

I logged out, logged back in, and verified that 'ulimit -n' does indeed show
1536 instead of 1024.  I then restarted supervisord with the added argument
"--minfds=1536".

I told supervisor to start all the processes.  When it got up to 1024
descriptors, I expected it to keep going.  Instead I got a different error,
this time from Python itself, resulting in an exit:  ValueError:
filedescriptor out of range in select()

Is Python's select() call saying this simply because some of the fd values
are higher than 1024?  Is there some other Python-specific limit I need to
raise so it will accept fd values above 1024?  Supervisord seems to know
something about the new limit, because I got the message "Increased
RLIMIT_NOFILE limit to 1536" in the supervisor.log.
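
While digging, I noticed select.poll in the same stdlib module, which does
not appear to have the 1024 ceiling: select() uses a fixed-size fd_set
bitmask (FD_SETSIZE slots, typically 1024), so it rejects any fd whose
*value* is >= 1024 no matter how few are open, while poll() registers fds
by value in a list.  A minimal sketch of the poll() side, assuming a
Unix-like platform:

```python
import os
import select

# Create a pipe and write a byte so the read end becomes readable.
r, w = os.pipe()
os.write(w, b"x")

# poll() tracks registered fds in a list keyed by fd value, not in a
# fixed-size FD_SETSIZE bitmask, so large fd values are accepted.
p = select.poll()
p.register(r, select.POLLIN)
events = p.poll(0)  # list of (fd, eventmask) pairs that are ready
print(events)

os.close(r)
os.close(w)
```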

tlj
_______________________________________________
Supervisor-users mailing list
[email protected]
http://lists.supervisord.org/mailman/listinfo/supervisor-users
