Charles-Francois Natali <neolo...@free.fr> added the comment:

To elaborate on this: to my knowledge, there's no portable and reliable way to
close all open file descriptors.
Even with the current code, it's still possible that some FDs aren't properly
closed, since sysconf(_SC_OPEN_MAX) often just returns the RLIMIT_NOFILE soft
limit, which the application might have lowered after having opened a whole
bunch of files.
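
For illustration, here's a minimal sketch of that pitfall. It assumes a
Linux/glibc-style system where sysconf(_SC_OPEN_MAX) is derived from the
current RLIMIT_NOFILE soft limit; the value 256 is arbitrary:

import os
import resource

# On such systems, SC_OPEN_MAX mirrors the RLIMIT_NOFILE soft limit, so
# descriptors opened above a later, lower limit become invisible to code
# that scans up to SC_OPEN_MAX (as close_fds does).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(os.sysconf("SC_OPEN_MAX") == soft)  # typically True

resource.setrlimit(resource.RLIMIT_NOFILE, (256, hard))  # assumes hard >= 256
print(os.sysconf("SC_OPEN_MAX"))  # now reports 256 on such systems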
Anyway, if the performance hit is too high, I think you only have two options:
- set your FDs as CLOEXEC (note that this is already the case for the FDs used
by popen and friends) and don't pass the close_fds argument; a sketch follows
the example below
- if you don't need many FDs, you could explicitly set RLIMIT_NOFILE to a low
value so that close_fds doesn't need to try too many FDs, e.g.:
import resource
# Lower the limit so close_fds only has to scan 1024 descriptors.
resource.setrlimit(resource.RLIMIT_NOFILE, (1024, 1024))
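
And for the first option, a minimal sketch of marking a descriptor
close-on-exec with fcntl, so exec'ed children close it automatically
(the path is just a placeholder):

import fcntl
import os

# Open an arbitrary file and set FD_CLOEXEC on its descriptor: the kernel
# then closes it across exec(), with no need for close_fds.
fd = os.open("/tmp/example.log", os.O_WRONLY | os.O_CREAT, 0o644)
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)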

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue11284>
_______________________________________