Gregory P. Smith <g...@krypto.org> added the comment:

subprocess has nothing to do with this bug.  subprocess is safe as of Python 
3.2 (and the subprocess32 backport for 2.x).  Its preexec_fn argument is 
already documented as an unsafe legacy feature.  If you want to replace 
subprocess, go ahead: write something new and post it on PyPI.  That is out of 
the scope of this issue.
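To make the preexec_fn point concrete: most historical uses of preexec_fn (such as calling os.setsid in the child) now have dedicated keyword arguments that run inside subprocess's safe C-level fork/exec path, with no Python code executing in the child. A minimal sketch using today's API (subprocess.run is Python 3.5+; capture_output is 3.7+):

```python
import os
import subprocess
import sys

# preexec_fn=os.setsid was a common (deadlock-prone) idiom; since Python 3.2,
# start_new_session=True achieves the same thing without running any Python
# code between fork and exec.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getsid(0))"],
    capture_output=True,
    text=True,
    start_new_session=True,
)
child_sid = int(result.stdout)  # the child is its own session leader
```

On POSIX, the child's session id will differ from the parent's, confirming the new session was created without preexec_fn.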

Look at the original message I opened this bug with.  I *only* want the 
standard library's use of locks not to be a source of deadlocks, as it is 
unacceptable for the standard library itself to force your code into a 
threads-only or fork-only programming style.  How we do that is irrelevant; I 
merely started the discussion with one suggestion.

Third party libraries are always free to hang their users however they see fit.

If you want to "log" something before deadlocking, writing directly to the 
stderr file descriptor is the best that can be done.  That is what exceptions 
that escape __del__ destructors do.
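As a minimal sketch of what "write directly to the stderr file descriptor" means in practice (the helper name is mine, not anything in the stdlib):

```python
import os

def emergency_log(msg):
    """Write msg straight to file descriptor 2 (stderr).

    os.write takes no Python-level locks and bypasses the logging module
    entirely, so it cannot deadlock in a forked child or in a __del__
    method where a lock may have been left held by another thread.
    Returns the number of bytes written.
    """
    data = msg.encode("utf-8", "backslashreplace") + b"\n"
    return os.write(2, data)
```

This is the same escape hatch the interpreter itself uses when reporting exceptions that escape __del__.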

logging, http.cookiejar, _strptime  - all use locks that could be dealt with in 
a sane manner to avoid deadlocks after forking.

Queue, concurrent.futures & threading.Condition  - may not make sense to fix, 
as these are pretty threading-specific as is and should just carry the "don't 
fork" caveats in their documentation.


(My *real* preference would be to remove os.fork() from the standard library.  
Not going to happen.)

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________