What dangers do you refer to specifically? Something reproducible?
-L
Since it's a race condition, it's not easily reproduced with
normal libraries, which hold threading locks only for brief moments.
But it can appear if your threads make heavy use of the threading module.
By forking at a random moment, you risk freezing the main locks of the
logging module in an "acquired" state (even though their owner
threads no longer exist in the child process), and your next attempt to
use logging will result in a nice deadlock (on some *nix platforms, at
least). This issue led to the creation of python-atfork, by the way.
Stefan Behnel wrote:
Stefan Behnel, 30.01.2010 07:36:
Pascal Chambon, 29.01.2010 22:58:
I've just recently realized the huge problems surrounding the mix of
multithreading and fork() - i.e. that only the main thread actually
survives the fork(), and that process data (in particular,
synchronization primitives) can be left in a dangerously broken state
by such forks in multithreaded programs.
I would *never* have even tried that, but it doesn't surprise me that it
works basically as expected. I found this as a quick intro:
http://unix.derkeiler.com/Newsgroups/comp.unix.programmer/2003-09/0672.html
... and another interesting link that also describes exec() usage in this
context.
http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them
Stefan
Yep, these links sum it up quite well.
But to me it's not a matter of "trying" to mix threads and fork - most
people won't seek trouble on purpose.
It's simply that, in a multithreaded program (i.e. any program
of some importance), the multiprocessing module is impossible to use
safely without complex synchronization of all threads to prepare for the
underlying fork (and we know that using multiprocessing can be a
serious benefit, for GIL/performance reasons).
Solutions to the fork() issues clearly exist - just add a "use_forking=yes"
attribute to subprocess functions, and users will be free to use the
spawnl() semantic, which is already implemented on win32 platforms and
gives full control over both threads and subprocesses. Honestly, I
don't see how it would complicate things, except slightly for the
programmer, who would have to edit the code to add spawnl() support (I
might help with that).
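A sketch of the spawnl() semantic mentioned above, using os.spawnl(), which starts a fresh process image rather than running Python code in a forked copy of the parent:

```python
import os
import sys

# os.spawnl(mode, path, *args) launches a new program directly
# (the win32-style semantic); P_WAIT blocks until the child exits
# and returns its exit status.
rc = os.spawnl(os.P_WAIT, sys.executable, "python", "-c",
               "print('spawned child')")
print("exit status:", rc)
```

Because no Python code runs between fork and exec (or at all in a forked copy, on win32), inherited lock state is never an issue for the child.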
Regards,
Pascal
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev