[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Tim Peters
Changes by Tim Peters: resolution: -> not a bug; stage: -> resolved; status: open -> closed

[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Max
Max added the comment: Yes, this makes sense. My bad: I didn't realize the writer processes might need to wait until the queue is consumed before they can exit. I don't think there's any need to update the docs either; nobody should have production code that never reads the queue (mine was a test of some other issue).
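For reference, the pattern that avoids the hang is to drain the queue before joining the writers. A minimal sketch (the worker/result counts and the simulate() body are reconstructions of the original report's script, which the digest truncates below):

    from multiprocessing import Process, Queue

    def simulate(q, n_results):
        for i in range(n_results):
            q.put(i)

    if __name__ == '__main__':
        n_workers, n_results = 2, 10000
        q = Queue()
        procs = [Process(target=simulate, args=(q, n_results))
                 for _ in range(n_workers)]
        for p in procs:
            p.start()
        # Drain the queue first, so each child's feeder thread can finish
        # flushing its buffer into the pipe and the child can exit.
        for _ in range(n_workers * n_results):
            q.get()
        for p in procs:
            p.join()  # now returns promptly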

[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Eryk Sun
Eryk Sun added the comment: On Windows the "QueueFeederThread" in each child process is blocked in WaitForMultipleObjects in PipeConnection._send_bytes. The pipe buffer size is 8 KiB, and each pickled int is 5-6 bytes. With 2 processes the pipe is full after sending (256 + 469) * 2 == 1450 ints.
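The 5-6 byte figure is easy to verify (assuming pickle protocol 3, the default in the Python 3.x releases current at the time):

    import pickle

    # ints 0-255 pickle via BININT1, which takes a 1-byte argument
    print(len(pickle.dumps(255)))    # 5 bytes: PROTO + BININT1 + arg + STOP
    # ints 256-65535 use BININT2, which takes a 2-byte argument
    print(len(pickle.dumps(1000)))   # 6 bytes: PROTO + BININT2 + arg + STOP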

[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Raymond Hettinger
Changes by Raymond Hettinger: nosy: +davin

[issue29797] Deadlock with multiprocessing.Queue()

2017-03-11 Thread Tim Peters
Tim Peters added the comment: I think this is expected. Add this as the first line of simulate() and the problem should go away:

    q.cancel_join_thread()

As the docs say, a Queue works with a background thread, which feeds incoming data from an internal buffer to an (interprocess) pipe.
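Applied to the worker from the original report (quoted at the end of this thread), the workaround would look like the sketch below. Note the trade-off: cancel_join_thread() lets the process exit without waiting for the feeder thread to flush, so any items still sitting in the internal buffer can be lost.

    def simulate(q, n_results):
        q.cancel_join_thread()  # allow this process to exit even if the
                                # feeder thread hasn't flushed everything
        for i in range(n_results):
            q.put(i)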

[issue29797] Deadlock with multiprocessing.Queue()

2017-03-11 Thread Max
New submission from Max: Using multiprocessing.Queue() with several processes writing very fast results in a deadlock both on Windows and UNIX. For example, this code:

    from multiprocessing import Process, Queue, Manager
    import time, sys

    def simulate(q, n_results):
        for i in range(n_results):
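The digest cuts the script off here. A minimal completion consistent with the description and the follow-up comments (the put loop and the main() driver, including the worker and result counts, are reconstructions) that reproduces the hang by joining the writers without ever reading the queue:

    from multiprocessing import Process, Queue

    def simulate(q, n_results):
        for i in range(n_results):
            q.put(i)  # the feeder thread pushes these into the pipe

    if __name__ == '__main__':
        q = Queue()
        procs = [Process(target=simulate, args=(q, 10000)) for _ in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            # Hangs: each child's feeder thread is still blocked writing
            # into the full pipe, so the child never exits and join()
            # never returns.
            p.join()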