New submission from Samuel Grayson :
If all processes try to close the Queue immediately after someone has written
to it, this causes [an error][1] (see the link for more details). Uncommenting
any of the `time.sleep`s makes it work consistently again.
    import multiprocessing
    import time
    import logging
    import multiprocessing.util

    multiprocessing.util.log_to_stderr(level=logging.DEBUG)

    queue = multiprocessing.Queue(maxsize=10)

    def worker(queue):
        queue.put('abcdefghijklmnop')
        # "Indicate that no more data will be put on this queue by the
        # current process." --Documentation
        # time.sleep(0.01)
        queue.close()

    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()

    # "Indicate that no more data will be put on this queue by the current
    # process." --Documentation
    # time.sleep(0.01)
    queue.close()
    proc.join()
Perhaps I am misunderstanding the documentation, but in that case I would
contend this is a documentation bug.
    Traceback (most recent call last):
      File "/usr/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
        send_bytes(obj)
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
        self._send_bytes(m[offset:offset + size])
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
        self._send(header + buf)
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 368, in _send
        n = write(self._handle, buf)
    BrokenPipeError: [Errno 32] Broken pipe
[1]: https://stackoverflow.com/q/51680479/1078199
--
assignee: docs@python
components: Documentation, Library (Lib)
messages: 334490
nosy: charmonium, docs@python
priority: normal
severity: normal
status: open
title: Calling `Multiprocessing.Queue.close()` too quickly causes intermittent failure (BrokenPipeError)
versions: Python 3.7
___
Python tracker
<https://bugs.python.org/issue35844>
___