[issue29168] multiprocessing pickle error
Simon Schuler added the comment:

I don't have any lock object of my own. I just use a multiprocessing pool and a QueueHandler so that I can log from all processes.

--
___
Python tracker <http://bugs.python.org/issue29168>
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Simon Schuler added the comment:

I want to handle logging from the main process and from all the processes I start; they should all log to the same Queue. Have a look at the sample.py program. In addition, there is an inconsistency between using a multiprocessing pool and using the Process class directly; the sample program illustrates this.
Simon Schuler added the comment:

Attached is a sample program that illustrates the problem: the exception is raised when I use a multiprocessing pool.

--
Added file: http://bugs.python.org/file46160/sample.py
New submission from Simon Schuler:

Hello, the following code no longer works in the new Python version 3.6:

    import multiprocessing
    import logging
    import logging.handlers
    import pickle

    queue = multiprocessing.Manager().Queue(-1)
    qh = logging.handlers.QueueHandler(queue)
    pickle.dumps(qh)

It raises the following exception:

    TypeError: can't pickle _thread.RLock objects

Furthermore, it also no longer works for customized logging handler classes:

    class CustomHandler(logging.Handler):
        def __init__(self, queue):
            logging.Handler.__init__(self)
            self.queue = queue

        def emit(self, record):
            try:
                ei = record.exc_info
                if ei:
                    dummy = self.format(record)  # force formatting of the traceback
                    record.exc_info = None
            except (KeyboardInterrupt, SystemExit):
                raise
            except:
                self.handleError(record)

For a centralized logging facility in a multiprocess environment this is a big problem. How can I handle this in the 3.6 version?

--
messages: 284738
nosy: cxss
priority: normal
severity: normal
status: open
title: multiprocessing pickle error
versions: Python 3.6