Greg Brockman <g...@mit.edu> added the comment:

Ah, you're right -- sorry, I had misread your code. I hadn't noticed
the use of worker_pids; that explains what you're doing with the ACKs.

The problem is that I think doing it this way introduces some races
(which is why I introduced the ACK from the task handler in my most
recent patch). What happens if:

- A worker removes a job from the queue and is killed before sending
an ACK.
- A worker removes a job from the queue, sends an ACK, and is then
killed; due to bad luck with the scheduler, the parent cleans up the
worker before it has recorded the worker's pid.
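To make those two windows concrete, here's a minimal sketch of the
worker side. This is illustrative only -- inqueue, ack_queue, and the
job object's id/run attributes are stand-ins, not names from your
patch:

"""
import os

def worker_loop(inqueue, ack_queue):
    while True:
        job = inqueue.get()  # the job now exists only in this process
        # Window 1: SIGKILL here and the job is simply gone -- no ACK
        # was ever sent, so the parent never learns it was dequeued.
        ack_queue.put((os.getpid(), job.id))
        # Window 2: SIGKILL here and the ACK is in flight, but if the
        # parent reaps this worker before recording its pid in
        # worker_pids, the ACK matches no known worker and the job is
        # again lost.
        job.run()
"""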
You're now reading from self._cache in one thread but writing to it
in another. What happens if a worker sends a result and is then
killed? Again, I haven't thought too hard about what will happen
here, so if you have a correctness argument for why it's safe as-is
I'd be happy to hear it. (A sketch of the access pattern I'm worried
about is at the end of this message.)

Also, I just noticed that your current way of dealing with child
deaths doesn't play well with the maxtasksperchild variable. In
particular, try running:

"""
import multiprocessing

def foo(x):
    return x

multiprocessing.Pool(1, maxtasksperchild=1).map(foo, [1, 2, 3, 4])
"""

(This should be an easy fix.)
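For reference, the self._cache pattern I mean looks roughly like
this. It's a minimal sketch loosely modeled on the Pool internals,
not on your patch; cache, set/set_error, and reap_dead_worker are
illustrative names:

"""
cache = {}  # job id -> pending result, shared between threads

def result_handler(outqueue):
    # Result-handler thread: deliver results as workers report them.
    while True:
        job_id, value = outqueue.get()
        cache[job_id].set(value)  # KeyError if the reaper thread has
                                  # already popped this entry

def reap_dead_worker(job_id):
    # Worker-reaper thread: the worker owning job_id died; fail the
    # job so callers don't block forever.
    pending = cache.pop(job_id, None)  # races the lookup above
    if pending is not None:
        pending.set_error(RuntimeError("worker died"))
"""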