Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution:  -> later
stage: patch review -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10037
___
Ask Solem a...@celeryproject.org added the comment:
Well, I still don't know exactly why restarting the socket read made it work,
but the patch solved an issue where newly started pool processes would be stuck
in socket read forever (happening to maybe 1 in 500 new processes).
Richard Oudkerk shibt...@gmail.com added the comment:
I think this issue can be closed; the worker handler is simply borked, and
we could open a new issue deciding how to fix it (merging billiard.Pool
or something else).
OK. I am not sure which option under Resolution should be chosen.
Ask Solem a...@celeryproject.org added the comment:
Later works, or just close it. I can open up a new issue to merge the
improvements in billiard later.
The execv stuff certainly won't go in by Py3.3. There has not been
consensus that adding it is a good idea.
Richard Oudkerk shibt...@gmail.com added the comment:
Ah, a working 'fork server' would be just as good.
The only problem is that it depends on fd passing, which is apparently broken on
Mac OS X.
Btw, Billiard now supports running Pool without threads, using
epoll/kqueue/select instead.
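For reference, a fork server of the kind mentioned above later landed in the stdlib as the "forkserver" start method (Python 3.4+, POSIX only). A minimal sketch of using it (the task function here is illustrative):

```python
import multiprocessing

def task(x):
    return x + 1

if __name__ == "__main__":
    # "forkserver": a clean helper process is started up front and
    # forks a fresh worker for each request; file descriptors are
    # passed to it over a Unix socket, which is why fd passing must
    # work reliably on the platform (the Mac OS X concern above).
    ctx = multiprocessing.get_context("forkserver")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(task, [1, 2, 3]))  # [2, 3, 4]
```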
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
Richard Oudkerk shibt...@gmail.com added the comment:
It is not clear to me how to reproduce the bug.
When you say "letting the workers terminate themselves", do you mean calling
sys.exit() or os._exit() in the submitted task? Are you trying to get the
result of a task which caused the worker to exit?
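For what it's worth, a minimal sketch of the scenario being asked about (the task name and timeout are illustrative, not from the original report): a task that calls os._exit() kills the worker before it can send a result back, so a plain result.get() would block forever.

```python
import multiprocessing
import os

def suicidal_task():
    # The worker dies here without ever writing a result to the
    # result queue, so the parent never hears back for this job.
    os._exit(1)

if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        result = pool.apply_async(suicidal_task)
        try:
            # Without a timeout this get() hangs forever, since the
            # result for this task is simply lost.
            result.get(timeout=2)
        except multiprocessing.TimeoutError:
            print("worker exited before returning a result")
```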
Sean Reifschneider j...@tummy.com added the comment:
The attached patch does change the semantics somewhat, but I don't fully
understand how much. In particular:
It changes the get() call into get(timeout=1.0) if inqueue doesn't have a
_reader attribute.
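As a rough sketch of the behaviour the patch is described as introducing (the function and constant names here are made up for illustration, not taken from the patch): poll with a timeout instead of blocking indefinitely when inqueue has no _reader.

```python
import queue

POLL_TIMEOUT = 1.0  # the 1.0-second timeout mentioned in the patch description

def get_task(inqueue):
    # If the queue exposes a _reader Connection, keep the original
    # blocking get(); otherwise fall back to a timed get() in a loop,
    # so a worker periodically wakes up instead of blocking forever
    # on a message that may never arrive.
    if hasattr(inqueue, "_reader"):
        return inqueue.get()
    while True:
        try:
            return inqueue.get(timeout=POLL_TIMEOUT)
        except queue.Empty:
            continue
```

A plain queue.Queue has no _reader attribute, so it exercises the timed branch.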
Changes by Terry J. Reedy tjre...@udel.edu:
--
versions: -Python 3.1
Changes by Nir Aides n...@winpdb.org:
--
nosy: +nirai
Ray.Allen ysj@gmail.com added the comment:
Could you give example code that reproduces this issue?
--
nosy: +ysj.ray
New submission from Ask Solem a...@opera.com:
While working on an autoscaling (yes, people call it that...) feature for
Celery, I noticed that the processes created by the _handle_workers thread
don't always work. I have reproduced this in general by just using the
maxtasksperchild option.
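A hedged sketch of a reproduction along the lines described (the original script isn't shown in this thread, so the details here are assumptions): maxtasksperchild=1 retires a worker after every task, forcing the _handle_workers thread to continually start replacements, which is the respawn path where the hang was reported.

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # Every task retires its worker, so _handle_workers must keep
    # starting replacements; on affected versions one of those
    # replacements would occasionally come up wedged in its
    # socket read and never pick up work.
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        print(pool.map(square, range(10)))
```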