Ask Solem added the comment:
Perhaps we could add a self._finally to the event loop itself?
Like loop._ready, but a list of callbacks run_until_complete will call before
returning?
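A rough sketch of that idea (the `call_finally` name and the subclassing approach are my own illustration, not an existing asyncio API):

```python
import asyncio

class FinallyLoop(asyncio.SelectorEventLoop):
    """Event loop with callbacks that run just before
    run_until_complete() returns (sketch of the proposed hook)."""

    def __init__(self):
        super().__init__()
        self._finally = []   # like loop._ready, but drained on exit

    def call_finally(self, callback):
        self._finally.append(callback)

    def run_until_complete(self, future):
        try:
            return super().run_until_complete(future)
        finally:
            # Drain the registered callbacks before returning.
            while self._finally:
                self._finally.pop(0)()

async def main():
    return 42

loop = FinallyLoop()
calls = []
loop.call_finally(lambda: calls.append("cleanup"))
result = loop.run_until_complete(main())
loop.close()
assert result == 42 and calls == ["cleanup"]
```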
--
___
Python tracker
<https://bugs.python.org/issue36
Ask Solem added the comment:
Ah, so the extra call_soon means it needs a:
[code]
loop.run_until_complete(asyncio.sleep(0))
[/code]
before the self.assertTrue(it.finally_executed)
to finish executing agen.close().
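The extra tick can be shown in isolation: a call_soon() callback only runs once the loop is driven again, and a zero-length sleep is enough to drain the ready queue:

```python
import asyncio

loop = asyncio.new_event_loop()
ran = []
# Schedule a callback; it will not run until the loop is driven.
loop.call_soon(ran.append, True)
assert ran == []
# One zero-length sleep gives the ready queue a chance to run.
loop.run_until_complete(asyncio.sleep(0))
assert ran == [True]
loop.close()
```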
Why is create_task different? Does it execute an iteration
Thank you for your support,
--
[ Ask Solem - github.com/ask | twitter.com/asksol ]
--
https://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation:
http://www.python.org/psf/donations/
Ask Solem added the comment:
This patch is quite dated now and I have fixed many bugs since. The feature is
available in billiard and is working well, but the code has diverged quite a lot
from Python trunk. I will be updating billiard to reflect the changes for
Python 3.4 soon (billiard
Ask Solem added the comment:
I vote to close too as it's very hard to fix in a clean way.
A big problem, though, is that there is a standard for defining exceptions, one
that also ensures the exception is pickleable (always call Exception.__init__
with the original args), but it is not documented
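The convention in question: always pass the original constructor arguments through to Exception.__init__, so pickle can rebuild the instance from exc.args. A minimal illustration:

```python
import pickle

class GoodError(Exception):
    """Follows the convention: Exception.__init__ gets the original args."""
    def __init__(self, code, message):
        super().__init__(code, message)
        self.code = code

class BadError(Exception):
    """Breaks the convention: exc.args can't reconstruct the instance."""
    def __init__(self, code, message):
        super().__init__(message)   # `code` is lost from exc.args
        self.code = code

# The well-behaved exception survives a pickle round trip.
err = pickle.loads(pickle.dumps(GoodError(42, "boom")))
assert err.code == 42

# The other one pickles, but can't be reconstructed on the other side.
try:
    pickle.loads(pickle.dumps(BadError(42, "boom")))
    roundtripped = True
except TypeError:
    roundtripped = False
assert not roundtripped
```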
and contributors!
- http://celeryproject.org/
--
Ask Solem
twitter.com/asksol | +44 (0)7713357179
signature.asc
Description: Message signed with OpenPGP using GPGMail
--
http://mail.python.org/mailman/listinfo/python-list
Ask Solem a...@celeryproject.org added the comment:
Well, I still don't know exactly why restarting the socket read made it work,
but the patch solved an issue where newly started pool processes would be stuck
in a socket read forever (happening to maybe 1 in 500 new processes).
This and a dozen
Ask Solem a...@celeryproject.org added the comment:
Later works, or just close it. I can open up a new issue to merge the
improvements in billiard later.
The execv stuff certainly won't go in by Py3.3. There has not been
consensus that adding it is a good idea.
(I also have the unit
Ask Solem a...@celeryproject.org added the comment:
@swindmill, if you provide a doc/test patch then this can probably be merged.
@pitrou, We could change it to `setup_queues`, though I don't think
even changing the name of private methods is a good idea. It could simply be
an alias
Ask Solem a...@celeryproject.org added the comment:
I have suspected that this may be necessary, not merely useful, for some
time, and issue6721 seems to verify that. In addition to adding the keyword
arg to Process, it should also be added to Pool and Manager.
Is anyone working
Ask Solem a...@celeryproject.org added the comment:
How would you replace the following functionality
with the multiple with statement syntax:
x = (A(), B(), C())
with nested(*x) as context:
It seems to me that nested() is still useful for this particular
use case
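For the record, contextlib.ExitStack (added later, in Python 3.3) covers exactly this variable-number-of-managers case; a sketch with stand-in context managers:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def make_cm(name):
    # Stand-in for the A(), B(), C() context managers above.
    yield name

cms = (make_cm("A"), make_cm("B"), make_cm("C"))

with ExitStack() as stack:
    # enter_context() handles an arbitrary, runtime-determined
    # number of context managers, like nested(*x) did.
    context = [stack.enter_context(cm) for cm in cms]
    assert context == ["A", "B", "C"]
```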
Ask Solem a...@celeryproject.org added the comment:
This is great! I always wondered if it was really necessary to use C for this.
10µs overhead should be worth it ;)
I've read the patch, but not carefully. So far nothing jumps out at me either.
Cheers
://groups.google.com/group/celery-users
:IRC: #celery at irc.freenode.net.
--
{Ask Solem | twitter.com/asksol }.
--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation:
http://www.python.org/psf/donations/
Ask Solem a...@opera.com added the comment:
While it makes sense for `join` to raise an error on timeout, that could
possibly break existing code, so I don't think that is an option. Adding a
note in the documentation would be great.
--
___
Python
Ask Solem a...@opera.com added the comment:
Ah, this is something I've seen as well; it's part of a bug that I haven't
created an issue for yet.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10305
Ask Solem a...@opera.com added the comment:
Since you can't specify the return code, `self.terminate` is less flexible than
`sys.exit`.
I think the original intent is clear here, the method is there for the parent
to control the child. You are of course welcome to argue otherwise
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7292
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5930
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
resolution: -> invalid
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9733
Ask Solem a...@opera.com added the comment:
What is the status of this issue? There are several platforms listed here,
which I unfortunately don't have access to.
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Ask Solem a...@opera.com added the comment:
Can't reproduce on Python 2.7, but can indeed reproduce on 2.6. Issue fixed?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9955
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10133
___
___
Python-bugs-list mailing list
Ask Solem a...@opera.com added the comment:
It seems that Process.terminate is not meant to be used by the child, but only
the parent.
From the documentation:
Note that the start(), join(), is_alive() and exitcode methods
should only be called by the process that created the process
Changes by Ask Solem a...@opera.com:
--
resolution: -> invalid
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5573
Ask Solem a...@opera.com added the comment:
Pickling on put makes sense to me. I can't think of cases where this could
break existing code either. I think this may also resolve issue 8323
--
stage: -> unit test needed
___
Python tracker rep
Ask Solem a...@opera.com added the comment:
Updated doc patch
--
nosy: +asksol
Added file: http://bugs.python.org/file19350/issue-4999.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4999
Ask Solem a...@opera.com added the comment:
AFAICS the object will be pickled twice with this patch.
See Modules/_multiprocessing/connection.h: connection_send_obj.
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Ask Solem a...@opera.com added the comment:
aha, no. I see now you use connection.send_bytes instead.
Then I can't think of any issues with this patch, but I don't know why
it was done this way in the first place.
--
___
Python tracker rep
Ask Solem a...@opera.com added the comment:
Queue uses multiprocessing.util.Finalize, which uses weakrefs to track when the
object is out of scope, so this is actually expected behavior.
IMHO it is not a very good approach, but changing the API to use explicit close
methods is a little late
Ask Solem a...@opera.com added the comment:
Matthew, would you be willing to write tests + documentation for this?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
Ask Solem a...@opera.com added the comment:
I can't seem to reproduce this on trunk...
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7474
Ask Solem a...@opera.com added the comment:
I don't know about the socket internals, but I find the behavior
acceptable. It may not be feasible to change it now anyway, as there may be
people already depending on it (e.g. not handling errors occurring at poll
Ask Solem a...@opera.com added the comment:
Please add the traceback, I can't seem to find any obvious places where this
would happen now.
Also, what version are you currently using?
I agree with the fileno, but I'd say close is a reasonable method to implement,
especially for stdin/stdout
Changes by Ask Solem a...@opera.com:
--
resolution: -> invalid
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10128
Ask Solem a...@opera.com added the comment:
Is this on Windows? Does it work for you now?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10128
://pypi.python.org/pypi/django-celery/2.1.0
--
{Ask Solem,
+47 98435213 | twitter.com/asksol }.
--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation:
http://www.python.org/psf/donations/
New submission from Ask Solem a...@opera.com:
While working on an autoscaling (yes, people call it that...) feature for
Celery, I noticed that the processes created by the _handle_workers thread
don't always work. I have reproduced this in general, by just using the
maxtasksperchild
Ask Solem a...@opera.com added the comment:
Could you please reduce this to the shortest possible example that reproduces
the problem?
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8028
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8094
___
___
Python-bugs-list mailing list
Ask Solem a...@opera.com added the comment:
Did you finish the code to reproduce the problem?
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8144
Ask Solem a...@opera.com added the comment:
Maybe surprising but not so weird if you think about what happens
behind the scenes.
When you do:
x = man.list()
x.append({})
you send an empty dict to the manager to be appended to x.
When you then do:
x[0]
you get {} back, because you receive a local copy, not a reference to the
dict held by the manager.
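That round trip can be demonstrated directly (the fork start method is used here only to keep the sketch guard-free; with spawn you would need the usual __main__ protection):

```python
import multiprocessing

ctx = multiprocessing.get_context("fork")
man = ctx.Manager()

x = man.list()
x.append({})          # sends an empty dict to the manager process

d = x[0]              # a local copy comes back, not a proxy
d["key"] = "value"    # mutates only the local copy

assert x[0] == {}     # the dict held by the manager is unchanged

x[0] = d              # reassigning sends the updated copy back
final = x[0]
assert final == {"key": "value"}
man.shutdown()
```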
Ask Solem a...@opera.com added the comment:
I created a small doc patch for this (attached).
--
keywords: +needs review, patch
nosy: +asksol
versions: +Python 3.1 -Python 2.6
Added file: http://bugs.python.org/file18967/multiprocessing-issue7707.patch
Ask Solem a...@opera.com added the comment:
I expected I could iterate over a DictProxy as I do over a
regular dict.
DictProxy doesn't support iterkeys(), itervalues(), or iteritems() either.
So while
iter(d)
could do
iter(d.keys())
behind the scenes, it would mask the fact
Ask Solem a...@opera.com added the comment:
As no one has been able to confirm that this is still an issue, I'm closing it
as out of date. The issue can be reopened if necessary.
--
resolution: accepted -> out of date
status: open -> closed
___
Python
Ask Solem a...@opera.com added the comment:
As no one is able to confirm that this is still an issue, I'm closing it. It
can be reopened if necessary.
--
resolution: -> out of date
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Changes by Ask Solem a...@opera.com:
--
resolution: -> postponed
stage: unit test needed -> needs patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3735
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4892
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3831
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5501
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
keywords: +needs review
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8534
___
___
Python
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
stage: -> needs patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3093
___
___
Python-bugs
Ask Solem a...@opera.com added the comment:
Are there really any test/doc changes needed for this?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
Changes by Ask Solem a...@opera.com:
--
stage: needs patch -> unit test needed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
___
___
Python
Ask Solem a...@opera.com added the comment:
Does the problem make sense/do you have any ideas for an alternate
solution?
Well, I still haven't given up on the trackjobs patch. I changed it to use a
single queue for both the acks and the result (see new patch attached:
multiprocessing-tr
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5573
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3125
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3111
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6056
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6362
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6417
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3518
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6653
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7123
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7060
___
___
Python-bugs-list mailing list
Ask Solem a...@opera.com added the comment:
Duplicate of 3518?
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5862
___
___
Python
Ask Solem a...@opera.com added the comment:
This is a nice feature, but it's also very specific and can be implemented
by extending what's already there.
Could you make a patch for this that applies to the py3k branch? If no one has
the time for this, then we should probably just close
Ask Solem a...@opera.com added the comment:
New patch attached (termination-trackjobs3.patch).
Hmm, a few notes. I have a bunch of nitpicks, but those
can wait for a later iteration. (Just one style nit: I
noticed a few unneeded whitespace changes... please try
not to do that, as it makes
Ask Solem a...@opera.com added the comment:
- A worker removes a job from the queue and is killed before
sending an ACK.
Yeah, this may be a problem. I was thinking we could make sure the task is
acked before child process shutdown. Kill -9 is then not safe, but do we really
want
Ask Solem a...@opera.com added the comment:
By the way, I'm also working on writing some simple benchmarks for the multiple
queues approach, just to see if there's an overhead to worry about at all.
--
___
Python tracker rep...@bugs.python.org
http
Ask Solem a...@opera.com added the comment:
On closer look your patch is also ignoring SystemExit. I think it's beneficial
to honor SystemExit, so a user could use this as a means to replace the current
process with a new one.
If we keep that behavior, the real problem here is that the
result
Ask Solem a...@opera.com added the comment:
This is related to our discussions at #9205 as well
(http://bugs.python.org/issue9205), as the final patch there will also fix this
issue.
--
___
Python tracker rep...@bugs.python.org
http
Ask Solem a...@opera.com added the comment:
@greg
Been very busy lately, just had some time now to look at your patch.
I'm very ambivalent about using one SimpleQueue per process. What is the reason
for doing that?
--
___
Python tracker rep
Ask Solem a...@opera.com added the comment:
A potential implementation is in termination.patch. Basically,
try to shut down gracefully, but if you timeout, just give up and
kill everything.
You can't have a sensible default timeout, because the worker may be processing
something important
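The graceful-then-kill pattern can be sketched with plain Process objects (the timeout here is arbitrary, and `stop` is an illustrative helper, not part of the patch; fork is used to keep the sketch guard-free):

```python
import time
import multiprocessing

def _worker():
    time.sleep(60)    # stands in for a task that won't finish in time

def stop(proc, timeout):
    """Try a graceful join; on timeout, give up and kill."""
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()   # the "kill everything" step
        proc.join()

ctx = multiprocessing.get_context("fork")
p = ctx.Process(target=_worker)
p.start()
stop(p, timeout=0.2)
assert not p.is_alive()
```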
Ask Solem a...@opera.com added the comment:
At first glance, looks like there are a number of sites where you don't
change the blocking calls to non-blocking calls (e.g. get()). Almost all of
the get()s have the potential to be called when there is no possibility for
them to terminate.
I
Ask Solem a...@opera.com added the comment:
Btw, the current problem with termination3.patch seems to be that the
MainProcess somehow appears in self._pool. I have no idea how it gets there.
Maybe some unrelated issue that appears when forking that late in the tests
Ask Solem a...@opera.com added the comment:
but if you make a blocking call such as in the following program,
you'll get a hang
Yeah, and for that we could use the same approach as for the maps.
But, I've just implemented the accept callback approach, which should be
superior. Maps/Apply
Changes by Ask Solem a...@opera.com:
Added file:
http://bugs.python.org/file18026/multiprocessing-tr...@82502-termination-trackjobs.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
Ask Solem a...@opera.com added the comment:
Greg,
Before I forget, looks like we also need to deal with the
result from a worker being unpickleable:
This is what my patch in bug 9244 does...
Yep. Again, as things stand, once you've lost a worker,
you've lost a task, and you can't
Ask Solem a...@opera.com added the comment:
Ok. I implemented my suggestions in the patch attached
(multiprocessing-tr...@82502-termination2.patch)
What do you think?
Greg, Maybe we could keep the behavior in termination.patch as an option for
map jobs? It is certainly a problem that map jobs
Ask Solem a...@opera.com added the comment:
Updated patch with Greg's suggestions.
(multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch)
--
Added file:
http://bugs.python.org/file18014/multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch
Changes by Ask Solem a...@opera.com:
Removed file:
http://bugs.python.org/file18013/multiprocessing-tr...@82502-termination2.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
Ask Solem a...@opera.com added the comment:
Just some small cosmetic changes to the patch.
(added multiprocessing-tr...@82502-termination3.patch)
--
Added file:
http://bugs.python.org/file18015/multiprocessing-tr...@82502-termination3.patch
Ask Solem a...@opera.com added the comment:
Really? I could be misremembering, but I believe you deal
with the case of the result being unpickleable. I.e. you
deal with the put(result) failing, but not the get() in the
result handler.
Your example is demonstrating the pickle error on put
Ask Solem a...@opera.com added the comment:
There's one more thing:
[code]
if exitcode is not None:
    cleaned = True
    if exitcode != 0 and not worker._termination_requested:
        abnormal.append((worker.pid, exitcode))
[/code]
Instead of restarting crashed worker
Ask Solem a...@opera.com added the comment:
To be clear, the errback change and the unpickleable result
change are actually orthogonal, right?
Yes, it could be a separate issue. Jesse, do you think I should I open
up a separate issue for this?
Why not add an error_callback for map_async
Ask Solem a...@opera.com added the comment:
Jesse wrote,
We can work around the shutdown issue (really, bug 9207) by
ignoring the exception such as shutdown.patch does, or passing in
references/adding references to the functions those methods need. Or (as
Brett suggested) converting them
Ask Solem a...@opera.com added the comment:
I think I misunderstood the purpose of the patch. This is about handling errors
on get(), not on put() like I was working on. So sorry for that confusion.
What kind of errors are you having that makes the get() call fail?
If the queue is not working
New submission from Ask Solem a...@opera.com:
If the target function returns an unpickleable value the worker process
crashes. This patch tries to safely handle unpickleable errors, while enabling
the user to inspect such errors after the fact.
In addition a new argument has been added
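The underlying failure is easy to show without a pool: the result travels back to the parent via pickle, and some return values simply can't make the trip:

```python
import pickle

def job():
    # A lambda is a classic example of an unpickleable return value.
    return lambda: 42

result = job()
try:
    # This is what the worker does before sending the result back.
    pickle.dumps(result)
    crashed = False
except (pickle.PicklingError, AttributeError, TypeError):
    crashed = True

assert crashed
```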
Ask Solem a...@opera.com added the comment:
For reference I opened up a new issue for the put() case here:
http://bugs.python.org/issue9244
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
Changes by Ask Solem a...@opera.com:
--
title: multiprocessing.pool: Pool crashes if worker can't encode result (with
patch) -> multiprocessing.pool: Worker crashes if result can't be encoded
result (with patch)
___
Python tracker rep
Changes by Ask Solem a...@opera.com:
--
title: multiprocessing.pool: Worker crashes if result can't be encoded result
(with patch) -> multiprocessing.pool: Worker crashes if result can't be encoded
___
Python tracker rep...@bugs.python.org
http
New submission from Ask Solem a...@opera.com:
This patch adds the `waitforslot` argument to apply_async. If set to `True`,
apply_async will not return until there is a worker available to process the
job.
This is implemented by a semaphore that is released by the result handler
whenever
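The mechanism can be illustrated pool-agnostically: acquire a semaphore before submitting and release it when the result arrives (BoundedPool and all names here are illustrative, not from the actual patch):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class BoundedPool:
    """apply_async that blocks until a worker slot is free (sketch)."""

    def __init__(self, processes):
        self._pool = ThreadPoolExecutor(max_workers=processes)
        self._slots = threading.BoundedSemaphore(processes)

    def apply_async(self, fn, *args):
        self._slots.acquire()          # block until a slot is available
        fut = self._pool.submit(fn, *args)
        # Release the slot once the result is ready, like the
        # result handler described above.
        fut.add_done_callback(lambda f: self._slots.release())
        return fut

    def shutdown(self):
        self._pool.shutdown()

pool = BoundedPool(processes=2)
results = [pool.apply_async(pow, 2, n) for n in range(5)]
assert [f.result() for f in results] == [1, 2, 4, 8, 16]
pool.shutdown()
```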
Changes by Ask Solem a...@opera.com:
--
keywords: +patch
Added file:
http://bugs.python.org/file17985/multiprocessing-tr...@82502-apply-semaphore.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9248
Ask Solem a...@opera.com added the comment:
In termination.patch's result handler you've added:
while cache and thread._state != TERMINATE and not failed
Why are you terminating the second pass after finding a failed process?
Unpickleable errors and other errors occurring in the worker
Ask Solem a...@opera.com added the comment:
Unfortunately, if you've lost a worker, you are no
longer guaranteed that cache will eventually be empty.
In particular, you may have lost a task, which could
result in an ApplyResult waiting forever for a _set call.
More generally, my chief
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9207
___
___
Python-bugs-list mailing list
Changes by Ask Solem a...@opera.com:
--
nosy: +asksol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9162
___
___
Python-bugs-list mailing list
: http://groups.google.com/group/celery-users
:IRC: #celery at irc.freenode.net.
--
{Ask Solem,
twitter.com/asksol | github.com/ask }.
--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation:
http://www.python.org/psf/donations/
===
Celery 1.0 has been released!
===
We're happy to announce the release of Celery 1.0.
What is it?
===
Celery is a task queue/job queue based on distributed message passing.
It is focused on real-time operation, but supports