[issue36403] AsyncIterator on 3.7: __aiter__ no longer honors finally blocks

2019-03-23 Thread Ask Solem


Ask Solem  added the comment:

Perhaps we could add a self._finally to the event loop itself?
Like loop._ready, but a list of callbacks run_until_complete will call before 
returning?

--

___
Python tracker 
<https://bugs.python.org/issue36403>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36403] AsyncIterator on 3.7: __aiter__ no longer honors finally blocks

2019-03-22 Thread Ask Solem


Ask Solem  added the comment:

Ah, so the extra call_soon means it needs a:

[code]
loop.run_until_complete(asyncio.sleep(0))
[/code]

before the self.assertTrue(it.finally_executed)

to finish executing agen.close().

Why is create_task different? Does it execute an iteration of the generator 
immediately?

It seems good for this behavior to be consistent, but I'm not sure how 
difficult that would be.
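
A minimal sketch of the scheduling effect being discussed (the names here are hypothetical, not the actual test from the issue): when the async generator's close is driven by a task scheduled via call_soon, its finally block only runs after the loop gets another cycle, which the extra `run_until_complete(asyncio.sleep(0))` provides.

```python
import asyncio

finally_executed = False

async def agen():
    global finally_executed
    try:
        yield 1
    finally:
        finally_executed = True

loop = asyncio.new_event_loop()
g = agen()
# Advance the generator to its first yield.
loop.run_until_complete(g.__anext__())
# Schedule the close as a task (a call_soon under the hood) instead of
# awaiting it directly...
task = loop.create_task(g.aclose())
# ...so the finally block only runs once the loop turns again:
loop.run_until_complete(asyncio.sleep(0))
assert finally_executed
loop.run_until_complete(task)
loop.close()
```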

--

Python tracker <https://bugs.python.org/issue36403>



Release: Celery 4.0 (latentcall)

2016-11-08 Thread Ask Solem
  
I'm happy to announce that a new major version of Celery has been released!

  

To see the complete list of changes go here:

http://docs.celeryproject.org/en/latest/whatsnew-4.0.html

  

This is a massive release with over two years of changes.  
Not only does it come with many new features, but it also fixes  
a massive list of bugs, so in many ways you could call it  
our "Snow Leopard" release.  
  
The next major version of Celery will support Python 3.5 only, where  
we are planning to take advantage of the new asyncio library.  
  
This release would not have been possible without the support  
of my employer, Robinhood (we're hiring! http://robinhood.com).

  

It's important that you read the "What's new in Celery 4.0" document
before you upgrade.  The list of changes is far too big to fit in an email,
so please visit the documentation:

  

http://docs.celeryproject.org/en/latest/whatsnew-4.0.html

  

Thank you for your support,

\--

[ Ask Solem - github.com/ask | twitter.com/asksol ]

-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[issue9248] multiprocessing.pool: Proposal: waitforslot

2014-06-30 Thread Ask Solem

Ask Solem added the comment:

This patch is quite dated now and I have fixed many bugs since.  The feature is 
available in billiard and is working well, but the code has diverged quite a lot 
from Python trunk.  I will be updating billiard to reflect the changes for 
Python 3.4 soon (billiard currently tracks 3.3).

I think we can forget about taking individual patches from billiard for now,
and instead maybe merge the codebases at some point if there's interest.
We have a version of multiprocessing.Pool using async IO and one pipe per 
process that drastically improves performance
and also avoids the threads+forking issues (well, not the initial fork), but I 
have not yet adapted it to use the new asyncio module in 3.4.

So my suggestion is to close this and instead get a discussion going about 
combining our efforts.

--

Python tracker http://bugs.python.org/issue9248



[issue9592] Limitations in objects returned by multiprocessing Pool

2012-09-12 Thread Ask Solem

Ask Solem added the comment:

I vote to close too, as it's very hard to fix in a clean way.

A big problem, though, is that the convention for defining exceptions that 
ensures they are pickleable (always call Exception.__init__ with the original 
args) is not documented 
(http://docs.python.org/tutorial/errors.html#user-defined-exceptions).

Celery has an elaborate mechanism to rewrite unpickleable exceptions, but it's 
a massive workaround just to keep the workers running, and shouldn't be part of 
the stdlib.  It would help if the Python documentation mentioned this, though.

Related: 
http://docs.celeryproject.org/en/latest/userguide/tasks.html#creating-pickleable-exceptions
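
The convention referred to above fits in a few lines (class names here are illustrative): an exception whose __init__ does not forward all of its original arguments to Exception.__init__ pickles fine but fails to unpickle, because pickle reconstructs it by calling the class with self.args.

```python
import pickle

# Breaks the convention: only one of the two constructor args reaches
# Exception.__init__, so self.args == (msg,) and unpickling calls
# BadError(msg) -- a TypeError, since `code` is missing.
class BadError(Exception):
    def __init__(self, msg, code):
        super().__init__(msg)
        self.code = code

# Follows the convention: all original args are forwarded, so the
# pickle round-trip can reconstruct the instance.
class GoodError(Exception):
    def __init__(self, msg, code):
        super().__init__(msg, code)
        self.code = code

restored = pickle.loads(pickle.dumps(GoodError("boom", 42)))
assert restored.code == 42

try:
    pickle.loads(pickle.dumps(BadError("boom", 42)))
except TypeError:
    pass  # reconstruction fails as described above
```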

--

Python tracker http://bugs.python.org/issue9592



ANN: Celery 3.0 (chiastic slide) released!

2012-07-07 Thread Ask Solem
===
 Celery 3.0 (Chiastic Slide) Released!
===

Celery is a simple, flexible and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors,
you should come join us on IRC (freenode.net: #celery)
or our mailing-list (http://groups.google.com/group/celery-users).

To read more about Celery you should go read the introduction:
  - http://docs.celeryproject.org/en/latest/getting-started/introduction.html

If you use Celery in combination with Django you must also
read the django-celery changelog and upgrade to django-celery 3.0
  - http://github.com/celery/django-celery/tree/master/Changelog

This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

/*
You should read the full changelog which contains important notes at:

- http://docs.celeryproject.org/en/latest/whatsnew-3.0.html
*/

Highlights
==========

- A new and improved API, that is both simpler and more powerful.

Everyone must read the new first-steps tutorial,
and the new next-steps tutorial

  - http://bit.ly/celery-first-steps
  - http://bit.ly/celery-next-steps

Oh, and why not reread the user guide while you're at it :)

There are no current plans to deprecate the old API,
so you don't have to be in a hurry to port your applications.

- The worker is now thread-less, giving great performance improvements.

- The new Canvas makes it easy to define complex workflows.

Ever wanted to chain tasks together? This is possible, but
not just that, now you can even chain together groups and chords,
or even combine multiple chains.

Read more in the Canvas user guide:
  - http://docs.celeryproject.org/en/latest/userguide/canvas.html

- All of Celery's command line programs are now available from a single
umbrella command: ``celery``.

- This is the last version to support Python 2.5.

Starting with Celery 3.1, Python 2.6 or later is required.

- Support for the new librabbitmq C client.

Celery will automatically use the librabbitmq module
if installed, which is a very fast and memory-optimized
replacement for the amqplib module.

- Redis support is more reliable with improved ack emulation.

- Celery now always uses UTC

- Over 600 commits, 30k additions/36k deletions.

In comparison 1.0 to 2.0 had 18k additions/8k deletions.

Thank you to all users and contributors!

- http://celeryproject.org/



-- 
Ask Solem
twitter.com/asksol | +44 (0)7713357179



-- 
http://mail.python.org/mailman/listinfo/python-list


[issue10037] multiprocessing.pool processes started by worker handler stops working

2012-06-07 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

Well, I still don't know exactly why restarting the socket read made it work, 
but the patch solved an issue where newly started pool processes would be stuck 
in the socket read forever (happening to maybe 1 in 500 new processes).

This and a dozen other pool-related fixes are in my billiard fork of 
multiprocessing.  E.g., what you describe in your comment:
# trying res.get() would block forever
works in billiard, where res.get() will raise WorkerLostError in that
case.

https://github.com/celery/billiard/

Earlier commit history for the pool can be found in Celery:
https://github.com/ask/celery/commits/2.5/celery/concurrency/processes/pool.py

My eventual goal is to merge these fixes back into Python, but except
for people using Python 3.x, they would have to use billiard for quite some 
time anyway, so I don't feel in a hurry.


I think this issue can be closed; the worker handler is simply borked and we 
could open up a new issue deciding how to fix it (merging billiard.Pool or 
something else).

(Btw, Richard, you're sbt? I was trying to find your real name to give
you credit for the no_execv patch in billiard.)

--

Python tracker http://bugs.python.org/issue10037



[issue10037] multiprocessing.pool processes started by worker handler stops working

2012-06-07 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

Later works, or just close it.  I can open up a new issue to merge the 
improvements in billiard later.

> The execv stuff certainly won't go in by Py3.3.  There has not been 
> consensus that adding it is a good idea.
>
> (I also have the unit tests passing with a fork server: the server process 
> is forked at the beginning of the program and then forked children of the 
> server process are started on request.  It is about 10 times faster than 
> using execv, and almost as fast as simple forking.)

Ah, a working 'fork server' would be just as good.
Btw, Billiard now supports running Pool without threads, using 
epoll/kqueue/select instead.  So Celery uses that when it can be nonblocking, 
and execv when it can't.  It performs way better without threads, and in 
addition shutdown + replacing worker processes is much more responsive.  
Changing the default Pool is not going to happen, but including a simple 
select() based Pool would be possible, and then it could also easily work with 
Twisted, Eventlet, Gevent, etc. (especially now that the Connection is 
rewritten in pure Python).

--

Python tracker http://bugs.python.org/issue10037



[issue6407] multiprocessing Pool should allow custom task queue

2011-11-24 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

@swindmill, if you provide a doc/test patch then this can probably be merged.

@pitrou, We could change it to `setup_queues`, though I don't think
even changing the name of private methods is a good idea.  It could simply be 
an alias to `_setup_queues` or vice versa.

--

Python tracker http://bugs.python.org/issue6407



[issue8713] multiprocessing needs option to eschew fork() under Linux

2011-08-24 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

I have suspected that this may be necessary, not just merely useful, for some 
time, and issue6721 seems to verify that.  In addition to adding the keyword 
arg to Process, it should also be added to Pool and Manager.

Is anyone working on a patch? If not I will work on a patch asap.

--

Python tracker http://bugs.python.org/issue8713



[issue6288] Update contextlib.nested docs in light of deprecation

2011-07-27 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

How would you replace the following functionality
with the multiple with statement syntax:

x = (A(), B(), C())
with nested(*x) as context:
    ...

It seems to me that nested() is still useful for this particular
use case.
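
For what it's worth, this use case (a variable number of context managers built at runtime) is what contextlib.ExitStack, added later in Python 3.3, ended up covering; a sketch with stand-in context managers:

```python
from contextlib import ExitStack, contextmanager

# Stand-ins for the A(), B(), C() context managers in the example above.
@contextmanager
def cm(name, log):
    log.append(('enter', name))
    yield name
    log.append(('exit', name))

log = []
managers = [cm(n, log) for n in ('A', 'B', 'C')]
with ExitStack() as stack:
    # enter_context() enters each manager and registers its exit.
    context = [stack.enter_context(m) for m in managers]

assert context == ['A', 'B', 'C']
# Managers are exited in LIFO order, like nested() did:
assert log[3:] == [('exit', 'C'), ('exit', 'B'), ('exit', 'A')]
```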

--
nosy: +asksol

Python tracker http://bugs.python.org/issue6288



[issue11743] Rewrite PipeConnection and Connection in pure Python

2011-04-03 Thread Ask Solem

Ask Solem a...@celeryproject.org added the comment:

This is great!  I always wondered if it was really necessary to use C for this. 
10µs overhead should be worth it ;)

I've read the patch, but not carefully.  So far nothing jumps at me either.

Cheers!

--

Python tracker http://bugs.python.org/issue11743



[Ann] Celery 2.2 released!

2011-02-01 Thread Ask Solem Hoel
==
 Celery 2.2 is now available!
==

We're happy to announce the release of Celery 2.2.

Thanks to all contributors, testers and users, without whom
this release would not have been possible.

What is it?
===========

Celery is an asynchronous task queue/job queue based on distributed message
passing. It is focused on real-time operation, but supports scheduling as well.

The execution units, called tasks, are executed concurrently on one or
more worker nodes using multiprocessing, Eventlet, or gevent. Tasks can
execute asynchronously (in the background) or synchronously (wait until ready).

Celery is used in production systems to process millions of tasks a day.

Celery is written in Python, but the protocol can be implemented in any
language. It can also operate with other languages using webhooks.

The recommended message broker is RabbitMQ, but limited support for Redis,
Beanstalk, MongoDB, CouchDB, and databases (using SQLAlchemy or the Django ORM)
is also available.

Celery is easy to integrate with Django, Pylons and Flask, using the
django-celery, celery-pylons and Flask-Celery add-on packages.

Going to PyCon US 2011?
=======================

Then don't forget to attend Ryan Petrello's talk,
Distributed Tasks with Celery: http://us.pycon.org/2011/schedule/sessions/1/

What's new?
===========

* Eventlet and gevent support.

  Worker nodes can now use Eventlet/gevent as an alternative to
  multiprocessing.  You could run several nodes running different
  pool implementations and route tasks to the best tool for the job.

* Jython support!

* This is the last version supporting Python 2.4.

* Celery is now a class that can be instantiated, and the configuration is no
  longer global (see http://bit.ly/i6s3qK)

* Built-in support for virtual transports (ghettoq queues)

Virtual transports for Redis, Beanstalk, CouchDB, MongoDB,
SQLAlchemy and the Django ORM are now available by default,
and the implementations have been drastically improved.

* Now using Kombu instead of Carrot.

Kombu is the next generation messaging framework for Python.
See http://packages.python.org/kombu

* Multiple instances of event monitors can now run simultaneously (celeryev,
  celerymon, djcelerymon)

* Redis transport can now do remote control commands.

* Autoscaling of worker processes.

* Magic keyword arguments pending deprecation.

The magic keyword arguments will be completely removed in version 3.0.
It is important that you read the full changelog for more details.


And *lots* more!  The Changelog contains a detailed
list of all improvements and fixes:

http://celeryproject.org/docs/changelog.html#version-2-2-0

Be sure to read this before you upgrade!

Upgrading
=========

To upgrade using pip::

$ pip install -U celery

If you're using Django, then django-celery will automatically
upgrade Celery as well::

$ pip install -U django-celery

Resources
=========

:Homepage: http://celeryproject.org

:Download: http://pypi.python.org/pypi/celery

:Community Links: http://celeryq.org/community.html

:Documentation: http://celeryproject.org/docs/

:Code: http://github.com/ask/celery/

:FAQ: http://ask.github.com/celery/faq.html

:Mailing list: http://groups.google.com/group/celery-users

:IRC: #celery at irc.freenode.net.


-- 
{Ask Solem | twitter.com/asksol }.



[issue10673] multiprocess.Process join method - timeout indistinguishable from success

2010-12-10 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

While it makes sense for `join` to raise an error on timeout, that could 
possibly break existing code, so I don't think that is an option.  Adding a 
note in the documentation would be great.

--

Python tracker http://bugs.python.org/issue10673



[issue10305] Cleanup up ResourceWarnings in multiprocessing

2010-11-04 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Ah, this is something I've seen as well; it's part of a bug that I haven't 
created an issue for yet.

--

Python tracker http://bugs.python.org/issue10305



[issue8028] self.terminate() from a multiprocessing.Process raises AttributeError exception

2010-11-03 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Since you can't specify the return code, `self.terminate` is less flexible than 
`sys.exit`.

I think the original intent is clear here: the method is there for the parent 
to control the child.  You are of course welcome to argue otherwise.

By the way, I just read the code and noticed that it handles SystemExit well, 
and supports using it to set the return code:

class X(Process):
    def run(self):
        if not frobulating:
            raise SystemExit(255)

--

Python tracker http://bugs.python.org/issue8028



[issue7292] Multiprocessing Joinable race condition?

2010-11-02 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

Python tracker http://bugs.python.org/issue7292



[issue5930] Transient error in multiprocessing (test_number_of_objects)

2010-11-02 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

Python tracker http://bugs.python.org/issue5930



[issue9733] Can't iterate over multiprocessing.managers.DictProxy

2010-11-02 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
resolution:  - invalid
status: open - closed

Python tracker http://bugs.python.org/issue9733



[issue3876] multiprocessing does not compile on systems which do not define sem_timedwait

2010-11-02 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

What is the status of this issue?  There are several platform listed here, 
which I unfortunately don't have access to.

--
nosy: +asksol

Python tracker http://bugs.python.org/issue3876



[issue9955] multiprocessing.Pipe segmentation fault when recv of unpicklable object

2010-11-02 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Can't reproduce on Python 2.7, but can indeed reproduce on 2.6.  Issue fixed?

--

Python tracker http://bugs.python.org/issue9955



[issue10133] multiprocessing: conn_recv_string() broken error handling

2010-11-02 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

Python tracker http://bugs.python.org/issue10133



[issue8028] self.terminate() from a multiprocessing.Process raises AttributeError exception

2010-11-02 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

It seems that Process.terminate is not meant to be used by the child, but only 
the parent.

From the documentation:

  Note that the start(), join(), is_alive() and exitcode methods
  should only be called by the process that created the process object.

Either terminate() should be added to this list,
or terminate should be patched to call sys.exit in a child process.

I vote for the former, so I attached a doc patch.

--
keywords: +patch
Added file: http://bugs.python.org/file19466/i8028.patch

Python tracker http://bugs.python.org/issue8028



[issue5573] multiprocessing Pipe poll() and recv() semantics.

2010-11-02 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
resolution:  - invalid
status: open - closed

Python tracker http://bugs.python.org/issue5573



[issue8037] multiprocessing.Queue's put() not atomic thread wise

2010-11-02 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Pickling on put makes sense to me.  I can't think of cases where this could 
break existing code either.  I think this may also resolve issue 8323.

--
stage:  - unit test needed

Python tracker http://bugs.python.org/issue8037



[issue4999] multiprocessing.Queue does not order objects

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Updated doc patch

--
nosy: +asksol
Added file: http://bugs.python.org/file19350/issue-4999.diff

Python tracker http://bugs.python.org/issue4999



[issue8037] multiprocessing.Queue's put() not atomic thread wise

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

AFAICS the object will be pickled twice with this patch.
See Modules/_multiprocessing/connection.h: connection_send_obj.

--
nosy: +asksol

Python tracker http://bugs.python.org/issue8037



[issue8037] multiprocessing.Queue's put() not atomic thread wise

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Aha, no, I see now that you use connection.send_bytes instead.
Then I can't think of any issues with this patch, but I don't know why
it was done this way in the first place.

--

Python tracker http://bugs.python.org/issue8037



[issue7200] multiprocessing deadlock on Mac OS X when queue collected before process terminates

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Queue uses multiprocessing.util.Finalize, which uses weakrefs to track when the 
object goes out of scope, so this is actually expected behavior.

IMHO it is not a very good approach, but changing the API to use explicit close 
methods is a little late at this point, I guess.
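
The mechanism described can be demonstrated in isolation (the Resource class is just a placeholder): multiprocessing.util.Finalize attaches a weakref callback, so the cleanup runs once the tracked object is collected.

```python
import gc
from multiprocessing import util

class Resource:
    pass

events = []
r = Resource()
# Register a finalizer tied to r's lifetime via a weakref.
util.Finalize(r, events.append, args=('finalized',))

del r          # drop the last strong reference...
gc.collect()   # ...and make sure the weakref callback has fired
assert events == ['finalized']
```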

--
nosy: +asksol

Python tracker http://bugs.python.org/issue7200



[issue6407] multiprocessing Pool should allow custom task queue

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Matthew, would you be willing to write tests + documentation for this?

--

Python tracker http://bugs.python.org/issue6407



[issue7474] multiprocessing.managers.SyncManager managed object creation fails when started outside of invoked file

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

I can't seem to reproduce this on trunk...

--
nosy: +asksol

Python tracker http://bugs.python.org/issue7474



[issue5573] multiprocessing Pipe poll() and recv() semantics.

2010-10-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

I don't know about the socket internals, but I find the behavior 
acceptable.  It may not be feasible to change it now anyway, as there may be 
people already depending on it (e.g. not handling errors occurring at poll).

--

Python tracker http://bugs.python.org/issue5573



[issue10174] multiprocessing expects sys.stdout to have a fileno/close method.

2010-10-23 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Please add the traceback; I can't seem to find any obvious places where this 
would happen now.

Also, what version are you currently using?

I agree about fileno, but I'd say close is a reasonable method to implement, 
especially for stdin/stdout/stderr.

--

Python tracker http://bugs.python.org/issue10174



[issue10128] multiprocessing.Pool throws exception with __main__.py

2010-10-20 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
resolution:  - invalid
status: open - closed

Python tracker http://bugs.python.org/issue10128



[issue10128] multiprocessing.Pool throws exception with __main__.py

2010-10-19 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Is this on Windows?  Does it work for you now?

--

Python tracker http://bugs.python.org/issue10128



[Ann] Celery 2.1 stable released

2010-10-08 Thread Ask Solem
Hey!

Celery 2.1.0 was just uploaded to PyPI!

This is a backward compatible release in the 2.x series,
and a recommended upgrade for all users.

What is Celery?
===============

Celery is an open source asynchronous task queue/job queue based on
distributed message passing.  It is focused on real-time operation, but
supports scheduling as well.

The execution units, called tasks, are executed concurrently on one or
more worker nodes.  Tasks can execute asynchronously (in the background)
or synchronously (wait until ready).

For more information go to http://celeryproject.org

Highlights
==========

* Periodic task schedule can now be stored in a database,
   and changes will be reflected at runtime (only supported by django-celery
   at the moment, but this can also be used in general)

   http://celeryq.org/docs/userguide/periodic-tasks.html#periodic-tasks

* New web monitor using the Django Admin interface.
   Can also be used by non-Django users.

  http://celeryq.org/docs/userguide/monitoring.html#django-admin-monitor

* Periodic tasks now supports arguments and keyword arguments.

* celeryctl: A new command line utility to inspect and manage worker
   nodes.

   
http://celeryq.org/docs/userguide/monitoring.html#celeryctl-management-utility

* AMQP Backend: Now supports automatic expiration of results.

* Task expiration: a date and time after which the task will be considered
   expired and will not be executed.

* Lots of bugs fixed.
 
* Lots of documentation improvements.

Changes
=======

The full list of changes are available in the changelogs:

* celery: http://celeryq.org/docs/changelog.html
* django-celery: http://celeryq.org/docs/django-celery/changelog.html

Download
========

* celery: http://pypi.python.org/pypi/celery/2.1.0
* django-celery: http://pypi.python.org/pypi/django-celery/2.1.0

-- 
{Ask Solem,
 +47 98435213 | twitter.com/asksol }.



[issue10037] multiprocessing.pool processes started by worker handler stops working

2010-10-06 Thread Ask Solem

New submission from Ask Solem a...@opera.com:

While working on an autoscaling (yes, people call it that...) feature for 
Celery, I noticed that the processes created by the _handle_workers thread 
don't always work.  I have reproduced this in general by just using the 
maxtasksperchild feature and letting the workers terminate themselves, so this 
seems to have always been an issue (just not easy to reproduce unless workers 
are created with some frequency).

I'm not quite sure of the reason yet, but I finally managed to track it down to 
the workers being stuck while receiving from the queue.

The patch attached seems to resolve the issue by polling the queue before 
trying to receive.

I know this is short, I may have some more data later.
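
The gist of the patch, as a simplified illustration with a bare Pipe rather than the actual pool code: poll with a timeout before committing to a blocking recv, so a worker can notice there is nothing to read instead of blocking forever.

```python
from multiprocessing import Pipe

reader, writer = Pipe(duplex=False)
writer.send('task-1')

# With data pending, poll() returns True and recv() is safe:
assert reader.poll(0.1)
msg = reader.recv()

# With nothing pending, poll() times out instead of blocking the
# way an unconditional recv() would:
assert not reader.poll(0.1)
```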

--
components: Library (Lib)
files: multiprocessing-worker-poll.patch
keywords: needs review, patch
messages: 118062
nosy: asksol
priority: critical
severity: normal
stage: patch review
status: open
title: multiprocessing.pool processes started by worker handler stops working
type: behavior
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3
Added file: http://bugs.python.org/file19139/multiprocessing-worker-poll.patch

Python tracker http://bugs.python.org/issue10037



[issue8028] self.terminate() from a multiprocessing.Process raises AttributeError exception

2010-10-05 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Could you please reduce this to the shortest possible example that reproduces 
the problem?

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8028
___



[issue8094] Multiprocessing infinite loop

2010-10-05 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8094
___



[issue8144] muliprocessing shutdown infinite loop

2010-10-05 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Did you finish the code to reproduce the problem?

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8144
___



[issue9801] Can not use append/extend to lists in a multiprocessing manager dict

2010-09-22 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Maybe surprising but not so weird if you think about what happens
behind the scenes.

When you do

>>> x = man.list()
>>> x.append({})

you send an empty dict to the manager to be appended to x.

When you do:

>>> x[0]
{}

you receive a local copy of the empty dict from the manager process.

So this:

>>> x[0]['a'] = 5

will only modify the local copy.

What you would have to do is:

>>> x.append({})
>>> t = x[0]
>>> t['a'] = 5
>>> x[0] = t

This will not be atomic of course, so this may be something
to take into account.

What maybe could be supported is something like:

>>> x[0] = manager.dict()
>>> x[0]['foo'] = 'bar'

but otherwise I wouldn't consider this a bug.
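
A runnable sketch of the read-modify-write workaround described above
(Python 3 syntax; the keys and helper function are illustrative):

```python
from multiprocessing import Manager

def demo():
    man = Manager()
    x = man.list()
    x.append({})

    # x[0] returns a *local copy*; mutating it never reaches the manager.
    x[0]['a'] = 5
    before = dict(x[0])          # still {}

    # Read-modify-write the whole item instead (note: not atomic).
    t = x[0]
    t['a'] = 5
    x[0] = t
    after = dict(x[0])           # {'a': 5}

    man.shutdown()
    return before, after

if __name__ == '__main__':
    print(demo())
```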

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9801
___



[issue7707] multiprocess.Queue operations during import can lead to deadlocks

2010-09-22 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

I created a small doc patch for this (attached).

--
keywords: +needs review, patch
nosy: +asksol
versions: +Python 3.1 -Python 2.6
Added file: http://bugs.python.org/file18967/multiprocessing-issue7707.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7707
___



[issue9733] Can't iterate over multiprocessing.managers.DictProxy

2010-09-09 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> I expected I could iterate over a DictProxy as I do over a
> regular dict.

DictProxy doesn't support iterkeys(), itervalues(), or iteritems() either.
So while

    iter(d)

could do

    iter(d.keys())

behind the scenes, it would mask the fact that this would not return
an *iterator* over the keys, but send a potentially long list of keys back to
the client.
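
A small sketch of the explicit-keys workaround (Python 3 shown; the point is
that the full key list travels back to the client in one message):

```python
from multiprocessing import Manager

def iterate_keys():
    man = Manager()
    d = man.dict()
    d['a'] = 1
    d['b'] = 2
    # d.keys() ships the complete key list back to this process in one
    # message; it is not a lazy iterator over the manager's dict.
    keys = sorted(d.keys())
    man.shutdown()
    return keys

if __name__ == '__main__':
    print(iterate_keys())  # ['a', 'b']
```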

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9733
___



[issue3125] test_multiprocessing causes test_ctypes to fail

2010-09-09 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

As no one has been able to confirm that this is still an issue, I'm closing it 
as out of date. The issue can be reopened if necessary.

--
resolution: accepted - out of date
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3125
___



[issue3111] multiprocessing ppc Debian/ ia64 Ubuntu compilation error

2010-09-09 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

As no one is able to confirm that this is still an issue, I'm closing it. It 
can be reopened if necessary.

--
resolution:  - out of date

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3111
___



[issue3735] allow multiple threads to efficiently send the same requests to a processing.Pool without incurring duplicate processing

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
resolution:  - postponed
stage: unit test needed - needs patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3735
___



[issue4892] Sending Connection-objects over multiprocessing connections fails

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4892
___



[issue3831] Multiprocessing: Expose underlying pipe in queues

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3831
___



[issue5501] Update multiprocessing docs re: freeze_support

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5501
___



[issue8534] multiprocessing not working from egg

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
keywords: +needs review
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8534
___



[issue3093] Namespace pollution from multiprocessing

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol
stage:  - needs patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3093
___



[issue6407] multiprocessing Pool should allow custom task queue

2010-08-31 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

are there really any test/doc changes needed for this?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
___



[issue6407] multiprocessing Pool should allow custom task queue

2010-08-31 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
stage: needs patch - unit test needed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> Does the problem make sense/do you have any ideas for an alternate
> solution?

Well, I still haven't given up on the trackjobs patch. I changed it to use a 
single queue for both the acks and the result (see new patch attached:  
multiprocessing-tr...@82502-termination-trackjobs2.patch)

Been running it in production for a few days now, and it seems to work. But the 
tests still hang from time to time; it seems they hang more frequently now than 
with the first patch (this may actually be a good thing:)

Would you like to try and identify the cause of this hang? Still haven't been 
able to.

I'm not sure about the overhead of using one queue per process either, but I'm 
usually running about 8 processes per CPU core for IO bound jobs (adding more 
processes after that usually doesn't affect performance in positive ways). 
There's also the overhead of the synchronization (ACK). Not sure if this is 
important performance-wise, but at least this makes it harder for me to reason 
about the problem.

--
Added file: 
http://bugs.python.org/file18657/multiprocessing-tr...@82502-termination-trackjobs2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue5573] multiprocessing Pipe poll() and recv() semantics.

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5573
___



[issue3125] test_multiprocessing causes test_ctypes to fail

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3125
___



[issue3111] multiprocessing ppc Debian/ ia64 Ubuntu compilation error

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3111
___



[issue6056] socket.setdefaulttimeout affecting multiprocessing Manager

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6056
___



[issue6362] multiprocessing: handling of errno after signals in sem_acquire()

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6362
___



[issue6407] multiprocessing Pool should allow custom task queue

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6407
___



[issue6417] multiprocessing Process examples: print and getppid

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6417
___



[issue3518] multiprocessing: BaseManager.from_address documented but doesn't exist

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3518
___



[issue6653] Potential memory leak in multiprocessing

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6653
___



[issue7123] Multiprocess Process does not always exit when run from a thread.

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7123
___



[issue7060] test_multiprocessing dictionary changed size errors and hang

2010-08-27 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7060
___



[issue5862] multiprocessing 'using a remote manager' example errors and possible 'from_address' code leftover

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Duplicate of 3518?

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5862
___



[issue3735] allow multiple threads to efficiently send the same requests to a processing.Pool without incurring duplicate processing

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

This is a nice feature, but it's also very specific and can be implemented
by extending what's already there.

Could you make a patch for this that applies to the py3k branch? If no one has 
the time for this, then we should probably just close the issue, until someone 
requests it again.

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3735
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

New patch attach (termination-trackjobs3.patch).

> Hmm, a few notes. I have a bunch of nitpicks, but those
> can wait for a later iteration. (Just one style nit: I
> noticed a few unneeded whitespace changes... please try
> not to do that, as it makes the patch harder to read.)

Yeah, nitpicks can wait. We need a satisfactory solution first.
I forgot about the whitespace, the reason is that the patch was started
from the previous trackjobs patch.

> - Am I correct that you handle a crashed worker
> by aborting all running jobs?

No. The job's result is marked with the WorkerLostError, the
process is replaced by a new one, and the pool continues to be
functional.

> - If you're going to the effort of ACKing, why not record
> the mapping of tasks to workers so you can be more selective in your
> termination?

It does have access to that. There's ApplyResult.worker_pids().
It doesn't terminate anything, it just cleans up after whatever terminated. The 
MapResult could very well discard the job as a whole,
but my patch doesn't do that (at least not yet).

> Otherwise, what does the ACKing do towards fixing this particular
> issue?

It's what lets us find out which PID is processing the job. (It also
happens to be a required feature to reliably take advantage of
external ack semantics (like in AMQP); it's also used by my job timeout
patch to know when a job was started, and it turns out to be useful
in this problem.)

> - I think in the final version you'd need to introduce some
> interthread locking, because otherwise you're going to have weird race
> conditions. I haven't thought too hard about whether you can
> get away with just catching unexpected exceptions, but it's
> probably better to do the locking.

Where is this required?

> - I'm getting hangs infrequently enough to make debugging annoying,
> and I don't have time to track down the bug right now.

Try this:

for i in 1 2 3 4 5; do ./python.exe -m test.regrtest -v test_multiprocessing; done

it should show up quickly enough (at least on os x)

> Why don't you strip out any changes that are not needed (e.g. AFAICT, the
> ACK logic), make sure there aren't weird race conditions,
> and if we start converging on a patch that looks right from a high level we
> can try to make it work on all the corner cases?

See the updated patch. I can't remove the ACK, but I removed the 
accept_callback, as it's not strictly needed to solve this problem.

--
Added file: 
http://bugs.python.org/file18664/multiprocessing-tr...@82502-termination-trackjobs3.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> - A worker removes a job from the queue and is killed before
> sending an ACK.

Yeah, this may be a problem. I was thinking we could make sure the task is 
acked before child process shutdown. Kill -9 is then not safe, but do we really 
want to guarantee that in multiprocessing? In celery we're safe by using AMQP's 
ack transaction anyway. The same could be said if there's a problem with the 
queue though. Maybe use ack timeouts? We know how many worker processes are 
free already.

> A worker removes a job from the queue, sends an ACK, and then is
> killed.  Due to bad luck with the scheduler, the parent cleans the
> worker before the parent has recorded the worker pid.

Guess we need to consume from the result queue until it's empty.

> You're now reading from self._cache in one thread but writing it in
> another.

Yeah, I'm not sure if SimpleQueue is managed by a lock already. Should maybe 
use a lock if it isn't.

> What happens if a worker sends a result and then is killed?

In the middle of sending? If not, I don't think this matters.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-27 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

By the way, I'm also working on writing some simple benchmarks for the multiple 
queues approach, just to see if there's any overhead to
worry about.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt

2010-08-25 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

On closer look your patch is also ignoring SystemExit. I think it's beneficial 
to honor SystemExit, so a user could use this as a means to replace the current 
process with a new one.

If we keep that behavior, the real problem here is that the
result handler hangs if the process that reserved a job is gone, which is going 
to be handled
by #9205. Should we mark it as a duplicate?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8296
___



[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt

2010-08-24 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

This is related to our discussions at #9205 as well 
(http://bugs.python.org/issue9205), as the final patch there will also fix this 
issue.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8296
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-08-20 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

@greg

Been very busy lately, just had some time now to look at your patch.
I'm very ambivalent about using one SimpleQueue per process. What is the reason 
for doing that?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-25 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> A potential implementation is in termination.patch.  Basically,
> try to shut down gracefully, but if you timeout, just give up and
> kill everything.

You can't have a sensible default timeout, because the worker may be processing 
something important...

> It's a lot less code (one could write an even shorter patch
> that doesn't try to do any additional graceful error handling),
> doesn't add a new monitor thread, doesn't add any more IPC
> mechanism, etc..  FWIW, I don't see any of these changes as bad,
> but I don't feel like I have a sense of how generally useful they
> would be.

Not everything can be simple. Getting this right may require a bit
of code. I think we can get rid of the ack_handler thread by making
the result handler responsible for both acks and results, but I haven't tried 
it yet, and this code is already running in production by many, so I didn't 
want to change it unless I had to.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-21 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> At first glance, looks like there are a number of sites where you don't
> change the blocking calls to non-blocking calls (e.g. get()).  Almost all of
> the get()s have the potential to be called when there is no possibility for
> them to terminate.
>
> I might recommend referring to my original termination.patch... I believe I
> tracked down the majority of such blocking calls.

I thought the EOF errors would take care of that, at least this has
been running in production on many platforms without that happening.

> In the interest of simplicity though, I'm beginning to think that the right
> answer might be to just do something like termination.patch but to
> conditionalize crashing the pool on a pool configuration option.  That way
> the behavior would be no worse for your use case.  Does that sound reasonable?

How would you shut down the pool then? And why is that simpler?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-21 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Btw, the current problem with termination3.patch seems to be that the 
MainProcess somehow appears in self._pool. I have no idea how it gets there. 
Maybe some unrelated issue that appears when forking that late in the tests.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-16 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> but if you make a blocking call such as in the following program,
> you'll get a hang

Yeah, and for that we could use the same approach as for the maps.

But, I've just implemented the accept callback approach, which should be 
superior. Maps/Apply fails instantly as soon as a worker process crashes, but 
the pool remains fully functional. Patch  
multiprocessing-tr...@82502-termination-trackjobs.patch added.

There seem to be some race conditions left, because some of the tests break 
from time to time. Maybe you can pinpoint it before me.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-16 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


Added file: 
http://bugs.python.org/file18026/multiprocessing-tr...@82502-termination-trackjobs.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-15 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Greg,

> Before I forget, looks like we also need to deal with the
> result from a worker being un-unpickleable:

This is what my patch in bug 9244 does...

> Yep.  Again, as things stand, once you've lost a worker,
> you've lost a task, and you can't really do much about it.
> I guess that depends on your application though... is your
> use-case such that you can lose a task without it mattering?
> If tasks are idempotent, one could have the task handler
> resubmit them, etc..  But really, thinking about the failure
> modes I've seen (OOM kills/user-initiated interrupt) I'm not
> sure under what circumstances I'd like the pool to try to
> recover.

Losing a task is not fun, but there may still be other tasks
running that are just as important. I think you're thinking
from a map_async perspective here.

User-initiated interrupts are very important to recover from: think of some 
badly written library code suddenly raising SystemExit. This shouldn't 
terminate other jobs, and it's probably easy to recover from, so why shouldn't 
it try?

> The idea of recording the mapping of tasks -> workers
> seems interesting. Getting all of the corner cases could
> be hard (e.g. making removing a task from the queue and
> recording which worker did the removing atomic, detecting if the worker
> crashed while still holding the queue lock) and doing
> this would require extra mechanism.  This feature does seem
> to be useful for pools running many different jobs, because
> that way a crashed worker need only terminate one job.

I think I may have an alternative solution. Instead of keeping track of what 
the workers are doing, we could simply change the result handler
so it gives up when there are no more alive processes.

while state != TERMINATE:
    result = get(timeout=1)
    if all_processes_dead():
        break

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-15 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Ok. I implemented my suggestions in the patch attached
(multiprocessing-tr...@82502-termination2.patch)
What do you think?

Greg, Maybe we could keep the behavior in termination.patch as an option for 
map jobs? It is certainly a problem that map jobs won't terminate until the 
pool is joined.

--
Added file: 
http://bugs.python.org/file18013/multiprocessing-tr...@82502-termination2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9244] multiprocessing.pool: Worker crashes if result can't be encoded

2010-07-15 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Updated patch with Greg's suggestions.
(multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch)

--
Added file: 
http://bugs.python.org/file18014/multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9244
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-15 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


Removed file: 
http://bugs.python.org/file18013/multiprocessing-tr...@82502-termination2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-15 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Just some small cosmetic changes to the patch.
(added multiprocessing-tr...@82502-termination3.patch)

--
Added file: 
http://bugs.python.org/file18015/multiprocessing-tr...@82502-termination3.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-15 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> Really?  I could be misremembering, but I believe you deal
> with the case of the result being unpickleable.  I.e. you
> deal with the put(result) failing, but not the get() in the
> result handler.

Your example is demonstrating the pickle error on put(), not on get().

> Does my sample program work with your patch applied?

Yeah, check this out:

/opt/devel/Python/trunk(master)$ patch -p1 < multiprocessing-tr...@82502-handle_worker_encoding_errors2.patch
patching file Lib/multiprocessing/pool.py
patching file Lib/test/test_multiprocessing.py
/opt/devel/Python/trunk(master)$ ./python.exe
Python 2.7 (unknown, Jul 13 2010, 13:28:35)
[GCC 4.2.1 (Apple Inc. build 5659)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> def foo():
...     return lambda: 42
...
>>> p = multiprocessing.Pool(2)
>>> p.apply_async(foo).get()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/devel/Python/trunk/Lib/multiprocessing/pool.py", line 518, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result:
'<function <lambda> at 0x1005477d0>'. Reason: 'Can't pickle <type 'function'>:
attribute lookup __builtin__.function failed'
>>> import operator
>>> p.apply_async(operator.add, (2, 2)).get()
4

> To be clear, in this case I was thinking of KeyboardInterrupts.

In termination2.patch I handle BaseExceptions, by exiting the worker process, 
and then letting the _worker_handler replace the process.

It's very useful, because then people can kill -INT the worker process
if they want to cancel the job, without breaking other running jobs.

> From our differing use-cases, I do think it could make sense as
> a configuration option, but where it probably belongs is on the
> wait() call of ApplyResult.

Indeed! This could be done by adding listeners for this type of error.

pool.add_worker_missing_callback(fun)

So MapResults could install a callback like this:

    def __init__(self):
        ...
        _pool.add_worker_missing_callback(self._on_worker_missing)
        ...

    def _on_worker_missing(self):
        err = WorkerLostError(
            "Worker lost while running map job")
        self._set(None, (False, err))
What do you think about that?

IMHO, even though the worker lost could be unrelated to the map job in
question, it would still be a better alternative than crashing the whole pool.
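The listener idea could look roughly like this (a hedged sketch: `Pool`, `MapResult` and `WorkerLostError` here are simplified stand-ins, not the real multiprocessing classes):

```python
class WorkerLostError(Exception):
    """Hypothetical error set on results whose worker disappeared."""

class Pool(object):
    """Simplified stand-in: a pool that notifies listeners of lost workers."""
    def __init__(self):
        self._worker_missing_callbacks = []

    def add_worker_missing_callback(self, fun):
        # Listeners register here; the supervisor calls them on worker loss.
        self._worker_missing_callbacks.append(fun)

    def _on_worker_lost(self):
        for fun in self._worker_missing_callbacks:
            fun()

class MapResult(object):
    """Fails itself on worker loss instead of crashing the whole pool."""
    def __init__(self, pool):
        self._success = self._value = None
        pool.add_worker_missing_callback(self._on_worker_missing)

    def _on_worker_missing(self):
        err = WorkerLostError("Worker lost while running map job")
        self._set(None, (False, err))

    def _set(self, i, obj):
        self._success, self._value = obj

pool = Pool()
result = MapResult(pool)
pool._on_worker_lost()    # simulate the supervisor detecting a dead worker
```

After the simulated loss the map result carries a WorkerLostError, while unrelated jobs on the same pool are untouched.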

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-14 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

There's one more thing

> if exitcode is not None:
>     cleaned = True
>     if exitcode != 0 and not worker._termination_requested:
>         abnormal.append((worker.pid, exitcode))


Instead of restarting crashed worker processes it will simply bring down
the pool, right?

If so, then I think it's important to decide whether we want to keep
the supervisor functionality, and if so decide on a recovery strategy.

Some alternatives are:

A) Any missing worker brings down the pool.

B) Missing workers will be replaced one-by-one. A maximum-restart-frequency 
decides when the supervisor should give up trying to recover
the pool, and crash it.

C) Same as B, except that any process crashing when trying to get() will bring 
down the pool.

I think the supervisor is a good addition, so I would very much like to keep 
it. It's also a step closer to my goal of adding the enhancements added by 
Celery to multiprocessing.pool.

Using C is only a few changes away from this patch, but B would also be 
possible in combination with my accept_callback patch. It does pose some 
overhead, so it depends on the level of recovery we want to support.
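Strategy B's maximum-restart-frequency check could be sketched as follows (all names here are illustrative, not from any existing patch):

```python
import time
from collections import deque

class RestartFreqExceeded(Exception):
    """Supervisor gives up: too many restarts in too short a window."""

def restart_monitor(max_restarts=5, window=60.0, clock=time.monotonic):
    # Remember recent restart timestamps; once more than max_restarts
    # fall inside the window, the supervisor stops recovering and crashes.
    restarts = deque()

    def record():
        now = clock()
        restarts.append(now)
        while restarts and now - restarts[0] > window:
            restarts.popleft()
        if len(restarts) > max_restarts:
            raise RestartFreqExceeded(
                "%d worker restarts within %.0fs" % (len(restarts), window))

    return record

# Driven by a fake clock for demonstration:
t = [0.0]
record = restart_monitor(max_restarts=2, window=10.0, clock=lambda: t[0])
record(); record()            # two restarts inside the window: tolerated
t[0] = 11.0
record()                      # old entries have expired, still fine
```

A third restart within the same window would raise RestartFreqExceeded, which is the point where the pool would be brought down.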

accept_callback: this is a callback that is triggered when the job is reserved 
by a worker process. The acks are sent to an additional Queue, with an 
additional thread processing the acks (hence the mentioned overhead). This 
enables us to keep track of what the worker processes are doing, also get the 
PID of the worker processing any given job (besides from recovery, potential 
uses are monitoring and the ability to terminate a job 
(ApplyResult.terminate?). See 
http://github.com/ask/celery/blob/master/celery/concurrency/processes/pool.py
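A rough sketch of that ack mechanism (names are illustrative, and real workers would be separate processes putting acks on a multiprocessing queue, not plain function calls):

```python
import queue
import threading

ack_queue = queue.Queue()
accepted = {}                # job id -> pid of the worker that accepted it

def worker_accepts(job, pid):
    # Called in the worker just before it starts executing the job.
    ack_queue.put((job, pid))

def ack_handler(stop):
    # The additional thread draining the ack queue (the mentioned overhead).
    while not stop.is_set() or not ack_queue.empty():
        try:
            job, pid = ack_queue.get(timeout=0.05)
        except queue.Empty:
            continue
        accepted[job] = pid

stop = threading.Event()
handler = threading.Thread(target=ack_handler, args=(stop,))
handler.start()
worker_accepts(1, 4242)      # pretend worker pid 4242 reserved job 1
worker_accepts(2, 4243)
stop.set()
handler.join()
```

With `accepted` populated, the pool knows which PID holds each job, enabling monitoring and per-job termination as described.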

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9244] multiprocessing.pool: Worker crashes if result can't be encoded

2010-07-14 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> To be clear, the errback change and the unpickleable result
> change are actually orthogonal, right?

Yes, it could be a separate issue. Jesse, do you think I should I open
up a separate issue for this?

> Why not add an error_callback for map_async as well?

That's a good idea!

> Any reason you chose to use a different internal name
> (errback versus error_callback)? It seems cleaner to me
> to be consistent about the name.

It was actually a mistake. The argument was ``errback`` before, so
it's just a leftover from the previous name.

> In general, I'm wary of nonessential whitespace changes...
> did you mean to include these?

Of course not.

> Using assertTrue seems misleading. assertIsNotNone is what you
> really mean, right?  Although, I believe that's redundant,
> since presumably self.assertIsInstance(None, KeyError) will
> error out anyway (I haven't verified this).

bool(KeyError("foo")) is True and bool(None) is False, so it works either way. 
It could theoretically result in a false negative if the exception class
tested overrides __nonzero__, but that is unlikely to happen as the target
always returns KeyError anyway (and the test below ensures it). It's just a
habit of mine: unless I really want to test for Noneness, I just use
assertTrue. I'm not against changing it to assertIsNotNone either.
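The truthiness rules relied on here can be checked directly (a small illustrative snippet; `WeirdError` is hypothetical):

```python
# Exception instances are truthy and None is falsy, so assertTrue
# distinguishes them; a class overriding __bool__ (__nonzero__ on
# Python 2) is the theoretical false negative mentioned above.
assert bool(KeyError("foo")) is True
assert bool(None) is False

class WeirdError(Exception):
    def __bool__(self):
        return False

falsy_exc = WeirdError("x")
assert not bool(falsy_exc)    # would slip past assertTrue
```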

> Under what circumstances would these be None?  (Perhaps you
> want wrapped.exc != 'None'?)  The initializer for
> MaybeEncodingError enforces the invariant that exc/value are strings
> right?
It's just to test that these are actually set to something.
Even an empty string passes with assertIsNone instead of assertTrue.
Maybe it's better to test the values set, but I didn't bother.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9244
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-14 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

Jesse wrote,


> We can work around the shutdown issue (really, bug 9207) by
> ignoring the exception such as shutdown.patch does, or passing in
> references/adding references to the functions those methods need. Or (as 
> Brett suggested) converting them to class methods and adding references to 
> the class. Or passing them in via the signature like this 
> _handle_workers(arg, _debug=debug), etc.


Greg wrote,

> Another option would be to enable only for the worker handler.  I
> don't have a particularly great sense of what the Right Thing to
> do here is.

I don't think _make_shutdown_safe should be added to the result handler.
If the error can indeed happen there, then we need to solve it in a way that 
enables it to finish the work.

Jesse, how hard is it to fix the worker handler by passing the references? Note 
that _worker_handler is not important to complete shutdown at this point, but 
it may be in the future (it seems termination.patch already changes this)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-13 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

I think I misunderstood the purpose of the patch. This is about handling errors 
on get(), not on put() like I was working on. So sorry for that confusion.

What kind of errors are you having that makes the get() call fail?

If the queue is not working, then I guess the only sensible approach is to 
shutdown the pool like suggested. I'll open up another issue for unpickleable 
errors then.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9244] multiprocessing.pool: Pool crashes if worker can't encode result (with patch)

2010-07-13 Thread Ask Solem

New submission from Ask Solem a...@opera.com:

If the target function returns an unpickleable value the worker process 
crashes. This patch tries to safely handle unpickleable errors, while enabling 
the user to inspect such errors after the fact.

In addition a new argument has been added to apply_async: error_callback.
This is an optional callback that is called if the job raises an exception. The 
signature of the callback is `callback(exc)`.
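The proposed callback in use (later Python versions ship this as `apply_async(..., error_callback=...)`; shown here with the thread-backed pool to keep the example self-contained):

```python
from multiprocessing.pool import ThreadPool

errors = []

def fail():
    # A job that raises: the pool should hand the exception to the
    # error callback instead of crashing anything.
    raise KeyError("boom")

pool = ThreadPool(2)
res = pool.apply_async(fail, error_callback=errors.append)
try:
    res.get(timeout=10)       # re-raises the job's exception
except KeyError:
    pass
pool.close()
pool.join()
```

The callback receives the exception instance itself, matching the `callback(exc)` signature described above.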

--
components: Library (Lib)
files: multiprocessing-tr...@82502-handle_worker_encoding_errors.patch
keywords: patch
messages: 110173
nosy: asksol, jnoller
priority: normal
severity: normal
status: open
title: multiprocessing.pool: Pool crashes if worker can't encode result (with 
patch)
type: behavior
versions: Python 2.6, Python 2.7
Added file: 
http://bugs.python.org/file17982/multiprocessing-tr...@82502-handle_worker_encoding_errors.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9244
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-13 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

For reference I opened up a new issue for the put() case here: 
http://bugs.python.org/issue9244

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9244] multiprocessing.pool: Worker crashes if result can't be encoded result (with patch)

2010-07-13 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
title: multiprocessing.pool: Pool crashes if worker can't encode result (with 
patch) -> multiprocessing.pool: Worker crashes if result can't be encoded 
result (with patch)

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9244
___



[issue9244] multiprocessing.pool: Worker crashes if result can't be encoded

2010-07-13 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
title: multiprocessing.pool: Worker crashes if result can't be encoded result 
(with patch) -> multiprocessing.pool: Worker crashes if result can't be encoded

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9244
___



[issue9248] multiprocessing.pool: Proposal: waitforslot

2010-07-13 Thread Ask Solem

New submission from Ask Solem a...@opera.com:

This patch adds the `waitforslot` argument to apply_async. If set to `True`, 
apply_async will not return until there is a worker available to process the 
job.

This is implemented by a semaphore that is released by the result handler 
whenever a new result is ready. The semaphore is also released
when the supervisor (worker_handler) finds a worker process that has been
unexpectedly terminated.

This is already in use by Celery 2.0, which ships with its own modified version 
of multiprocessing.pool.

I'm not sure about the name ``waitforslot``, I think I may even hate it, but 
haven't been able to come up with a better name for it yet.
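The mechanism in miniature (a sketch of the idea only; `SlotGate` and its method names are made up, not the patch's actual code):

```python
import threading

class SlotGate(object):
    # One semaphore slot per worker process. apply_async(waitforslot=True)
    # would acquire a slot; the result handler (or the supervisor, when it
    # finds a terminated worker) releases one.
    def __init__(self, processes):
        self._sem = threading.BoundedSemaphore(processes)

    def wait_for_slot(self, blocking=True):
        return self._sem.acquire(blocking)

    def slot_freed(self):
        self._sem.release()

gate = SlotGate(2)
assert gate.wait_for_slot()                      # slot 1 taken
assert gate.wait_for_slot()                      # slot 2 taken
assert not gate.wait_for_slot(blocking=False)    # pool full: would block
gate.slot_freed()                                # a result came back
assert gate.wait_for_slot()                      # a new job can be submitted
```

Using BoundedSemaphore (rather than a plain Semaphore) catches bugs where the handlers release more slots than were ever acquired.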

--
components: Library (Lib)
messages: 110193
nosy: asksol, jnoller
priority: normal
severity: normal
status: open
title: multiprocessing.pool: Proposal: waitforslot
versions: Python 2.6, Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9248
___



[issue9248] multiprocessing.pool: Proposal: waitforslot

2010-07-13 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
keywords: +patch
Added file: 
http://bugs.python.org/file17985/multiprocessing-tr...@82502-apply-semaphore.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9248
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-12 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

termination.patch, in the result handler you've added:

   while cache and thread._state != TERMINATE and not failed

why are you terminating the second pass after finding a failed process?

Unpickleable errors and other errors occurring in the worker body are not 
exceptional cases, at least not now that the pool is supervised by 
_handle_workers. I think the result should be set also in this case, so the 
user can inspect the exception after the fact.

I have some other suggestions too, so I will review this patch tomorrow.

For shutdown.patch, I thought this only happened in the worker handler, but 
you've enabled this for the result handler too? I don't care about the worker 
handler, but with the result handler I'm worried that I don't know what 
ignoring these exceptions actually means. For example, is there a possibility 
that we may lose results at shutdown?

--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly

2010-07-12 Thread Ask Solem

Ask Solem a...@opera.com added the comment:

> Unfortunately, if you've lost a worker, you are no
> longer guaranteed that cache will eventually be empty.
> In particular, you may have lost a task, which could
> result in an ApplyResult waiting forever for a _set call.
>
> More generally, my chief assumption that went into this
> is that the unexpected death of a worker process is
> unrecoverable. It would be nice to have a better workaround
> than just aborting everything, but I couldn't see a way
> to do that.

It would be a problem if the process simply disappeared,
But in this case you have the ability to put a result on the queue,
so it doesn't have to wait forever.

For processes disappearing (if that can at all happen), we could solve
that by storing the jobs a process has accepted (started working on),
so if a worker process is lost, we can mark them as failed too.
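In miniature, that bookkeeping might look like this (hypothetical names; the real pool would key results through its result cache):

```python
class WorkerLostError(Exception):
    pass

accepted_by = {}     # pid -> job ids that worker has started working on
results = {}         # job id -> (success, value)

def mark_accepted(pid, job):
    accepted_by.setdefault(pid, set()).add(job)

def on_process_lost(pid):
    # Fail exactly the jobs the dead worker had accepted, instead of
    # leaving their ApplyResults waiting forever.
    for job in accepted_by.pop(pid, ()):
        results[job] = (False, WorkerLostError("worker %d died" % pid))

mark_accepted(4242, 7)
mark_accepted(4242, 8)
on_process_lost(4242)
```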

> I could be wrong, but that's not what my experiments
> were indicating. In particular, if an unpickleable error occurs,
> then a task has been lost, which means that the relevant map,
> apply, etc. will wait forever for completion of the lost task.

It's lost now, but not if we handle the error...
For a single map operation this behavior may make sense, but what about
someone running the pool as a long-running service for users to submit map 
operations to? Errors in this context are expected to happen, even unpickleable 
errors.

I guess the worker handler acting as a supervisor is a side effect,
as it was made for the maxtasksperchild feature, but for me it's a welcome one. 
With the supervisor in place, multiprocessing.pool is already fairly stable to 
be used for this use case, and there's not much to be done to make it solid 
(Celery has already been running for months without issue, unless there's a 
pickling error...)

> That does sound useful. Although, how can you determine the
> job (and the value of i) if it's an unpickleable error?
> It would be nice to be able to retrieve job/i without having
> to unpickle the rest.

I was already working on this issue last week actually, and I managed
to do that in a way that works well enough (at least for me):
http://github.com/ask/celery/commit/eaa4d5ddc06b000576a21264f11e6004b418bda1#diff-1

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9205
___



[issue9207] multiprocessing occasionally spits out exception during shutdown (_handle_workers)

2010-07-09 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9207
___



[issue9162] License for multiprocessing files

2010-07-05 Thread Ask Solem

Changes by Ask Solem a...@opera.com:


--
nosy: +asksol

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9162
___



[Ann] Celery 2.0 released

2010-07-02 Thread Ask Solem

Celery 2.0 has been released


We're happy to announce the release of Celery 2.0.

Big thanks to all contributors, testers and users!

What is it?
===

Celery is an asynchronous task queue/job queue based on distributed
message passing. It is focused on real-time operation, but supports
scheduling as well.

The execution units, called tasks, are executed concurrently on one
or more worker servers. Tasks can execute asynchronously (in the background)
or synchronously (wait until ready).

Celery is already used in production to process millions of tasks a day.

Celery is written in Python, but the protocol can be implemented in
any language. It can also operate with other languages using webhooks.

The recommended message broker is RabbitMQ, but support for Redis and
databases is also available.

You may also be pleased to know that full Django integration exists,
delivered by the django-celery package.

What's new?
===

* Django dependency removed.

Django integration has been moved to a separate package
called django-celery.

SQLAlchemy is now used instead of the Django ORM for the database
result store.

* A curses real-time monitor: celeryev.

* Support for soft and hard time limits.

--time-limit:
The worker processing the task will be
killed and replaced with a new process when this is exceeded.

--soft-time-limit:
The celery.exceptions.SoftTimeLimitExceeded exception
will be raised when this is exceeded. The task can catch this to
clean up before the hard time limit terminates it.
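A pure-Python illustration of the soft-limit mechanism: an alarm-style timer raises an exception the task body can catch to clean up (Celery uses SIGALRM similarly on platforms that support it; all names here are illustrative, not Celery's internals, and this requires Unix):

```python
import signal

class SoftTimeLimitExceeded(Exception):
    pass

def _on_alarm(signum, frame):
    raise SoftTimeLimitExceeded()

def run_with_soft_limit(fun, seconds):
    # Install an interval timer that fires SIGALRM after `seconds`,
    # raising SoftTimeLimitExceeded inside the running task body.
    old = signal.signal(signal.SIGALRM, _on_alarm)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return fun()
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # cancel the timer
        signal.signal(signal.SIGALRM, old)

cleaned_up = []

def task_body():
    try:
        while True:          # pretend to work past the limit
            pass
    except SoftTimeLimitExceeded:
        cleaned_up.append(True)     # the task catches it and cleans up

run_with_soft_limit(task_body, 0.05)
```

The hard limit, by contrast, kills the worker process outright, so there is nothing to catch.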

* Periodic tasks schedules can now be expressed using complex
  crontab-like expressions.

For example, you can now use::

    >>> crontab(minute="*/15")

or even::

    >>> crontab(minute="*/30", hour="8-17,1-2",
    ...         day_of_week="thu-fri")

* Built-in way to do task callbacks.

http://celeryq.org/docs/userguide/tasksets.html

* Simplified routing of tasks.

  http://celeryq.org/docs/userguide/routing.html

* TaskSets can now contain several types of tasks.

TaskSets have been refactored to use a new syntax, please see
http://celeryq.org/docs/userguide/tasksets.html for more
information. The previous syntax is still supported but deprecated,
and will be completely removed in Celery 2.2.

* AMQP result backend can now do polling of results.

This means it supports ``result.ready()``, ``.successful()``,
etc.

* AMQP result backend now supports timeouts when waiting
  for results.

* celeryd: --maxtasksperchild

Defines the maximum number of tasks a pool worker can process
before the process is terminated and replaced by a new one.

* It's now possible to use the client side of Celery without
  configuration.


And lots more!
The Changelog contains upgrade instructions and a detailed
list of all changes:

http://celeryproject.org/docs/changelog.html

Thank you for reading it.


Resources
=

:Homepage: http://celeryproject.org

:Download: http://pypi.python.org/pypi/celery

:Documentation: http://celeryproject.org/docs/

:Changelog: http://celeryproject.org/docs/changelog.html

:Code: http://github.com/ask/celery/

:FAQ: http://ask.github.com/celery/faq.html

:Mailing list: http://groups.google.com/group/celery-users

:IRC: #celery at irc.freenode.net.


-- 
{Ask Solem,
  twitter.com/asksol | github.com/ask }.

-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


ANN: Celery 1.0 released

2010-02-11 Thread Ask Solem
===
 Celery 1.0 has been released!
===

We're happy to announce the release of Celery 1.0.

What is it?
===

Celery is a task queue/job queue based on distributed message passing.
It is focused on real-time operation, but supports scheduling as well.

The execution units, called tasks, are executed concurrently on one or
more worker servers. Tasks can execute asynchronously (in the background)
or synchronously (wait until ready).

Celery is already used in production to process millions of tasks a day.

Celery was originally created for use with Django, but is now usable
from any Python project. It can
also operate with other languages via webhooks.

The recommended message broker is RabbitMQ (http://rabbitmq.org), but support
for Redis and databases is also available.

For more information please visit http://celeryproject.org


Features


See http://ask.github.com/celery/getting-started/introduction.html#features

Stable API
==

From this version on the public API is considered stable. This means there
won't be any backwards incompatible changes in new minor versions. Changes
to the
API will be deprecated; so, for example, if we decided to remove a function
that existed in Celery 1.0:

* Celery 1.2 will contain a backwards-compatible replica of the function which
  will raise a PendingDeprecationWarning.
  This warning is silent by default; you need to explicitly turn on display
  of these warnings.
* Celery 1.4 will contain the backwards-compatible replica, but the warning
  will be promoted to a full-fledged DeprecationWarning. This warning
  is loud by default, and will likely be quite annoying.
* Celery 1.6 will remove the feature outright.

See the Celery Deprecation Timeline for a list of pending removals:
http://ask.github.com/celery/internals/deprecation.html

What's new?
===

* Task decorators

Write tasks as regular functions and decorate them.
There are both task(), and periodic_task() decorators.

* Tasks are automatically registered

Registering the tasks manually was getting tedious, so now you don't have
to anymore. You can still do it manually if you need to, just
disable Task.autoregister. The concept of abstract task classes
has also been introduced; this is like Django models, where only the
subclasses of an abstract task are registered.
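The registration-as-a-side-effect idea in miniature (a sketch only, not Celery's actual registry):

```python
tasks = {}

def task(fun):
    # Decorating a plain function registers it under its name;
    # no separate registration step is needed.
    tasks[fun.__name__] = fun
    return fun

@task
def add(x, y):
    return x + y
```

After decoration, `tasks["add"](2, 2)` dispatches through the registry to the original function.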

* Events

If enabled, the worker will send events, telling you what tasks it
executes, their results, and how long it took to execute them. It also
sends out heartbeats, so listeners are able to detect nonfunctional
workers. This is the basis for the new real-time web monitor we're working 
on
(celerymon: http://github.com/ask/celerymon/).

* Rate limiting

Global and per task rate limits. 10 tasks a second? or one an hour? You
decide. It's using the token bucket algorithm, which is
commonly used for network traffic shaping. It accounts for bursts of
activity, so your workers won't be bored by having nothing to do.
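The token bucket named above, in miniature (a sketch, not Celery's implementation): tokens refill continuously at `rate` per second up to `capacity`, and a task may run whenever a full token is available, which is what permits bursts.

```python
class TokenBucket(object):
    def __init__(self, rate, capacity, clock):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self._tokens = self.capacity
        self._clock = clock
        self._last = clock()

    def can_consume(self, tokens=1):
        # Refill according to elapsed time, then try to take `tokens`.
        now = self._clock()
        self._tokens = min(self.capacity,
                           self._tokens + (now - self._last) * self.rate)
        self._last = now
        if self._tokens >= tokens:
            self._tokens -= tokens
            return True
        return False

# Driven by a fake clock for demonstration:
t = [0.0]
bucket = TokenBucket(rate=2, capacity=2, clock=lambda: t[0])
assert bucket.can_consume() and bucket.can_consume()   # burst of 2 allowed
assert not bucket.can_consume()                        # bucket empty
t[0] += 0.5                                            # 0.5s at rate 2 -> +1
assert bucket.can_consume()
```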


* New periodic task service.

Periodic tasks are no longer dispatched by celeryd, but instead by a
separate service called *celerybeat*. This is an optimized, centralized
service dedicated to your periodic tasks, which means you don't have to
worry about deadlocks or race conditions any more. But that does mean you
have to make sure only one instance of this service is running at any one
time.

  **TIP:** If you're only running a single celeryd server, you can embed
  celerybeat inside it. Just add the --beat argument.


* Broadcast commands

If you change your mind and don't want to run a task after all, you
now have the option to revoke it.

Also, you can rate limit tasks or even shut down the worker remotely.

It doesn't have many commands yet, but we're waiting for broadcast
commands to reach its full potential, so please share your ideas
if you have any.

* Multiple queues

The worker is able to receive tasks on multiple queues at once.
This opens up a lot of new possibilities when combined with the impressive
routing support in AMQP.

* Platform agnostic message format.

  The message format has been standardized and is now using the ISO-8601 format
  for dates instead of Python datetime objects. This means you can write task
  consumers in other languages than Python (eceleryd anyone?)

* Timely

  Periodic tasks are now scheduled on the clock, i.e. timedelta(hours=1)
  means every hour at :00 minutes, not every hour from the server starts.
  To revert to the previous behavior you have the option to enable
  PeriodicTask.relative.

* ... and a lot more!

To read about these and other changes in detail, please refer to
the change log: http://celeryproject.org/docs/changelog.html
This document contains crucial information for those
upgrading from a previous version of Celery, so be sure to read the entire
change set 
