You can do what you want; Celery is a task-queue "handler" on steroids. 
With it you can pass messages, assign more workers if your "message 
queue" is lagging behind (e.g. 2 workers if the queue contains < 100 
updates, 8 workers if it contains > 1000), and so on.
For basic tasks you can roll your own, but if your requirements vary over 
time (and along other "dimensions") you'll end up with Celery anyway.
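The "roll your own" option can be sketched with just the Python standard library (threads plus `queue.Queue`). Everything here is illustrative, not from Celery or web2py; `scale_workers` just mirrors the worker-count rule mentioned above:

```python
# Minimal roll-your-own task queue: a shared queue.Queue plus worker
# threads, with the worker count picked from the backlog size.
import queue
import threading

tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    # Pull (func, args) tasks until a None sentinel says stop.
    while True:
        item = tasks.get()
        if item is None:
            break
        func, args = item
        out = func(*args)
        with results_lock:
            results.append(out)
        tasks.task_done()

def scale_workers(backlog):
    # Illustrative version of the rule above: 2 workers for a small
    # backlog, 8 when the queue is lagging behind.
    return 2 if backlog < 100 else 8

def run(jobs):
    for job in jobs:
        tasks.put(job)
    n = scale_workers(tasks.qsize())
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    tasks.join()               # wait until every task is processed
    for _ in threads:
        tasks.put(None)        # one stop sentinel per worker
    for t in threads:
        t.join()
    return n

n_workers = run([(pow, (2, 10)) for _ in range(50)])
```

This covers the happy path only; retries, persistence of the queue across restarts, and cross-machine messaging are exactly the parts Celery (with a broker like RabbitMQ) gives you for free.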

I know it seems overkill, but it comes batteries included: init 
scripts, supervisord configs, daemons; it's heavily tested, etc. etc. etc.

On Wednesday, May 16, 2012 at 02:24:00 UTC+2, cyan wrote:
>
>
>> I think I'm not mistaken in saying that Celery is the most used library 
>> for this kind of operation. It's written in Python and built on RabbitMQ 
>> (it supports other brokers too, but primarily Rabbit) to handle queues. It seems 
>> huge, but it's fairly easy to set up (especially if you planned to use RabbitMQ 
>> anyway); it can be used with a few statements for very simple tasks, 
>> but can be extremely fine-tuned for most of the requirements out there.
>>
>
> A follow-up question: I'm a bit puzzled by the differences between Celery 
> and RabbitMQ. Is Celery the same idea as the built-in scheduler of web2py? 
> In particular, in my specific case, I just need a channel that 
> transports data from one server to the other, and the receiving end will save 
> the data into a database. For that, I guess RabbitMQ alone is sufficient. 
> Why do I need an extra Celery task queue? Is it for some scaling 
> consideration - that is, when I have multiple instances of the server on each 
> end? Thanks.
>
