Here is what I found at http://celeryq.org/docs/configuration.html:


By default it doesn't need any MQ broker; it works via a database backend
supported by SQLAlchemy. It can also use memcached.


   - database (default)

   Use a relational database supported by SQLAlchemy (http://sqlalchemy.org/).
   See "Database backend settings"
   (http://celeryq.org/docs/configuration.html#conf-database-result-backend).

   - cache

   Use memcached (http://memcached.org/) to store the results. See "Cache
   backend settings"
   (http://celeryq.org/docs/configuration.html#conf-cache-result-backend).


I think we can easily make it work as-is with memcached, and a DAL backend
won't be hard!
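For reference, a minimal celeryconfig.py for the two result backends above might look like this. The sqlite URI and memcached address are illustrative placeholders, not from the thread; the option names follow the Celery 2.x configuration docs linked above:

```python
# celeryconfig.py -- sketch of the "database" result backend
# (relational DB via SQLAlchemy), per the linked Celery 2.x docs.
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "sqlite:///celery_results.sqlite"  # illustrative URI

# Alternatively, the "cache" backend storing results in memcached:
# CELERY_RESULT_BACKEND = "cache"
# CELERY_CACHE_BACKEND = "memcached://127.0.0.1:11211/"  # illustrative address
```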

On Sun, Nov 21, 2010 at 4:40 AM, Michele Comitini <
[email protected]> wrote:

> RabbitMQ is Erlang, which seems a good sign, but it adds too many
> dependencies. Redis is C.
>
> If it were possible to replace SQLAlchemy with the DAL easily, then we
> could integrate it.
> who is going to investigate?
>
>
>
> 2010/11/20 Phyo Arkar <[email protected]>:
> > One thing I am not clear about with celery:
> >
> > It needs an MQ backend installed and configured, right? (RabbitMQ, Redis,
> > etc.)
> > They are whole new things for me, and they are Java/C, so many
> > dependencies.
> >
> > Please Celerify, lol :D
> >
> > On Sun, Nov 21, 2010 at 4:17 AM, Michele Comitini
> > <[email protected]> wrote:
> >>
> >> +1
> >>
> >>
> >>
> >> 2010/11/20 Phyo Arkar <[email protected]>:
> >> > Wow
> >> >
> >> > celery is freaking awesome!
> >> >
> >> > http://pypi.python.org/pypi/celery/2.1.3#example
> >> >
> >> > I think we need it in web2py! All other web frameworks have it now!
> >> >
> >> > On 11/19/10, Niphlod <[email protected]> wrote:
> >> >> The only thing about multiprocessing's queue that I don't like (it's
> >> >> not its fault, but psycopg's) is that I have to create multiple
> >> >> connections to the database, one for every process.
> >> >>
> >> >> Multiprocessing's queue, the threading one, and deque are where you
> >> >> end up with your hands dirty...
> >> >> If you need persistence and "security", you usually need to:
> >> >> - take the message and store it somewhere (a table called "queued"?)
> >> >> - give it a uuid
> >> >> - prepare a field in a "result" store (usually a table with uuid and
> >> >> blob columns)
> >> >> - have whoever reads "queued" shoot an update to that result store
> >> >> when it has the result
> >> >> - retrieve the results, send them away and/or delete them from the
> >> >> pool, comparing with the "queued" table as soon as possible
> >> >>
> >> >> If you are not a "persistence" maniac, you can always store the
> >> >> message in a deque, pop() it, and you're done!
> >> >>
> >> >> In one or two cases I found myself facing some issues, and next time
> >> >> I'm going to have a look at pyres (it seems nice, simple, and stable,
> >> >> and ultimately nicer to "hack in" the code, said a friend of mine),
> >> >> and if I don't make it work I'm going to learn celery once and for
> >> >> all (it seems the best implementation out there).
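The "queued" + "result" table pattern Niphlod outlines above can be sketched roughly like this. This is a minimal illustration using sqlite3 and a single connection for brevity; the table names, columns, and the `upper()` stand-in "work" are all hypothetical, not from the thread:

```python
# Sketch of the queued/result-store pattern: store the message with a
# uuid, reserve a result row, let a reader fill it in, then retrieve
# and delete. Names and schema are illustrative.
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE queued (uid TEXT PRIMARY KEY, message TEXT)")
db.execute("CREATE TABLE result (uid TEXT PRIMARY KEY, payload BLOB)")

def enqueue(message):
    # Store the message under a fresh uuid.
    uid = str(uuid.uuid4())
    db.execute("INSERT INTO queued VALUES (?, ?)", (uid, message))
    # Reserve a row in the result store for a worker to fill in later.
    db.execute("INSERT INTO result VALUES (?, NULL)", (uid,))
    return uid

def work_one():
    # A reader takes one queued message, does the work, and shoots an
    # update to the result store, then removes the queued entry.
    row = db.execute("SELECT uid, message FROM queued LIMIT 1").fetchone()
    if row is None:
        return None
    uid, message = row
    db.execute("UPDATE result SET payload = ? WHERE uid = ?",
               (message.upper(), uid))  # upper() stands in for real work
    db.execute("DELETE FROM queued WHERE uid = ?", (uid,))
    return uid

def fetch_result(uid):
    # Retrieve the result and delete it from the pool.
    (payload,) = db.execute("SELECT payload FROM result WHERE uid = ?",
                            (uid,)).fetchone()
    db.execute("DELETE FROM result WHERE uid = ?", (uid,))
    return payload
```

A real deployment would use separate connections per process (the psycopg issue mentioned above) and proper transactions; this just shows the lifecycle of one message.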
> >> >>
> >> >>
> >> >>
> >> >> On 19 Nov, 04:26, mdipierro <[email protected]> wrote:
> >> >>> Do you have an example...?
> >> >>>
> >> >>> On Nov 18, 9:16 pm, Phyo Arkar <[email protected]> wrote:
> >> >>>
> >> >>>
> >> >>>
> >> >>> > I use multiprocessing's Queue across processes to parse a huge
> >> >>> > list of files and communicate back with the database. It works
> >> >>> > great.
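Since an example was asked for: the pattern Phyo describes (worker processes fed by a shared multiprocessing.Queue, results coming back on a second queue) might look like the sketch below. The `parse_file` body and file names are placeholders, not from the thread:

```python
# Workers pull file paths from an input Queue and push parsed results
# onto an output Queue; a None sentinel per worker signals shutdown.
from multiprocessing import Process, Queue

def parse_file(path):
    # Stand-in for real parsing work (e.g. extracting text/metadata).
    return {"path": path, "length": len(path)}

def worker(in_q, out_q):
    # Consume paths until the None sentinel arrives.
    for path in iter(in_q.get, None):
        out_q.put(parse_file(path))

def run(paths, nworkers=2):
    in_q, out_q = Queue(), Queue()
    procs = [Process(target=worker, args=(in_q, out_q))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    for path in paths:
        in_q.put(path)
    for _ in procs:
        in_q.put(None)  # one sentinel per worker
    # Drain results before joining, so workers never block on a full queue.
    results = [out_q.get() for _ in paths]
    for p in procs:
        p.join()
    return results
```

Each worker would open its own database connection inside `worker()`, which is exactly the per-process connection overhead Niphlod mentions.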
> >> >>>
> >> >>> > On 11/19/10, Pystar <[email protected]> wrote:
> >> >>>
> >> >>> > > I would like to know if anyone here has used a message queue,
> >> >>> > > and which one comes recommended?
> >> >
> >
> >
>
