The only thing about multiprocessing's Queue that I don't like (it's not
its fault, but psycopg's) is that I have to create multiple connections
to the database, one for every process.

Multiprocessing's Queue, threading's, and deque are where you end up
getting your hands dirty...
If you need persistence and "safety", you usually need to:
- take the message and store it somewhere (a table called "queued"?)
- give it a uuid
- prepare a field in a "result" store (usually a table with uuid and
blob columns)
- have whoever reads "queued" shoot an update to that result store
when it has the result
- retrieve the results, send them off and/or delete them from the pool,
comparing against the "queued" table, as soon as possible
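The steps above can be sketched as follows, with sqlite3 standing in for the real database; the table and column names ("queued", "result", "payload") and the upper-casing "work" are just illustrative:

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE queued (uuid TEXT PRIMARY KEY, message TEXT)")
db.execute("CREATE TABLE result (uuid TEXT PRIMARY KEY, payload BLOB)")


def enqueue(message):
    # Store the message, give it a uuid, and reserve a row in the
    # "result" store.
    uid = str(uuid.uuid4())
    db.execute("INSERT INTO queued VALUES (?, ?)", (uid, message))
    db.execute("INSERT INTO result VALUES (?, NULL)", (uid,))
    db.commit()
    return uid


def work():
    # A reader of "queued" shoots an update to the result store
    # when it has the result.
    rows = db.execute("SELECT uuid, message FROM queued").fetchall()
    for uid, message in rows:
        db.execute("UPDATE result SET payload = ? WHERE uuid = ?",
                   (message.upper(), uid))
    db.commit()


def collect(uid):
    # Retrieve the result, then delete the message from both tables.
    (payload,) = db.execute(
        "SELECT payload FROM result WHERE uuid = ?", (uid,)).fetchone()
    db.execute("DELETE FROM queued WHERE uuid = ?", (uid,))
    db.execute("DELETE FROM result WHERE uuid = ?", (uid,))
    db.commit()
    return payload
```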

If you are not a "persistence" maniac, you can always store the message
in a deque, pop() it, and you're done!
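The non-persistent version really is that short; note that popleft() gives you FIFO order, while pop() would give you LIFO:

```python
from collections import deque

q = deque()
q.append("task-1")  # producer side
q.append("task-2")

msg = q.popleft()   # consumer side: oldest message first
```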

I found myself facing some issues in one or two cases, and next time
I'm going to have a look at pyres (it seems nice, simple, and stable,
"and ultimately nicer to hack in the code", said a friend of mine), and
if that doesn't work out I'm going to learn celery once and for all (it
seems to be the best implementation out there).



On 19 Nov, 04:26, mdipierro <[email protected]> wrote:
> Do you have an example...?
>
> On Nov 18, 9:16 pm, Phyo Arkar <[email protected]> wrote:
>
>
>
> > i use Multiprocessing's Queue across processess which works to parse
> > huge list of files and communicate back with database. they work
> > great.
>
> > On 11/19/10, Pystar <[email protected]> wrote:
>
> > > I would like to know if any one here has used any message queue? and
> > > which one comes recommended?
>
