We should expand that section. The current queue recipe guarantees that items 
are consumed at most once. To guarantee at-least-once delivery, the consumer 
creates an ephemeral node queue-X-inprocess to indicate that queue-X is being 
processed. Once the queue element has been processed, the consumer deletes 
queue-X and then queue-X-inprocess (in that order).
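For example, the consume step might look roughly like the following (untested)
Java sketch; the parent path /queue, the zk handle, and the process() callback
are placeholders for illustration, not part of the recipe itself:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Try to claim and process one queue element; returns false if another
    // consumer already holds the -inprocess flag for it.
    boolean consume(ZooKeeper zk, String element) throws Exception {
        String item = "/queue/" + element;      // e.g. a queue-X child of /queue
        String flag = item + "-inprocess";      // sibling flag at the same level
        try {
            // Ephemeral: the flag disappears if this consumer's session dies.
            zk.create(flag, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL);
        } catch (KeeperException.NodeExistsException e) {
            return false;                       // someone else is processing it
        }
        byte[] payload = zk.getData(item, false, null);
        process(payload);                       // application-specific work (placeholder)
        zk.delete(item, -1);                    // delete queue-X first ...
        zk.delete(flag, -1);                    // ... then queue-X-inprocess
        return true;
    }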

Using an ephemeral node means that if a consumer crashes, the *-inprocess node 
will be deleted automatically, allowing the queue elements it was working on to 
be consumed by someone else. Putting the *-inprocess nodes at the same level as 
the queue-X nodes allows the consumer to get the list of queue elements and the 
in-process flags with a single getChildren call. The *-inprocess flag ensures 
that only one consumer is processing a given item. By deleting queue-X before 
queue-X-inprocess, we make sure that no other consumer will see queue-X as 
available for consumption after it has been processed but before it is deleted.
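As a rough illustration of that single getChildren call (again, /queue and the
method name are just placeholders), a consumer could compute the set of
available elements locally like this:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import org.apache.zookeeper.ZooKeeper;

    // One call returns both the queue-X items and the queue-X-inprocess flags,
    // so availability can be decided without further round trips.
    List<String> availableElements(ZooKeeper zk) throws Exception {
        List<String> children = zk.getChildren("/queue", false);
        Set<String> all = new HashSet<String>(children);
        List<String> available = new ArrayList<String>();
        for (String child : children) {
            if (!child.endsWith("-inprocess")
                    && !all.contains(child + "-inprocess")) {
                available.add(child);           // no consumer has flagged it yet
            }
        }
        return available;
    }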

This is at-least-once, because the consumer has a race condition: it may 
process the item and then crash before it can delete the corresponding queue-X 
node, in which case the item will be consumed and processed again.


-----Original Message-----
From: Stuart White [mailto:stuart.whi...@gmail.com] 
Sent: Thursday, January 08, 2009 7:15 AM
To: zookeeper-user@hadoop.apache.org
Subject: Distributed queue: how to ensure no lost items?

I'm interested in using ZooKeeper to provide a distributed
producer/consumer queue for my distributed application.

Of course I've been studying the recipes provided for queues, barriers, etc...

My question is: how can I prevent packets of work from being lost if a
process crashes?

For example, following the distributed queue recipe, when a consumer
takes an item from the queue, it removes the first "item" znode under
the "queue" znode.  But, if the consumer immediately crashes after
removing the item from the queue, that item is lost.

Is there a recipe or recommended approach to ensure that no queue
items are lost in the event of process failure?

