A while ago, there was a request to add a feature in the Jackrabbit
clustering implementation: to be able to tell if an event has been
processed already by another node participating in the cluster.

The obvious use case is that of one producer and many consumers, with
the producer generating lots of new sets of nodes and the consumers
rendering them. Assuming that rendering may be a CPU-intensive
operation, there would be a clear advantage in spreading the load.

Back then I thought this was not really needed, because the item
associated with the event could be locked by the first observer that
sees it, so that every other listener would simply ignore the event and
move on to the next one.
Now, after looking at the implementation, I actually wonder whether the
locking approach would even work: is it guaranteed that if server A
sees a new node in the Journal and locks it (with an open-scoped lock),
another server B can no longer lock it? It seems to me that this really
depends on the read and lock operations being atomic, and I am afraid
they are not.
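To make the worry concrete, here is a minimal single-JVM sketch of the
property the locking approach would need. The class, method names, and
the ConcurrentHashMap "lock table" are all hypothetical stand-ins, not
Jackrabbit API: the point is only that the claim must be a single atomic
check-and-set (like putIfAbsent), not a separate read followed by a
lock, which is exactly what I am not sure the current implementation
guarantees.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a cluster-wide lock table. This does NOT
// model Jackrabbit's actual lock implementation; it only illustrates
// the atomicity property an open-scoped lock would need to provide.
public class ClaimDemo {
    // Maps an event/node id to the id of the server that claimed it.
    static final ConcurrentHashMap<String, String> claims =
            new ConcurrentHashMap<>();

    // Atomic claim: putIfAbsent succeeds for exactly one caller, so
    // exactly one server ends up processing the event. A non-atomic
    // "read, then lock" sequence would leave a window in which both
    // servers see the node as unlocked.
    static boolean tryClaim(String eventId, String serverId) {
        return claims.putIfAbsent(eventId, serverId) == null;
    }

    public static void main(String[] args) {
        boolean a = tryClaim("node-42", "serverA");
        boolean b = tryClaim("node-42", "serverB");
        System.out.println("serverA claimed: " + a); // true
        System.out.println("serverB claimed: " + b); // false
    }
}
```

If the JCR lock call behaves like tryClaim (atomic, first caller wins,
everyone else fails cleanly), the scheme works; if it is read-then-lock
under the hood, two servers can both believe they won.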

Implementing an event queue, as Tobias suggested back then, would
still present the same locking problem: unless only one cluster node
listens on that queue, how do we prevent another node from attempting
(and possibly succeeding) to lock the queue?
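For contrast, a queue only avoids the problem if the dequeue itself is
atomic, so that taking an event and claiming it are one operation with
no separate lock step to race on. A single-JVM sketch, using
ConcurrentLinkedQueue.poll() as a stand-in for such an atomic
cluster-wide dequeue (which Jackrabbit does not currently offer, hence
the question):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Stand-in for a shared event queue with an atomic "take": each event
// is handed to exactly one consumer, and losing consumers simply get
// nothing instead of racing for a lock on the item.
public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> events = new ConcurrentLinkedQueue<>();
        events.add("event-1");

        String takenByA = events.poll(); // consumer A dequeues atomically
        String takenByB = events.poll(); // consumer B finds it already gone

        System.out.println("A got: " + takenByA); // event-1
        System.out.println("B got: " + takenByB); // null
    }
}
```

The open question is whether anything in the repository can play the
role of that atomic poll across cluster nodes.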

Any hints on how to (efficiently) implement this?

Thanks,
Alessandro
