On 2011-01-26 02.04, Niclas Hedhman wrote:
I have a number of "feeds" of external events that simply "arrive" at
my system at a high rate. The consumer is only allowed a very short
time (milliseconds) to take the event off the queue in a transaction,
or it will time out (on the assumption that the consumer has died and
some other consumer will need to process it).
So, "transaction 1" is to get the event into a persisted store of some
kind and then commit that transaction.
Then one needs to dispatch a 'request' to a worker to do some
computation, which in my case can take seconds. And herein lies a
coordination problem (unless using an MQ in an HA setup, or similar,
which was Rickard's argument). The 'requesting party' may die. The
'processing party' may die. Both may die. And either way, the
'request' must get processed exactly once. I have concluded that to
make this simpler, the 'requesting party' must simply not exist, and
the 'request' must be part of "transaction 1" -- a quick transacted
'forward' of some kind, for instance Event Sourcing...
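The idea above can be sketched in a few lines: the incoming event and the follow-up work 'request' are written in one commit, so there is no separate 'requesting party' left to die between the two steps. All names here are hypothetical, and the synchronized block merely stands in for a real transactional store (or an event-sourced log) that would make both writes atomic:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class Transaction1Sketch {

    // Stand-ins for a persisted event store and an outbox of pending
    // work requests; a real system would keep both in one database.
    static final List<String> eventStore = new ArrayList<>();
    static final Queue<String> workOutbox = new ArrayDeque<>();

    // Takes one event off the incoming queue: both writes happen under a
    // single "commit" (here just a synchronized block). If the consumer
    // dies before committing, neither write is visible and the event
    // times out back to another consumer.
    static void acceptEvent(String event) {
        synchronized (Transaction1Sketch.class) {
            eventStore.add(event);              // persist the event
            workOutbox.add("process:" + event); // and the request, atomically
        }
    }

    public static void main(String[] args) {
        acceptEvent("feed-event-42");
        System.out.println(eventStore);  // prints [feed-event-42]
        System.out.println(workOutbox);  // prints [process:feed-event-42]
    }
}
```

A worker then polls the outbox and deletes the request in its own transaction once the computation is done, so a dead worker just leaves the request behind for another worker to pick up.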
This sounds similar to the use case that Jim Webber presented at the
last Oredev. Check out the video and see if his approach can be applied in
your scenario:
http://vimeo.com/17156605
Basically, a coordinating server (or a set of them) redirects HTTP
requests to compute machines, and HTTP semantics are used for
success/failure handling.
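One way to read "use HTTP semantics for success/failure handling" is that the status code returned by a compute machine decides whether the request is done, must be resubmitted to another machine, or is a permanent failure. The mapping below is an assumption for illustration, not Jim Webber's exact scheme:

```java
public class HttpDecision {

    // Hypothetical coordinator policy keyed on standard status classes.
    static String decide(int status) {
        if (status >= 200 && status < 300) return "done";  // worker succeeded
        if (status >= 500) return "retry";                 // worker failed or died: redirect elsewhere
        if (status >= 400) return "fail";                  // client error: retrying will not help
        return "retry";                                    // 1xx/3xx here: play safe, resubmit
    }

    public static void main(String[] args) {
        System.out.println(decide(200)); // prints done
        System.out.println(decide(503)); // prints retry
        System.out.println(decide(404)); // prints fail
    }
}
```

For the "must be processed exactly once" requirement this only works if the computation itself is idempotent, since a timeout followed by a retry can execute it twice.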
/Rickard
_______________________________________________
qi4j-dev mailing list
[email protected]
http://lists.ops4j.org/mailman/listinfo/qi4j-dev