On Wed, Jan 26, 2011 at 4:52 AM, Raoul Duke <[email protected]> wrote:
> On Tue, Jan 25, 2011 at 5:21 AM, Niclas Hedhman <[email protected]> wrote:
>> a transaction of that request, but once you introduce fire-forget
>> semantics, things quickly get very tricky
>
> what do you mean in particular? do you mean having SLAs that work for
> the overall use cases? do you mean dealing with inevitable
> inconsistencies?
I will give you an example. I have a number of "feeds" of external events that simply "arrive" at my system at a high rate. The consumer is only allowed a very short time (ms) to take the event off the queue in a transaction, or it will time out (the assumption being that the consumer has died and some other consumer will need to process the event). So, "transaction 1" is to get the event into a persisted store of some kind and commit that transaction.

Then one needs to dispatch a 'request' to a worker to do some computation, which in my case can take seconds. And in this lies a coordination problem (unless using MQ in an HA setup, or similar, which was Rickard's argument): the 'requesting party' may die, the 'processing party' may die, or both may die, and either way the 'request' must get processed once. I have concluded that to make this simpler, the 'requesting party' must simply not exist, and the 'request' must be part of "transaction 1". A quick transacted 'forward' of some kind, for instance Event Sourcing...

Now, Event Sourcing solves the problem of not losing anything, and seems to be a great way to have many independent consumers, but in the scenario I am looking at, the consumers need to be coordinated at relatively high speed. I could have a single consumer of the Atom feed that Rickard was talking about, but that introduces quite a bit of architectural overhead in the fail-over mechanism. Instead, I am opting for a distributed, transacted offer/take setup, based on Hazelcast's maps and locks.

It goes something like this in Qi4j terms:

1. The external event is converted into a Domain Event (Immutable Entity) and stored for audit reasons.
2. The Domain Event is 'published' synchronously within the JVM, and some service(s) sees that this should trigger a 'worker'.
3. An Execution (ValueComposite) is created, converted to JSON and put() into a 'worker map'.
4. The external event feed is committed.
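To make the offer/take round trip concrete, here is a minimal sketch in plain Java. It models Hazelcast's distributed IMap with a ConcurrentHashMap and models the worker's lock()-then-remove() claim with an atomic remove(); all class and method names here are illustrative, not actual Qi4j or Hazelcast API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the transacted offer/take. ConcurrentHashMap stands in for
// Hazelcast's distributed IMap; in the real setup the map is shared
// across JVMs and entries are claimed with IMap locks.
public class OfferTakeSketch {

    // The 'worker map': execution id -> serialized Execution (JSON).
    static final Map<String, String> workerMap = new ConcurrentHashMap<>();

    // Producer side ("transaction 1"): persist the Domain Event,
    // publish the Execution into the worker map, commit the feed take.
    static void offer(String executionId, String executionJson) {
        // 1. store Domain Event for audit (elided)
        // 2. put() the Execution so any worker node can pick it up
        workerMap.put(executionId, executionJson);
        // 3. commit the external event feed transaction (elided)
    }

    // Worker side: scan the map, try to claim an entry, execute it.
    // Here the atomic remove() models the claim: only one worker wins
    // the entry. A real worker would lock(), execute, then remove(),
    // so that a crash mid-execution leaves the entry for another
    // worker to retry.
    static int drain() {
        int executed = 0;
        for (String id : workerMap.keySet()) {
            String json = workerMap.remove(id);
            if (json != null) {
                execute(json);   // potentially seconds of computation
                executed++;
            }
        }
        return executed;
    }

    static void execute(String executionJson) {
        // rebuild the Execution from JSON and run it (elided)
    }

    public static void main(String[] args) {
        offer("exec-1", "{\"task\":\"recalculate\"}");
        offer("exec-2", "{\"task\":\"aggregate\"}");
        int n = drain();
        System.out.println("executed=" + n + " remaining=" + workerMap.size());
    }
}
```

The design point is that the producer's only job is the put(); once "transaction 1" commits, no 'requesting party' needs to stay alive, and any surviving worker can claim the entry.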
The worker services continuously read the various 'worker maps' and, upon seeing an entry, try to lock() it; if that succeeds, they read the JSON, convert it back into the Execution object (ValueBuilder.withJSON()) and call execute() on it. When completed, the entry is removed from the map.

Not sure if that answers your questions...

Cheers
--
Niclas Hedhman, Software Developer
http://www.qi4j.org - New Energy for Java

I live here; http://tinyurl.com/3xugrbk
I work here; http://tinyurl.com/24svnvk
I relax here; http://tinyurl.com/2cgsug

_______________________________________________
qi4j-dev mailing list
[email protected]
http://lists.ops4j.org/mailman/listinfo/qi4j-dev

