Hiram Chirino wrote:
> 
>> > I wonder if the simplest way to implement the described split and
>> > aggregate pattern could be to just combine the splitter and aggregator
>> > into 1 delegate processor.  For example it would take its input,
>> > split it and forward each part to the delegate for processing.  Then
>> > it would wait for the result of processing and aggregate those
>> > results.  The delegate processor would know how many things it split
>> > up so it would know how many things to aggregate.
>>
>> That's a great idea for a nice simple solution that solves many
>> problems :) Increasingly it might help if we can try to roll up
>> smaller patterns into larger patterns (or protocols?), so combining a
>> splitter and aggregator together into a single unit might help
>> simplify things for folks (and make it easier to avoid making
>> mistakes).
>>
>> I guess the downside of this approach is that only one large message
>> can be processed at a time. For example, using Invoices and LineItems:
>> if you had a pool of processors, you could only let folks process one
>> Invoice at a time (as the thread would block before sending any other
>> invoice's LineItems to be processed), but that might avoid
>> complications.
>>
> 
> Actually that would not be true.  It should be safe to call the
> Splitter/Aggregator concurrently since its state would be local to the
> exchange being processed.  For example:
> 
> from("jms:queue:input").thread(5).splitAndAggregate(...)...
> 
> And the processing of the split parts would not have to happen
> sequentially either if the Splitter/Aggregator were implemented as an
> AsyncProcessor.  It could do something like:
> 
>   ...splitAndAggregate(...).thread(5).processor(stepProcessor);
> 
> Basically it would sequentially send all the steps to the thread
> processor, but those would return right away since an async thread
> would be started to process each step to completion.  The
> Splitter/Aggregator would then just listen to the async completion
> events so that it could aggregate the results and return when all the
> steps have been aggregated.
> 
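The combined split/process/aggregate idea above can be sketched outside Camel with plain java.util.concurrent. Note this is only an illustration of the pattern, not Camel's actual API: `SplitAndAggregate` and its `process` method are hypothetical names, and the upper-casing "delegate" stands in for real per-part work. Because all state is local to one call, several invocations can run concurrently, and the parts of each input are processed in parallel on the pool:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

// Hypothetical sketch (not a real Camel class): a combined
// splitter/aggregator.  It splits its input, forwards each part to a
// delegate asynchronously, and aggregates once all parts are done.
public class SplitAndAggregate {
    static String process(String message, ExecutorService pool) {
        // Split: one part per comma-separated token (stand-in for LineItems)
        String[] parts = message.split(",");
        // Forward each part to the "delegate processor" asynchronously;
        // here the delegate just trims and upper-cases the part.
        List<CompletableFuture<String>> futures = java.util.Arrays.stream(parts)
            .map(p -> CompletableFuture.supplyAsync(() -> p.trim().toUpperCase(), pool))
            .collect(Collectors.toList());
        // The splitter knows how many parts it produced, so it knows
        // exactly how many results to wait for before aggregating.
        return futures.stream()
            .map(CompletableFuture::join)
            .collect(Collectors.joining("+"));
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        System.out.println(process("a, b, c", pool)); // prints A+B+C
        pool.shutdown();
    }
}
```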

Completely agree with this approach, but doesn't this create a strong
dependency on the DSL? I mean, you always have to do things this way:
from("someEndpoint").<concurrency.>splitAndAggregate().<concurrency.>withThisProcessor().to("someOtherEndpoint");

Isn't this too coupled? Or is the DSL flexible enough to do something
equivalent to this:

from("seda:lotsOfWork").multicastSplitter("markedWithThisToken").to("seda:oneProcessor",
"seda:anotherProcessor");

from("seda:oneProcessor").beanRef("doSomethingLenghty1", "withPOJOs")
.to("seda:joinResults");

from("seda:anotherProcessor").beanRef("doSomethingLenghty2", "withBeans")
.multicast().to("seda:joinResults", "jpa:butSaveThisOnes");

from("seda:joinResults").multicastAggregator("markedWithThisToken",
nonBatchStrategy)
.to("direct:result");

(Threads and details omitted... :)
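The decoupled routes above depend on an aggregator that correlates results by a token stamped at split time ("markedWithThisToken"). A minimal sketch of what such a correlation-based aggregator would have to do, assuming a count-based completion rule and hypothetical names (this is not Camel's AggregationStrategy API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: results arriving on "joinResults" are grouped by
// the correlation token, and a group is released only once the number of
// parts stamped by the splitter has arrived.  Late or out-of-order
// arrivals from either route are handled naturally.
public class TokenAggregator {
    private final Map<String, List<String>> pending = new ConcurrentHashMap<>();

    /** Record one result; returns the full group when complete, else null. */
    public synchronized List<String> onResult(String token, int expected, String body) {
        List<String> group = pending.computeIfAbsent(token, t -> new ArrayList<>());
        group.add(body);
        if (group.size() == expected) {
            pending.remove(token);   // correlation state is per-token, so
            return group;            // many tokens can be in flight at once
        }
        return null;
    }
}
```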

Well, it's simple but very powerful and flexible... like Camel? ;O)

Thanks, Alberto.
-- 
View this message in context: 
http://www.nabble.com/Aggregator-strategies-%28again%29-tf4750834s22882.html#a13591937
Sent from the Camel - Users mailing list archive at Nabble.com.