On Tue, Feb 23, 2010 at 5:44 PM, wmoussel <[email protected]> wrote:
>
> So you mean my direct:processFile and direct:processLine are multi-threaded,
> or do I have to put a threads(10) somewhere?
>
> It looks to me like it's processing one file at a time. Is it just an
> illusion?
>
That is because the file consumer route itself is not concurrent.
In fact you don't want it to be, as it's a pain to have 20 threads
fighting over the same files.
from("file:input?move=output").
So you simply just add .threads(20) right after it to let it be concurrent
from("file:input?move=output").threads(20)
I wrote a blog entry on it:
http://davsclaus.blogspot.com/2009/05/on-road-to-camel-20-concurrency-with.html
Note that at that time we talked about naming threads async, which we kept as is.
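
For completeness, here is a minimal sketch of what the whole route could look
like with threads (Camel 2.x Java DSL assumed; the class name and the log
endpoint are just illustrative, direct:processFile is taken from your own route):

import org.apache.camel.builder.RouteBuilder;

public class ConcurrentFileRoute extends RouteBuilder {
    public void configure() throws Exception {
        // the file consumer itself stays single threaded, so there is
        // no fighting over the files in the input directory
        from("file:input?move=output")
            // hand each exchange over to a pool of 20 threads
            .threads(20)
            .to("direct:processFile");

        // this route is now invoked concurrently by those 20 threads
        from("direct:processFile")
            .to("log:processed");
    }
}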
> Thanks
>
>
> Claus Ibsen-2 wrote:
>>
>> Hi
>>
>> direct can easily be concurrent. Just think of direct as if it was a direct
>> method call that is invoked in the same thread.
>> So if you have 10 different threads that concurrently route over direct,
>> then you have concurrency.
>>
>> It's only SEDA that uses a buffer in between (in fact it's a Queue),
>> which is used to transfer the data from one thread to another.
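>>
>> A tiny sketch to illustrate the difference (the endpoint names and
>> myProcessor below are just placeholders, not from your route):
>>
>>   // direct: the caller's own thread runs the consumer route,
>>   // so it behaves like a plain method call
>>   from("direct:inline").process(myProcessor);
>>
>>   // seda: the message is handed over via an in-memory queue and
>>   // processed by the consumer's own thread pool
>>   from("seda:buffered?concurrentConsumers=10").process(myProcessor);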
>>
>>
>> On Tue, Feb 23, 2010 at 4:34 PM, wmoussel <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> I'm basically trying to achieve this:
>>>
>>> from("file:input?move=output").loadBalance().to("direct:Q1","direct:Q2","direct:Q3")
>>>
>>> from("direct:Q1").setHeader("queue",constant("Q1")).to("direct:processFile")
>>> from("direct:Q2").setHeader("queue",constant("Q2")).to("direct:processFile")
>>> from("direct:Q3").setHeader("queue",constant("Q3")).to("direct:processFile")
>>>
>>> from("direct:processFile").split(body(),myConcatenateLinesStrategy).parallelProcessing()
>>> .process(...)
>>> [...]
>>> .to("direct:processLine")
>>> .end()
>>> .process(myEndOfProcessingFileProcessor)
>>> // Only now move the file to output
>>> ;
>>>
>>> I think there is no point in having:
>>> - parallelProcessing on the splitter, since I'm using
>>> to("direct:processLine") which runs in only one thread
>>> - loadBalance, since they all point to the same direct
>>>
>>> And if I use seda?concurrentConsumers=10 instead, the file is moved
>>> before it's processed (the UnitOfWork stops at to("seda:"), I think)...
>>>
>>> From what I understand of threads(10), it wouldn't help either, because the
>>> exchange wouldn't get to the myEndOfProcessingFileProcessor part.
>>>
>>> Why is there no way to have direct?concurrentConsumers=10? Or am I
>>> missing something?
>>>
>>> Thanks a lot !
>>>
>>> Wandrille
>>>
--
Claus Ibsen
Apache Camel Committer
Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus