Hi,

I have different mediations. Each mediation handles one incoming file type
(CSV, delimited TXT, or fixed-length TXT) during processing.
In the same CSV and TXT files the Tokenizer works fine with \n, \r\n, or \r.
A few minutes ago I found a solution for a delimited TXT file: adding the
charset=iso-8859-1 option to the URI of the file component, avoiding
convertBodyTo, and tokenizing on \n.
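
For reference, a rough sketch of how the route looks now; the directory and
queue names ("data/inbox", "lines") are only placeholders, not my real endpoints:

import org.apache.camel.builder.RouteBuilder;

// Sketch only: "data/inbox" and "lines" are placeholder names.
public class DelimitedFileRoute extends RouteBuilder {
    @Override
    public void configure() {
        // charset=iso-8859-1 on the file endpoint avoids the convertBodyTo step
        from("file:data/inbox?charset=iso-8859-1")
            // tokenize on \n and stream, so the whole file body is never held in memory
            .split(body().tokenize("\n")).streaming()
                .to("activemq:queue:lines")
            .end();
    }
}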

Thank you very much for your help/suggestions. Memory usage has improved
significantly.

Now I'll try to improve ActiveMQ performance, because the producer side
(Read-Split-StoreInQueue) is very slow (it puts only ~2 msg/s into the queue
in some processes). My goal is to store the whole file, split line by line,
into the queue as quickly as possible to increase reliability.
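
One thing I plan to try on the producer side is a pooled connection factory
and asynchronous sends; here is a sketch of what I have in mind (the broker
URL, pool size, and the async-send choice are assumptions, not measured settings):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.impl.DefaultCamelContext;

// Sketch only: broker URL, pool size and async sends are assumptions.
public class ProducerTuning {
    public static void main(String[] args) throws Exception {
        // jms.useAsyncSend=true stops the route from blocking on every send
        PooledConnectionFactory pooled =
                new PooledConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
        pooled.setMaxConnections(8); // reuse connections instead of one per message

        ActiveMQComponent activemq = new ActiveMQComponent();
        activemq.setConnectionFactory(pooled);

        DefaultCamelContext context = new DefaultCamelContext();
        context.addComponent("activemq", activemq);
        // the Read-Split-StoreInQueue routes would be added here
        context.start();
    }
}

I know async sends trade away some send-side guarantees, so I still need to
weigh that against the reliability goal.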

Thanks a lot again.

Best Regards

Michele
