There are a number of answers to that question, but this should be relatively easy to fix. Since you are running out of memory, you should probably bump the max heap in the Karaf startup batch file. But you will also want to limit the number of rows you bring into memory at once. You already have the consumer set to streaming, which is what you want.
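For reference, the heap ceiling in Karaf is controlled by the JAVA_MAX_MEM variable in the startup environment file, bin/setenv.bat on Windows. A sketch (the values here are illustrative, not a recommendation; size them to your machine):

```
REM bin\setenv.bat -- uncomment and adjust; example values only
SET JAVA_MIN_MEM=512M
SET JAVA_MAX_MEM=2048M
```

On Unix the same variables are exported from bin/setenv instead.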
Since you are using the ActiveMQ component you can set the maximum number of items on the queue. You could also use the SEDA or Disruptor components and set the queue size to somewhere between 100 and 1,000, limiting the number of in-memory objects. You may want to try that just for testing purposes and then switch back to JMS. You don't appear to be using JMS here for its distributed capabilities, nor for durable subscribers or persistence mechanics on the queue. The SEDA component is much simpler: it is just a queue in memory with a configurable number of threads attached to it. Trying it will let you see whether you still have a memory problem.

Later, if you decide that you want JMS in the mix, you can easily swap it back in and then deal only with the problems associated with the queue and AMQ configuration. One item you could then look at is having the queue set up to limit the number of in-memory objects while the queue itself is backed by the persistent store (KahaDB). That's very simple to do and ships as a standard out-of-the-box configuration.

You might also want to consider putting the processor and Gson marshalling prior to the queue if the REST endpoint is the true bottleneck. The incoming data can be unmarshalled, processed, converted via Gson, and then put on the queue. When the consumer takes a message off the queue it does exactly one thing with it - invokes the REST endpoint.

How big are these objects? You'd mentioned that there are only 50,000 rows in the file, so that isn't much.

--
View this message in context: http://camel.465427.n5.nabble.com/Best-Strategy-to-process-a-large-number-of-rows-in-File-tp5779856p5779973.html
Sent from the Camel - Users mailing list archive at Nabble.com.
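To illustrate why a bounded queue caps memory: a SEDA-style queue is essentially a fixed-capacity BlockingQueue whose producer blocks when the queue is full. A self-contained JDK sketch, not Camel code (the class name and numbers are my own invention):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of what a bounded, SEDA-style queue buys you: the producer
// (your file reader) blocks once 'size' rows are in flight, so at most
// 'size' row objects are in memory regardless of file length.
public class BoundedQueueDemo {

    static int process(int rows, int size, int threads) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(size);
        CountDownLatch remaining = new CountDownLatch(rows);
        ExecutorService consumers = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            consumers.submit(() -> {
                try {
                    while (true) {
                        String row = queue.take();
                        // here the consumer would do exactly one thing:
                        // invoke the REST endpoint with this row
                        remaining.countDown();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // shutdown signal
                }
            });
        }
        for (int i = 0; i < rows; i++) {
            queue.put("row-" + i); // put() blocks when full -> back-pressure
        }
        remaining.await();
        consumers.shutdownNow();
        return rows - (int) remaining.getCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + process(50_000, 1_000, 4) + " rows");
    }
}
```

In an actual Camel route the equivalent knobs are the SEDA endpoint's size, blockWhenFull, and concurrentConsumers options.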
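If you do keep JMS in the mix, a hedged sketch of that out-of-the-box setup in the broker's conf/activemq.xml (the element names are standard ActiveMQ; the limit and directory shown are illustrative only):

```
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- cap in-memory messages per queue; excess stays in the store -->
        <policyEntry queue=">" memoryLimit="5mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
  <!-- queue contents backed by the persistent KahaDB store -->
  <persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
  </persistenceAdapter>
</broker>
```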