Hi,

How big is your document?

You are converting that thing into a String, which is generally inadvisable for
large content (it will consume twice the file size in memory). The second
question is what your ReaderDataFormat does with the data: does it copy it
again, or does it support streaming?
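
If the data format does not stream, one alternative worth trying (a sketch only; the route id is a placeholder, and your per-row processing steps would go inside the split) is to skip the String conversion entirely and split the file stream line by line with a tokenizer:

```xml
<!-- Sketch only: splits the incoming file stream line by line, so the
     whole file is never held in memory at once -->
<route id="StreamingSplit_Route">
    <from uri="seda:processAndStoreInQueue" />
    <split streaming="true">
        <tokenize token="\n" />
        <!-- each exchange body is now one line of the file -->
        <log message="Row: ${body}" />
    </split>
</route>
```

With streaming="true" the splitter iterates the underlying stream instead of first building a full list of parts.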

Best regards
Stephan

-----Original Message-----
From: Michele [mailto:michele.mazzi...@finconsgroup.com] 
Sent: Mittwoch, 13. April 2016 11:06
To: users@camel.apache.org
Subject: Re: Best Strategy to process a large number of rows in File

Hi,

I'm here again because I haven't resolved my problem yet.

After several checks, I noticed this:

memory-usage.png <http://camel.465427.n5.nabble.com/file/n5780965/memory-usage.png>

Why does the thread related to seda://processAndStoreInQueue consume so much
memory? How can I optimize memory usage?

This is my route to process a file with a large number of rows (in this case a
CSV with 40,000 rows):

<route id="FileRetriever_Route">
    <from uri="{{uri.inbound}}?scheduler=quartz2&amp;scheduler.cron={{poll.consumer.scheduler}}&amp;scheduler.triggerId=FileRetriever&amp;scheduler.triggerGroup=IF_CBIKIT{{uri.inbound.options}}" />
    <setHeader headerName="ImportDateTime">
        <simple>${date:now:yyyyMMdd-HHmmss}</simple>
    </setHeader>
    <setHeader headerName="MsgCorrelationId">
        <simple>CBIKIT_INBOUND_${in.header.ImportDateTime}</simple>
    </setHeader>
    <setHeader headerName="breadcrumbId">
        <simple>Import-${in.header.CamelFileName}-${in.header.ImportDateTime}-${in.header.breadcrumbId}</simple>
    </setHeader>
    <log message="START - FileRetriever_Route - Found file ${in.header.CamelFileName}" />

    <to uri="seda:processAndStoreInQueue" />
    <log message="END - FileRetriever_Route" />
</route>

<route id="ProcessAndStoreInQueue_Route">
    <from uri="seda:processAndStoreInQueue" />

    <convertBodyTo type="java.lang.String" charset="UTF-8" />
    <unmarshal ref="ReaderDataFormat" />
    <log message="Content File size received ${body.size}" />

    <split streaming="true" parallelProcessing="false">
        <simple>${body}</simple>

        <choice>
            <when>
                <simple></simple>
                <setHeader headerName="CamelSplitIndex">
                    <simple>${in.header.CamelSplitIndex}</simple>
                </setHeader>
                <process ref="BodyEnricherProcessor" />
                <to uri="dozer:transform?mappingFile=file:{{crt2.apps.home}}{{dozer.mapping.path}}&amp;targetModel=java.util.LinkedHashMap" />
                <marshal ref="Gson" />
                <log message="Message transformed ${in.header.CamelSplitIndex} - ${body}" />
                <to uri="activemq:queue:IF_CBIKIT" />
            </when>
            <otherwise>
                <log message="Message discarded ${in.header.CamelSplitIndex} - ${body}" />
            </otherwise>
        </choice>
    </split>
</route>
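
One observation on the route above: convertBodyTo java.lang.String loads the entire file into memory before unmarshalling, so the split streams over an already fully materialized body. A hedged alternative, assuming the rows are plain CSV that Camel's camel-csv data format can parse (the LazyCsv id and the log step are placeholders for illustration):

```xml
<!-- Sketch: lazyLoad=true makes unmarshal return a row iterator
     instead of loading every row into a list at once -->
<dataFormats>
    <csv id="LazyCsv" lazyLoad="true" />
</dataFormats>

<route id="ProcessAndStoreLazily_Route">
    <from uri="seda:processAndStoreInQueue" />
    <!-- no convertBodyTo String: the file is read as a stream -->
    <unmarshal ref="LazyCsv" />
    <split streaming="true" parallelProcessing="false">
        <simple>${body}</simple>
        <!-- per-row processing (enricher, dozer, marshal, queue) goes here -->
        <log message="Row ${in.header.CamelSplitIndex}: ${body}" />
    </split>
</route>
```

This way the splitter pulls one row at a time from the iterator, so peak memory stays roughly one row rather than 40,000.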

Now the OutOfMemoryError is generated by:
    Heap dumped on OutOfMemoryError exception
    Thread causing OutOfMemoryError exception: Camel
(IF_CBIKIT-Inbound-Context) thread #18 - seda://processAndStoreInQueue

Please help me!

Thanks a lot in advance

Best Regards 

Michele



