Hi,

I'm running Camel 2.7.2 with Spring 3.0 inside the Virgo (Equinox) OSGi
container, on both OS X and Linux. A file endpoint watches a directory for new
files and sends the content of each file to a bean method defined in my Spring
context. The application runs for a while but eventually exhausts its heap
space. I profiled it until it ran out of memory and examined the heap dump.
The most numerous class in the heap is HashMap$Entry, and most of those
entries appear to hold String objects representing the paths of files that
Camel has already picked up. It looks like this ever-growing HashMap is what
is exhausting the heap.

Here is my Spring configuration for Camel:

<bean name="publisher" class="gov.noaa.nws.iris.textingest.Publisher">
    <property name="rabbitTemplate" ref="rabbitTemplate" />
</bean>

<camel:camelContext id="textIngestContext" autoStartup="true">
    <camel:endpoint id="fileinput"
        uri="file://${watchDirectory}?readLock=fileLock&amp;delete=true" />
    <camel:route autoStartup="true" startupOrder="2">
        <camel:from ref="fileinput" />
        <camel:setHeader headerName="filename">
            <camel:simple>${file:onlyname.noext}</camel:simple>
        </camel:setHeader>
        <camel:to uri="bean:publisher?method=publishFile" />
    </camel:route>
</camel:camelContext>
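
If the growing map turns out to be the file consumer's idempotent cache, would
swapping in an explicitly bounded repository be the right fix? This is what I
had in mind (untested; the idempotent/idempotentRepository options, the
memoryIdempotentRepository factory method, and the 1000-entry size are just my
reading of the file component docs):

<!-- Untested sketch: bound the duplicate-detection cache to 1000 entries (LRU) -->
<bean id="fileRepo"
      class="org.apache.camel.processor.idempotent.MemoryIdempotentRepository"
      factory-method="memoryIdempotentRepository">
    <constructor-arg value="1000" />
</bean>

<camel:endpoint id="fileinput"
    uri="file://${watchDirectory}?readLock=fileLock&amp;delete=true&amp;idempotent=true&amp;idempotentRepository=#fileRepo" />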

You can download a gzipped tarball of the heap dump from
http://venus1.wrh.noaa.gov/scratch/sutula/heap_dump.hprof.tgz.

I've run a similar Camel endpoint in the past, but with a much lower rate of
incoming files, and I don't recall any memory problems then. That makes me
suspect the issue is related to the rate at which the endpoint picks up data.
Is the HashMap that records previously seen files purged of entries older than
some number of seconds? If so, a high data rate would keep the map larger at
any given moment.
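
If it is rate-related, would throttling the consumer at least slow the growth?
Again untested, and the delay/maxMessagesPerPoll values below are arbitrary:

<!-- Untested sketch: poll every 5 seconds and take at most 100 files per poll -->
<camel:endpoint id="fileinput"
    uri="file://${watchDirectory}?readLock=fileLock&amp;delete=true&amp;delay=5000&amp;maxMessagesPerPoll=100" />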

Any help on this would be greatly appreciated.

Thanks,
Aaron