I've now created a small Maven example project that demonstrates the issue:
http://old.nabble.com/file/p27635967/camel-mina-outofmemory-example.zip

You should run the test in "MinaProducerRetainedInMemoryExample". To speed
up the process I use the following VM arguments:
-XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=25m -Xmx25m
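
For reference, the send loop in the test does roughly the following (a
simplified sketch; the real project is wired up with Spring, so the plain
DefaultCamelContext below is illustrative):

    import org.apache.camel.CamelContext;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.impl.DefaultCamelContext;

    public class SendLoop {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.start();

            // one shared ProducerTemplate, as in the failing application
            ProducerTemplate producer = context.createProducerTemplate();
            for (int i = 0; i < 1000000; i++) {
                // sync=false: fire-and-forget over TCP
                producer.sendBody(
                        "mina:tcp://localhost:6200?sync=false&textline=false",
                        "message-" + i);
            }

            producer.stop();
            context.stop();
        }
    }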

Here is a screenshot from VisualVM where you can see the aggregated memory
consumption and, eventually, the high GC activity as memory runs out:
http://old.nabble.com/file/p27635967/mina_camel_leak_visualvm.png 

Here's the screenshot from Eclipse MAT:
http://old.nabble.com/file/p27635967/mat_problem_suspects.png 

If you like, I could also provide the heap dump. If there's anything else
I can do to assist, just let me know.

/Johan


Johan Haleby wrote:
> 
> I'll try to find time for this tomorrow.
> 
> /Johan
> 
> 
> Claus Ibsen-2 wrote:
>> 
>> On Wed, Feb 17, 2010 at 4:59 PM, Johan Haleby <[email protected]>
>> wrote:
>>>
>>> I don't suppose it matters much but there are actually 5 different Mina
>>> endpoints involved (not 4 as I mentioned earlier). These are:
>>> mina:tcp://localhost:6200?sync=false&textline=false
>>> mina:tcp://localhost:6201?sync=false&textline=false
>>> mina:tcp://localhost:6202?sync=false&textline=false
>>> mina:tcp://localhost:6203?sync=false&textline=false
>>> mina:tcp://localhost:6204?sync=false&textline=false
>>>
>>> Messages are always sent to one of these addresses.
>>>
>>> I've also tried calling producer.stop() after each producer.sendBody(..),
>>> but it doesn't help at all: I still get the OutOfMemoryError, and the
>>> same number of MinaProducers is retained in memory.
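>>>
>>> In code, the pattern I tried looks like this (a simplified sketch;
>>> "producer" here is the shared ProducerTemplate):
>>>
>>>     producer.sendBody(
>>>         "mina:tcp://localhost:6200?sync=false&textline=false", body);
>>>     producer.stop();  // stopping after every send releases nothing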
>>>
>> 
>> I wonder what keeps it in memory then. Could you create a small
>> project and attach it as a ZIP to a JIRA ticket?
>> 
>> 
>> 
>> 
>>> /Johan
>>>
>>>
>>> Claus Ibsen-2 wrote:
>>>>
>>>> On Wed, Feb 17, 2010 at 4:39 PM, Johan Haleby <[email protected]>
>>>> wrote:
>>>>>
>>>>> In the test I'm sending messages between 4 different endpoints but
>>>>> they're treated as dynamic I suppose. What I mean is that the message
>>>>> body of each incoming message contains the reply address(es) (which
>>>>> could be completely dynamic). Thus I use a recipientList to distribute
>>>>> a reply message to each reply address stated in the message body.
>>>>>
>>>>
>>>> I am fairly sure that if you close the producer template after use you
>>>> should not see this, as it contains an internal cache. But that cache
>>>> should only hold 1000 entries, so I wonder why it grows to 20000+.
>>>>
>>>> Well, could you post a few samples of the endpoints you send to in the
>>>> recipient list? If they are all 100% different then I would assume
>>>> Camel creates a new producer per message, as each would be a cache miss
>>>> in the producer template.
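>>>>
>>>> For example, a route like the following (the header name is just an
>>>> example) would yield a different URI per message whenever the reply
>>>> addresses differ, and thus a cache miss every time:
>>>>
>>>>     from("mina:tcp://localhost:6200?sync=false&textline=false")
>>>>         // the reply addresses would be read from the incoming message
>>>>         .recipientList(header("replyAddresses"));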
>>>>
>>>>
>>>>
>>>>> /Johan
>>>>>
>>>>>
>>>>> Claus Ibsen-2 wrote:
>>>>>>
>>>>>> Hi
>>>>>>
>>>>>> On Wed, Feb 17, 2010 at 4:17 PM, Johan Haleby
>>>>>> <[email protected]>
>>>>>> wrote:
>>>>>>>
>>>>>>> The debug message ("Closing session when complete at address") is
>>>>>>> printed all the time, so it should close the session. I've
>>>>>>> investigated a bit further, and the problem seems to be that
>>>>>>> DefaultProducerTemplate holds a SpringCamelContext which in turn
>>>>>>> holds an ArrayList which seems to contain all MinaProducers that
>>>>>>> have ever been created.
>>>>>>>
>>>>>>> I've uploaded a screenshot of the heap dump:
>>>>>>> http://old.nabble.com/file/p27625230/mina_camel_retaining_resources.png
>>>>>>>
>>>>>>
>>>>>> Are you using a lot of dynamic endpoints that are all different?
>>>>>>
>>>>>>
>>>>>>> /Johan
>>>>>>>
>>>>>>>
>>>>>>> Claus Ibsen-2 wrote:
>>>>>>>>
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Could you google that exception and see what other Mina users have
>>>>>>>> encountered?
>>>>>>>>
>>>>>>>> And do you see this DEBUG logging when you send to that remote
>>>>>>>> endpoint?
>>>>>>>>
>>>>>>>>             if (LOG.isDebugEnabled()) {
>>>>>>>>                 LOG.debug("Closing session when complete at address: "
>>>>>>>>                     + endpoint.getAddress());
>>>>>>>>             }
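>>>>>>>>
>>>>>>>> You can turn that logging on with, for example, this log4j.properties
>>>>>>>> entry (adjust to whatever logging setup you use):
>>>>>>>>
>>>>>>>>     log4j.logger.org.apache.camel.component.mina=DEBUG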
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Feb 17, 2010 at 3:27 PM, Johan Haleby
>>>>>>>> <[email protected]>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I'm running into problems when using Camel (2.1 and 2.2) with Mina
>>>>>>>>> endpoints configured to disconnect the session after each received
>>>>>>>>> message. When lots of messages are sent over a longer period of time
>>>>>>>>> I eventually run out of memory and the system either throws
>>>>>>>>> OutOfMemoryError or spends long intervals in GC. When analyzing the
>>>>>>>>> heap dump in Eclipse MAT it finds this leak suspect:
>>>>>>>>>
>>>>>>>>> 27,840 instances of
>>>>>>>>> "org.apache.mina.transport.socket.nio.SocketSessionImpl", loaded by
>>>>>>>>> "sun.misc.Launcher$AppClassLoader @ 0xad65d850" occupy 20,662,464
>>>>>>>>> (20.42%) bytes
>>>>>>>>>
>>>>>>>>> The endpoint is configured like
>>>>>>>>> mina:tcp://localhost:6200?sync=false&textline=false and the message
>>>>>>>>> is sent to the endpoint using a recipient list
>>>>>>>>> (http://camel.apache.org/recipient-list.html).
>>>>>>>>>
>>>>>>>>> What could be the cause of this? E.g., do I need to stop the
>>>>>>>>> producer template after each message is sent? Right now the
>>>>>>>>> ProducerTemplate is a singleton used concurrently by multiple
>>>>>>>>> threads.
>>>>>>>>>
>>>>>>>>> /Johan
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Claus Ibsen
>>>>>>>> Apache Camel Committer
>>>>>>>>
>>>>>>>> Author of Camel in Action: http://www.manning.com/ibsen/
>>>>>>>> Open Source Integration: http://fusesource.com
>>>>>>>> Blog: http://davsclaus.blogspot.com/
>>>>>>>> Twitter: http://twitter.com/davsclaus
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Claus Ibsen
>>>>>> Apache Camel Committer
>>>>>>
>>>>>> Author of Camel in Action: http://www.manning.com/ibsen/
>>>>>> Open Source Integration: http://fusesource.com
>>>>>> Blog: http://davsclaus.blogspot.com/
>>>>>> Twitter: http://twitter.com/davsclaus
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Claus Ibsen
>>>> Apache Camel Committer
>>>>
>>>> Author of Camel in Action: http://www.manning.com/ibsen/
>>>> Open Source Integration: http://fusesource.com
>>>> Blog: http://davsclaus.blogspot.com/
>>>> Twitter: http://twitter.com/davsclaus
>>>>
>>>>
>>>
>>>
>>>
>> 
>> 
>> 
>> -- 
>> Claus Ibsen
>> Apache Camel Committer
>> 
>> Author of Camel in Action: http://www.manning.com/ibsen/
>> Open Source Integration: http://fusesource.com
>> Blog: http://davsclaus.blogspot.com/
>> Twitter: http://twitter.com/davsclaus
>> 
>> 
> 
> 

