Memory 'Leak' when causing auto-expandable ByteBuffer to expand and change 
buffer-pool (stack).
-----------------------------------------------------------------------------------------------

         Key: DIRMINA-62
         URL: http://issues.apache.org/jira/browse/DIRMINA-62
     Project: Directory MINA
        Type: Bug
    Versions: 0.7.2    
 Environment: Non-specific.
    Reporter: Mark Atwell
 Assigned to: Trustin Lee 


We have been using the excellent MINA library - BTW how do you pronounce this: 
Minner? or Minor? or...?

Anyway, we had an apparent memory leak when using the MINA code with 
auto-expandable ByteBuffers.

I've tracked it down to the allocate/de-allocate algorithm and buffering.

The problem is that we originally requested a small initial buffer and then 
putXXX() tons of things into it, causing it to grow. However, when the buffer 
is released (implicitly by calling ...write), the now-large buffer gets 
released to a different pool (stack). Since these pools are unbounded, the 
large-buffer pool just accumulates the big buffers - which practically never 
get reused.
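To make the behaviour concrete, here is a minimal, self-contained sketch of the 
size-classed pooling pattern described above. This is illustrative only - the 
class and method names are my own, not MINA's actual pool code - but it shows 
how keying unbounded stacks by a buffer's *current* capacity lets grown buffers 
pile up in a size class that is rarely drawn from:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not MINA's actual code) of a size-classed buffer pool.
// A buffer that grew while in use is released into a *larger* size class than
// the one it was allocated from, and the stacks are unbounded.
class SizeClassPool {
    private final Map<Integer, Deque<ByteBuffer>> stacks = new HashMap<>();

    ByteBuffer allocate(int capacity) {
        Deque<ByteBuffer> stack = stacks.get(capacity);
        ByteBuffer pooled = (stack == null) ? null : stack.poll();
        return (pooled != null) ? pooled : ByteBuffer.allocate(capacity);
    }

    void release(ByteBuffer buf) {
        // Keyed by the buffer's current capacity: a buffer allocated at 16
        // bytes but expanded to 64K lands in the 64K stack, where it sits
        // forever if callers keep requesting small initial buffers.
        stacks.computeIfAbsent(buf.capacity(), k -> new ArrayDeque<>()).push(buf);
    }

    int pooledBytes() {
        return stacks.values().stream()
                .flatMap(Deque::stream)
                .mapToInt(ByteBuffer::capacity)
                .sum();
    }
}
```

With this structure, releasing one expanded 64K buffer and then allocating a 
fresh 16-byte buffer leaves the 64K permanently pooled - which is the 
"leak" we observed.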

I originally thought that I could just release the underlying ByteBuffer when 
the pool reaches some maximum, but no joy. It looks like I may need to rely on 
garbage collection kicking in, but this is far from effective (for JavaSoft's 
lame 'solution', see: 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4879883 ). Why can't the NIO 
class have a deallocate/release call?! Grrrr! :o(

I believe a better/more elegant solution may be to modify 
ByteBuffer.ensureCapacity() to also use the stacks/pools rather than allocating 
more native ByteBuffers... and I guess this would be faster too? I've tested 
this and it seems to work fine:

The change is in ByteBuffer.ensureCapacity(). Change:

    java.nio.ByteBuffer newBuf = isDirect()
            ? java.nio.ByteBuffer.allocateDirect(newCapacity)
            : java.nio.ByteBuffer.allocate(newCapacity);

To:

    java.nio.ByteBuffer newBuf = allocate0(newCapacity, isDirect());
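For anyone who wants to see the whole idea in one place, here is a hedged, 
self-contained sketch of what the proposed change amounts to. allocate0() / 
release0() and the pool structure below are illustrative stand-ins for MINA's 
internals, not the actual 0.7.x code - the point is only that expansion draws 
its replacement buffer from the same pool the release path feeds, and recycles 
the outgrown one:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed fix: ensureCapacity() expands via the
// pool (allocate0) instead of java.nio.ByteBuffer.allocate*(), and returns
// the outgrown buffer to its size class (release0).
class PooledExpandableBuffer {
    private static final Map<Integer, Deque<ByteBuffer>> POOL = new HashMap<>();

    private ByteBuffer buf;

    PooledExpandableBuffer(int initialCapacity) {
        buf = allocate0(initialCapacity, false);
    }

    // Pooled allocator: reuse a stacked buffer of this capacity if one exists.
    private static ByteBuffer allocate0(int capacity, boolean direct) {
        Deque<ByteBuffer> stack = POOL.get(capacity);
        ByteBuffer pooled = (stack == null) ? null : stack.poll();
        if (pooled != null) {
            pooled.clear();
            return pooled;
        }
        return direct ? ByteBuffer.allocateDirect(capacity)
                      : ByteBuffer.allocate(capacity);
    }

    private static void release0(ByteBuffer b) {
        POOL.computeIfAbsent(b.capacity(), k -> new ArrayDeque<>()).push(b);
    }

    // The proposed change: expand through the pool and recycle the old buffer.
    void ensureCapacity(int requested) {
        if (requested <= buf.capacity()) {
            return;
        }
        int newCapacity = buf.capacity();
        while (newCapacity < requested) {
            newCapacity <<= 1;                 // double up to the next size class
        }
        ByteBuffer newBuf = allocate0(newCapacity, buf.isDirect());
        buf.flip();
        newBuf.put(buf);                       // carry over the written data
        release0(buf);                         // outgrown buffer returns to its stack
        buf = newBuf;
    }

    int capacity() {
        return buf.capacity();
    }
}
```

The win is that the small buffer goes straight back into circulation instead of 
the allocation path always minting a fresh native buffer while the old one 
drifts into a stack that never empties.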

Obviously one of the things one can do in the interim is work out (or 
approximate/over-estimate) the maximum buffer size, but to do this with any 
degree of accuracy we would need to encode our data first, which rather 
defeats the purpose/benefit of the auto-expanding ByteBuffer.


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira