Hi all,

It seems I've been hit by a fairly serious memory leak that appears
to be in MINA's heap byte buffer system. Running a MINA-based server
with "ByteBuffer.setUseDirectBuffers (false)" triggers it. I've
confirmed this with a client/server test case in which the client
sends and receives 64KB messages in a tight loop (Mac OS X 10.4.11,
JDK 1.5).
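
For reference, this flag is the only thing that changes between the
two runs below; the test harness sets it once at startup, roughly
like this (the class name and surrounding setup here are
illustrative, not my actual harness):

   import org.apache.mina.common.ByteBuffer;

   public class AllocatorToggle
   {
       public static void main (String[] args)
       {
           // Heap vs. direct allocation for MINA's ByteBuffers - the
           // only difference between the two runs described below.
           ByteBuffer.setUseDirectBuffers (false);   // heap: RSS climbs
           // ByteBuffer.setUseDirectBuffers (true); // direct: RSS stable

           // ... start the server, then drive the 64KB send/receive loop ...
       }
   }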

With ByteBuffer.setUseDirectBuffers (true), I see CPU usage between
116% and 123%, and memory usage (RSS) fluctuating rapidly between
18.27MB and 19.05MB over a 30-minute run. Total messages sent in
this mode = 1,479,415.

With ByteBuffer.setUseDirectBuffers (false) and the same test, I see
lower CPU usage (109%-110%) and memory increasing gradually, with
almost no fluctuation, e.g.:

  RSS (MB)
  18.88
  19.09
  19.23
  19.31
  19.50

Total messages sent in this mode = 1,138,246 (i.e. about 77% of the
direct-buffer throughput).

In the real server (Linux, Fedora Core 6, Java(TM) SE Runtime
Environment build 1.6.0_03-b05), RSS usually starts at 24MB and
grows to 45MB over 24 hours. Profiling memory usage with jmap in
histogram mode, I see output like:

        Instances  Size      Class
   1:   2192677    35082832  java.util.concurrent.ConcurrentLinkedQueue$Node
   2:      8285    23427664  [B
   3:     51674     9060984  [C

(This is after 2 days of operational use, at 109MB RSS; I also have
a log showing the growth over those 2 days.)

So a small number of byte arrays and a huge (and ever-increasing)
number of ConcurrentLinkedQueue$Node instances are consuming about
78MB of heap.

I'm not wrapping any of my own buffers with ByteBuffer.wrap (),
which is the only related bug I've been able to find reported on the
list. The only buffer allocation code is:

   ByteBuffer buffer = ByteBuffer.allocate (4096);
   buffer.setAutoExpand (true);

in the server's ProtocolEncoder implementation.
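
For context, that allocation sits in the encoder's encode () method,
along these lines (a simplified sketch; the class name and the
serialisation details are illustrative, not my real code):

   import org.apache.mina.common.ByteBuffer;
   import org.apache.mina.common.IoSession;
   import org.apache.mina.filter.codec.ProtocolEncoderAdapter;
   import org.apache.mina.filter.codec.ProtocolEncoderOutput;

   public class MessageEncoder extends ProtocolEncoderAdapter
   {
       public void encode (IoSession session, Object message,
                           ProtocolEncoderOutput out) throws Exception
       {
           ByteBuffer buffer = ByteBuffer.allocate (4096);
           buffer.setAutoExpand (true);

           // ... serialise 'message' into 'buffer' ...

           buffer.flip ();
           out.write (buffer);
       }
   }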

Any ideas?

Cheers,

Matthew.
