On Fri, 2011-05-20 at 05:03 +0530, Asankha C. Perera wrote:
> Hi All
>
> Using DirectByteBufferAllocator could cause non-heap memory to be
> allocated which the OS may find difficult to reclaim. On some JVMs this
> could even lead to an OOM and/or crash (seen on Debian, esp. with a
> 32-bit JDK). Using the heap variant seems to be a solution, although it
> increases the GC overhead.
>
> I was also looking at the ExpandableBuffer.expand() method, and was not
> able to understand why the size is increased as:
> int newcapacity = (this.buffer.capacity() + 1) << 1;
> The default 8192 causes the buffer to become 16386. What is the reason
> behind adding 1 here?
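The arithmetic in the quoted question can be checked directly; with the default capacity of 8192, `(capacity + 1) << 1` does produce 16386. A minimal sketch of the growth rule (the class and method names here are illustrative, not from HttpCore):

```java
// Sketch of the growth rule used in ExpandableBuffer.expand():
// the new capacity is (old capacity + 1), doubled.
public class GrowthDemo {

    static int newCapacity(int capacity) {
        return (capacity + 1) << 1;
    }

    public static void main(String[] args) {
        // With the default 8192, the buffer grows to 16386, not 16384.
        System.out.println(newCapacity(8192));
    }
}
```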
There is no particular reason, but this algorithm appears to be used by
many JRE classes, such as StringBuilder.

> I am thinking of extending the ByteBufferAllocator interface so that one
> could write an implementation that could cache and reuse buffers, e.g. a
> pre-allocated number of direct buffers and/or some heap buffers may make
> this much more optimal. Once a buffer becomes free, the same could be
> handed back to the allocator, which could then cache or discard it. Will
> this be a good idea to implement?

There appears to be a general consensus that memory pooling in Java does
not really make much sense. The overhead of maintaining a pool of objects
on the heap is believed to be greater than that of garbage-collecting
unused ones and allocating new ones when needed. Direct buffers may be
more expensive to allocate and deallocate, so pooling those may prove
worthwhile on some platforms.

Oleg

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
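A caching allocator along the lines Asankha proposes could be sketched as below. This is a hypothetical illustration, not HttpCore code: the `PooledAllocator` class, the `release()` method, and the fixed pooled buffer size are all assumptions; HttpCore's ByteBufferAllocator interface itself only defines `allocate(int)`.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical pooling allocator as proposed in the thread.
// Direct buffers of a fixed size are cached and reused; requests that
// cannot be served from the pool fall back to fresh allocation.
public class PooledAllocator {

    private final BlockingQueue<ByteBuffer> pool;
    private final int bufferSize;

    public PooledAllocator(int poolCapacity, int bufferSize) {
        this.pool = new ArrayBlockingQueue<>(poolCapacity);
        this.bufferSize = bufferSize;
    }

    public ByteBuffer allocate(int size) {
        if (size <= bufferSize) {
            ByteBuffer cached = pool.poll();
            if (cached != null) {
                cached.clear();
                return cached;
            }
        }
        // Pool empty, or the request exceeds the pooled size:
        // allocate a fresh direct buffer.
        return ByteBuffer.allocateDirect(Math.max(size, bufferSize));
    }

    public void release(ByteBuffer buffer) {
        // offer() simply discards the buffer when the pool is full,
        // matching the "cache or discard" behaviour described above.
        pool.offer(buffer);
    }
}
```

A freed buffer handed back via `release()` is returned verbatim by the next matching `allocate()` call, so steady-state traffic reuses the same direct buffers instead of pressuring the allocator.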
