[
https://issues.apache.org/jira/browse/QPID-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962671#comment-14962671
]
Rob Godfrey commented on QPID-6800:
-----------------------------------
I wonder if something like this is what you intended:
{code}
private void allocateDataBuffers(byte[] data, int offset, int len) throws IOException
{
    if (_closed)
    {
        throw new IOException("Stream is closed");
    }

    int size = (_isDirect && _maximumBufferSize > 0)
            ? Math.min(_maximumBufferSize, len) : len;
    // use a heap buffer when not in direct mode (the original snippet
    // mistakenly called allocateDirect() on both branches of the ternary)
    final QpidByteBuffer current = _isDirect
            ? QpidByteBuffer.allocateDirect(size) : QpidByteBuffer.allocate(size);
    current.put(data, offset, size);
    current.flip();
    _buffers.add(current);
    if (len > size)
    {
        allocateDataBuffers(data, offset + size, len - size);
    }
}
{code}
Personally I'd probably use a do { } while (...) loop rather than recursion, but I
don't think it makes a great deal of difference either way.
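For illustration, the loop-based alternative might look like the sketch below. It uses java.nio.ByteBuffer as a self-contained stand-in for QpidByteBuffer, and the isDirect/maximumBufferSize parameters are assumptions mirroring the fields in the recursive snippet above; it is a sketch of the do/while idea, not the broker's actual implementation.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Loop-based variant of allocateDataBuffers(): split the payload into
// chunks capped at maximumBufferSize when direct buffers are in use.
// ByteBuffer stands in for QpidByteBuffer so the sketch is runnable.
public class ChunkedBuffers
{
    static List<ByteBuffer> allocateDataBuffers(byte[] data, int offset, int len,
                                                boolean isDirect, int maximumBufferSize)
    {
        final List<ByteBuffer> buffers = new ArrayList<>();
        do
        {
            // cap each chunk only when allocating direct buffers
            int size = (isDirect && maximumBufferSize > 0)
                    ? Math.min(maximumBufferSize, len) : len;
            ByteBuffer current = isDirect
                    ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
            current.put(data, offset, size);
            current.flip();
            buffers.add(current);
            offset += size;
            len -= size;
        }
        while (len > 0);
        return buffers;
    }
}
```

With a 10-byte payload and a 4-byte cap in direct mode, this produces three buffers of 4, 4 and 2 bytes, matching what the recursive version would build.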
> Use cached direct buffers for message compression/decompression
> ---------------------------------------------------------------
>
> Key: QPID-6800
> URL: https://issues.apache.org/jira/browse/QPID-6800
> Project: Qpid
> Issue Type: Bug
> Components: Java Broker
> Reporter: Keith Wall
> Assignee: Rob Godfrey
> Fix For: qpid-java-6.0
>
>
> Currently, when the 0-8..0-91 and 0-10 protocol engines compress/decompress
> message payloads on behalf of consumers, (non-cached) direct memory is
> allocated for the modified content and then released shortly afterwards. This
> approach is risky, as the JVM may not release this memory in a timely manner
> and an OutOfMemoryError ("Direct buffer memory") may occur.
> Switch the protocol engines to use cached direct memory for
> compressing/decompressing message payloads.
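The "cached direct memory" idea the issue asks for can be sketched as a simple pool of fixed-size direct buffers. The class and field names below are illustrative assumptions; Qpid's actual QpidByteBuffer pooling is more involved than this minimal version.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal direct-buffer cache: reuse released buffers instead of
// relying on the GC to reclaim direct memory promptly.
public class DirectBufferPool
{
    private final int _bufferSize;
    private final ConcurrentLinkedQueue<ByteBuffer> _pool = new ConcurrentLinkedQueue<>();

    public DirectBufferPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    public ByteBuffer acquire()
    {
        ByteBuffer buf = _pool.poll();
        // allocate a new direct buffer only when the cache is empty
        return buf != null ? buf : ByteBuffer.allocateDirect(_bufferSize);
    }

    public void release(ByteBuffer buf)
    {
        buf.clear();      // reset position/limit for reuse
        _pool.offer(buf); // return to the cache
    }
}
```

A released buffer is handed back on the next acquire(), so steady-state compression/decompression does no new direct allocation.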
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)