[
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Francesco Nigro updated ARTEMIS-1811:
-------------------------------------
Description:
JournalStorageManager::addBytesToLargeMessage and
LargeServerMessageImpl::DecodingContext::decode rely on the direct ByteBuffer
pooling performed internally by NIO.
Those buffers are pooled only up to a certain size limit (i.e.
jdk.nio.maxCachedBufferSize, as described in
[https://bugs.openjdk.java.net/browse/JDK-8147468]); when not pooled they are
allocated and freed on every use.
This stresses the native memory allocator and can lead to poor performance and
potential OOMs, depending on the written message chunk size.
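The hidden copy described above can be reproduced with a minimal, self-contained sketch (the class name is hypothetical; this is not Artemis code): writing a heap ByteBuffer through a FileChannel makes NIO copy the bytes into a temporary direct ByteBuffer behind the scenes, which is only cached per-thread up to jdk.nio.maxCachedBufferSize.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the problematic write path: FileChannel.write with a heap
// ByteBuffer makes NIO copy the bytes into a temporary direct ByteBuffer.
// Chunks larger than -Djdk.nio.maxCachedBufferSize are not cached and the
// temporary buffer is allocated and freed on every write.
public final class HeapWriteSketch {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("chunk", ".dat");
        byte[] chunk = new byte[1 << 20]; // 1 MiB message chunk
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.wrap(chunk); // heap-backed buffer
            while (buf.hasRemaining()) {
                ch.write(buf); // each write goes through a hidden direct-buffer copy
            }
            System.out.println(buf.position()); // 1048576
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```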
The proposed solutions are:
# perform ad hoc direct ByteBuffer caching on the write path, taking advantage
of the read lock
# replace the NIO SequentialFile usage with RandomAccessFile, which provides
the right API to append a byte[] without creating additional native copies
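The second proposed solution could look roughly like the following sketch (class and method names are hypothetical, not the actual NIOSequentialFile patch): RandomAccessFile.write(byte[]) accepts the heap array directly, so no intermediate direct ByteBuffer is involved on the Java side.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of appending a large-message chunk with
// RandomAccessFile instead of FileChannel.write(ByteBuffer).
public final class RafAppendSketch {

    // Appends the chunk at the end of the file and returns the new length.
    static long append(RandomAccessFile raf, byte[] chunk) throws IOException {
        raf.seek(raf.length()); // position at end of file
        raf.write(chunk);       // writes the heap array directly
        return raf.length();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("large-msg", ".dat");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw")) {
            append(raf, new byte[]{1, 2, 3});
            long len = append(raf, new byte[]{4, 5});
            System.out.println(len); // 5
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```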
was:
JournalStorageManager::addBytesToLargeMessage relies on the direct ByteBuffer
pooling performed internally by NIO.
Those buffers are pooled only up to a certain size limit (i.e.
jdk.nio.maxCachedBufferSize, as described in
[https://bugs.openjdk.java.net/browse/JDK-8147468]); when not pooled they are
allocated and freed on every use.
This stresses the native memory allocator and can lead to poor performance and
potential OOMs, depending on the written message chunk size.
The proposed solutions are:
# perform ad hoc direct ByteBuffer caching on the write path, taking advantage
of the read lock
# replace the NIO SequentialFile usage with RandomAccessFile, which provides
the right API to append a byte[] without creating additional native copies
> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> -------------------------------------------------------------------
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
> Issue Type: Improvement
> Components: Broker
> Affects Versions: 2.5.0
> Reporter: Francesco Nigro
> Assignee: Francesco Nigro
> Priority: Major
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)