[
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Francesco Nigro updated ARTEMIS-1811:
-------------------------------------
Description:
JournalStorageManager::addBytesToLargeMessage and
LargeServerMessageImpl::DecodingContext::encode rely on the pooling of
direct ByteBuffers performed internally by NIO.
Those buffers are pooled per thread up to a certain size limit (i.e.
jdk.nio.maxCachedBufferSize, as described in
[https://bugs.openjdk.java.net/browse/JDK-8147468]); larger buffers are freed
right after the write succeeds.
If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are
always pooled regardless of their size, leading to OOM issues under a high
load of variable-sized writes, because the allocated direct memory is not
released, or is released too late.
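The effect can be observed through the platform BufferPoolMXBean, whose "direct" pool also counts the per-thread temporary buffers NIO allocates when a heap ByteBuffer is written to a FileChannel. A minimal sketch (class and method names are illustrative, not Artemis code):

```java
import java.io.IOException;
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBufferCacheSketch {

    // The "direct" pool tracks every direct ByteBuffer in the JVM, including
    // the per-thread temporary buffers NIO allocates to service writes of
    // heap ByteBuffers.
    static long directPoolBytes() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1L;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-write", ".dat");
        long before = directPoolBytes();
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // Writing a heap buffer makes NIO copy it into a temporary direct
            // ByteBuffer first; without jdk.nio.maxCachedBufferSize that copy
            // stays cached on the writing thread, whatever its size.
            ch.write(ByteBuffer.wrap(new byte[1 << 20])); // 1 MiB heap buffer
        }
        long grown = directPoolBytes() - before;
        System.out.println("direct pool grew by " + grown + " bytes");
        Files.delete(tmp);
    }
}
```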
The proposed solutions are:
# perform ad hoc direct ByteBuffer caching on the write path, relying on the
read lock
# replace the NIO SequentialFile usage with RandomAccessFile, which provides
the right API to append a byte[] without creating leaking native copies
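Solution 2 could look roughly like the sketch below; the class and helper names are hypothetical, not the actual Artemis change. RandomAccessFile hands the heap array to the native write path directly, so no direct ByteBuffer copy is created or cached:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RafAppendSketch {

    // Appends the chunk with RandomAccessFile.write(byte[], int, int): the
    // heap array goes straight through the native write path, with no
    // per-thread direct ByteBuffer allocated or cached as FileChannel.write
    // would do. Returns the new file length.
    static long appendChunk(Path file, byte[] chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.seek(raf.length()); // append semantics
            raf.write(chunk, 0, chunk.length);
            return raf.length();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("artemis-large-msg", ".dat");
        appendChunk(tmp, new byte[8192]); // a variable-sized large-message chunk
        appendChunk(tmp, new byte[100]);
        System.out.println(Files.size(tmp)); // 8292
        Files.delete(tmp);
    }
}
```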
> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> -------------------------------------------------------------------
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
> Issue Type: Improvement
> Components: Broker
> Affects Versions: 2.5.0
> Reporter: Francesco Nigro
> Assignee: Francesco Nigro
> Priority: Major
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)