[ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?focusedWorklogId=316061&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316061
 ]

ASF GitHub Bot logged work on ARTEMIS-1811:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Sep/19 05:00
            Start Date: 21/Sep/19 05:00
    Worklog Time Spent: 10m 
      Work Description: franz1981 commented on issue #2844: ARTEMIS-1811 NIO 
Seq File should use RandomAccessFile with heap buffers
URL: https://github.com/apache/activemq-artemis/pull/2844#issuecomment-533606931
 
 
   @wy96f @clebertsuconic 
   Just for completeness, this is what would change:
   
   - 
[Java_java_io_RandomAccessFile_writeBytes->writeBytes](https://github.com/frohoff/jdk8u-jdk/blob/da0da73ab82ed714dc5be94acd2f0d00fbdfe2e9/src/share/native/java/io/RandomAccessFile.c#L85)
   - 
[writeBytes->IO_Write](https://github.com/frohoff/jdk8u-jdk/blob/da0da73ab82ed714dc5be94acd2f0d00fbdfe2e9/src/share/native/java/io/io_util.c#L189), 
using a stack buffer if the java byte[] length is < `BUF_SIZE` (= 8192 bytes), 
or a freshly allocated malloc/free buffer otherwise: in both cases 
GetByteArrayRegion/SetByteArrayRegion are used to perform a copy from/to the 
provided java `byte[]` (see the sketch after this list)
   - [IO_Write === 
handleWrite](https://github.com/frohoff/jdk8u-jdk/blob/da0da73ab82ed714dc5be94acd2f0d00fbdfe2e9/src/solaris/native/java/io/io_util_md.h#L71)
   - 
[handleWrite->write](https://github.com/frohoff/jdk8u-jdk/blob/da0da73ab82ed714dc5be94acd2f0d00fbdfe2e9/src/solaris/native/java/io/io_util_md.c#L164)
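   
   A minimal sketch of the two copy paths described above (the class name, 
file name and buffer sizes are only illustrative, they are not taken from 
the PR or the broker code):
   
   ```java
   import java.io.File;
   import java.io.IOException;
   import java.io.RandomAccessFile;
   
   public class RandomAccessFileWritePaths {
   
       public static void main(String[] args) throws IOException {
           File file = File.createTempFile("raf-write-paths", ".dat");
           file.deleteOnExit();
           try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
               // Small write (well below BUF_SIZE = 8192 bytes): per the linked
               // io_util.c, the byte[] is copied into a stack-allocated native
               // buffer before the write syscall.
               byte[] small = new byte[4 * 1024];
               raf.write(small);
   
               // Large write (above BUF_SIZE): io_util.c malloc()s a native buffer
               // of the full length, fills it via GetByteArrayRegion, writes it
               // and free()s it again.
               byte[] large = new byte[100 * 1024];
               raf.write(large);
           }
       }
   }
   ```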
   
   I see that this PR has a few advantages vs 
https://github.com/apache/activemq-artemis/pull/2832:
   
   - it is simpler/less impactful on the Artemis code base
   - although copies always happen, no leaks can occur
   
   I see that for small writes/reads the perf hit is not very high, because the 
copy between the java byte[] and the stack-allocated native buffer is cheap 
compared to the cost of the write/read syscall, and it won't impact the 
scalability of the native allocator, because malloc/free isn't used.
   
   I'm worried because this change would introduce a non-transparent handling of 
large writes/reads on the JNI side that could silently kill performance due to 
`malloc/free` + the large copy: that's exactly our use case for OpenWire and 
AMQP (until we get streaming of large messages in), and sadly it would affect 
CORE too, given that CORE writes/reads in 100 KB sized chunks, which is > 
`BUF_SIZE` (i.e. 8192 bytes).
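   
   For illustration, a side-by-side sketch of a 100 KB chunk going through both 
paths (class name, file name and sizes are made up for the example; this is 
not the PR code):
   
   ```java
   import java.io.File;
   import java.io.IOException;
   import java.io.RandomAccessFile;
   import java.nio.ByteBuffer;
   import java.nio.channels.FileChannel;
   import java.nio.file.StandardOpenOption;
   
   public class LargeChunkWritePaths {
   
       // CORE writes/reads in 100 KB chunks, well above the 8192 byte BUF_SIZE.
       private static final int CHUNK_SIZE = 100 * 1024;
   
       public static void main(String[] args) throws IOException {
           byte[] chunk = new byte[CHUNK_SIZE];
           File file = File.createTempFile("large-chunk", ".dat");
           file.deleteOnExit();
   
           // RandomAccessFile (this PR): every chunk write above BUF_SIZE implies
           // malloc + GetByteArrayRegion copy + free on the JNI side.
           try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
               raf.write(chunk);
           }
   
           // FileChannel (the current NIO path): the heap ByteBuffer is copied
           // into a temporary direct ByteBuffer that NIO caches per thread, so
           // the native allocation is not repeated on every write.
           try (FileChannel channel = FileChannel.open(file.toPath(), StandardOpenOption.WRITE)) {
               channel.write(ByteBuffer.wrap(chunk));
           }
       }
   }
   ```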
   
   PLEASE DO NOT MERGE: I would like to discuss it with you first :)
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 316061)
    Time Spent: 1h 40m  (was: 1.5h)

> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> -------------------------------------------------------------------
>
>                 Key: ARTEMIS-1811
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.5.0
>            Reporter: Francesco Nigro
>            Assignee: Francesco Nigro
>            Priority: Major
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of direct 
> ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are freed 
> right after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high 
> load of variable-sized writes due to the amount of direct memory allocated and 
> released late or not at all.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies
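
A minimal sketch of the NIO behaviour described in the issue above (class name, 
file name and the write size are illustrative only); whether the temporary 
direct buffer is cached per thread or freed right away depends on whether 
jdk.nio.maxCachedBufferSize is set and on its value:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Run e.g. with: java -Djdk.nio.maxCachedBufferSize=131072 DirectBufferCacheDemo
public class DirectBufferCacheDemo {

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("direct-buffer-cache", ".dat");
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.WRITE)) {
            // The heap ByteBuffer is copied into a temporary direct ByteBuffer
            // (sun.nio.ch.Util.getTemporaryDirectBuffer) before the actual write.
            // Without jdk.nio.maxCachedBufferSize that direct buffer is cached per
            // thread regardless of its size; with the property set, buffers larger
            // than the limit are freed right after the write succeeds.
            channel.write(ByteBuffer.wrap(new byte[512 * 1024]));
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```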



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
