[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=315494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315494
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 20/Sep/19 06:59
Start Date: 20/Sep/19 06:59
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-533434059
 
 
   @clebertsuconic @michaelandrepearce Thanks to the work of @wy96f on 
replication, I've taken a further look at 
https://docs.oracle.com/javase/7/docs/api/java/io/RandomAccessFile.html#getChannel()
 and I think this issue can be solved much more easily by using 
`RandomAccessFile` where necessary (to write/read byte[] without leaks :)).
   I will soon provide a PR for it, and I can close this one, which is very 
complex and hard to maintain.
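
For illustration, a minimal sketch of the idea under discussion (hypothetical helper, not the follow-up PR): `RandomAccessFile` reads and writes heap byte[] directly, so the JDK never stages the data through the per-thread cached direct ByteBuffer it uses for FileChannel calls with heap buffers.

```java
import java.io.RandomAccessFile;

// Hedged sketch, not the actual follow-up PR: append and read byte[] chunks through
// RandomAccessFile, avoiding FileChannel.write(ByteBuffer)/read(ByteBuffer) with heap
// buffers, which is the path that makes NIO cache temporary direct ByteBuffers per thread.
public final class LargeMessageRafExample {

   public static void append(RandomAccessFile file, byte[] chunk, int length) throws Exception {
      file.seek(file.length());     // position at the end of the large-message file
      file.write(chunk, 0, length); // the heap byte[] is written without a pooled direct buffer
   }

   public static int readChunk(RandomAccessFile file, long position, byte[] chunk) throws Exception {
      file.seek(position);
      return file.read(chunk);      // bytes land straight in the heap array
   }
}
```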
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315494)
Time Spent: 7h 50m  (was: 7h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.
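
As a concrete illustration of the chunked approach described above, here is a hedged sketch (illustrative names, not the actual PR code): each chunk is staged through a pooled direct Netty ByteBuf that is released immediately after the write, so the I/O never relies on NIO's internal per-thread direct-buffer cache.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hedged sketch of the chunked-write idea (illustrative, not the Artemis code):
// copy the heap payload into a pooled *direct* Netty ByteBuf, hand its NIO view
// to the FileChannel, and release it right away, so NIO never caches its own
// temporary direct ByteBuffer for the heap source.
public final class ChunkedLargeMessageWrite {

   private static final int CHUNK_SIZE = 100 * 1024; // chunk size mentioned in the PR review below

   public static void write(FileChannel channel, byte[] payload) throws Exception {
      final ByteBuf chunk = PooledByteBufAllocator.DEFAULT.directBuffer(CHUNK_SIZE);
      try {
         int offset = 0;
         while (offset < payload.length) {
            final int toWrite = Math.min(CHUNK_SIZE, payload.length - offset);
            chunk.clear();
            chunk.writeBytes(payload, offset, toWrite);
            final ByteBuffer nio = chunk.internalNioBuffer(0, toWrite); // direct view over pooled memory
            while (nio.hasRemaining()) {
               channel.write(nio);
            }
            offset += toWrite;
         }
      } finally {
         chunk.release(); // return the direct memory to the Netty pool deterministically
      }
   }
}
```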



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=315495&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315495
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 20/Sep/19 06:59
Start Date: 20/Sep/19 06:59
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315495)
Time Spent: 8h  (was: 7h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=314935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314935
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 10:02
Start Date: 19/Sep/19 10:02
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r326091918
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   Agreed :)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314935)
Time Spent: 7h 40m  (was: 7.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=314242&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314242
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 18/Sep/19 10:48
Start Date: 18/Sep/19 10:48
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r325575480
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   > If we use netty pools, we need to read the file every time we read a 
message, right? Will this somewhat affect the perf?
   
   We need to find a proper solution, but this is a leak, and we use a heap buffer 
there too, so it means a leak through IOUtil as well...
   I see the same on the decoding side for replicated files and on the encoding side 
for non-Netty connections... basically every time we perform FileChannel ops 
with heap ByteBuffers.
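
   A minimal hedged illustration of the pattern flagged here (standalone example, not Artemis code): any FileChannel operation that is handed a heap ByteBuffer is staged by the JDK through a per-thread cached direct buffer.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative sketch of the problematic pattern (not Artemis code): writing a *heap*
// ByteBuffer through FileChannel makes the JDK copy it into a per-thread cached direct
// ByteBuffer (sun.nio.ch.Util.getTemporaryDirectBuffer), and without
// -Djdk.nio.maxCachedBufferSize that cached buffer is never trimmed.
public final class HeapBufferFileChannelWrite {
   public static void main(String[] args) throws Exception {
      try (FileChannel channel = FileChannel.open(Paths.get("large-message.tmp"),
            StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
         ByteBuffer heap = ByteBuffer.wrap(new byte[4 * 1024 * 1024]); // heap buffer, not direct
         channel.write(heap); // the JDK silently stages this through a cached direct buffer
      }
   }
}
```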
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314242)
Time Spent: 7.5h  (was: 7h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=314214&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314214
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 18/Sep/19 09:32
Start Date: 18/Sep/19 09:32
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r325575480
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   > If we use netty pools, we need to read the file every time we read a 
message, right? Will this somewhat affect the perf?
   We need to find a proper solution, but this is a leak, and we use a heap buffer 
there too, so it means a leak through IOUtil as well...
   I see the same on the decoding side for replicated files and on the encoding side 
for non-Netty connections... basically every time we perform FileChannel ops 
with heap ByteBuffers.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314214)
Time Spent: 7h 20m  (was: 7h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=314210&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314210
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 18/Sep/19 09:25
Start Date: 18/Sep/19 09:25
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r325571766
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   Do you mean readFileBuffer in Page? In most cases we read messages sequentially 
from the page, and readFileBuffer might contain several messages (depending 
on message size) in the buffer. This way we can just read the following messages from 
the buffer (no need to bother the file system). If we use Netty pools, we need to read 
the file every time we read a message, right? Will this somewhat affect the 
perf?
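
   For reference, a hedged sketch of the batching behaviour described above (hypothetical helper, not the real Page code): one file read pulls a run of length-prefixed messages into a buffer, and the following message reads are served from memory.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (assumed length-prefixed message layout, not the Page format):
// a single file-system read fills one buffer, then several consecutive messages are
// decoded from that buffer without touching the file again.
final class BatchedPageRead {

   static List<byte[]> readBatch(FileChannel channel, int batchBytes) throws Exception {
      ByteBuffer readFileBuffer = ByteBuffer.allocateDirect(batchBytes);
      channel.read(readFileBuffer);            // one file-system call...
      readFileBuffer.flip();
      List<byte[]> messages = new ArrayList<>();
      while (readFileBuffer.remaining() >= Integer.BYTES) {
         int size = readFileBuffer.getInt();   // ...then several messages decoded from memory
         if (readFileBuffer.remaining() < size) {
            break;                              // a partial trailing message is ignored in this simplified sketch
         }
         byte[] message = new byte[size];
         readFileBuffer.get(message);
         messages.add(message);
      }
      return messages;
   }
}
```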
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314210)
Time Spent: 7h 10m  (was: 7h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=313657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313657
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 12:13
Start Date: 17/Sep/19 12:13
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r325131535
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   @wy96f I still think that `Page` has the same issue and should use Netty 
pools instead of retaining direct memory as state.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313657)
Time Spent: 7h  (was: 6h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=313001&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313001
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 14:03
Start Date: 16/Sep/19 14:03
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r324693072
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   @wy96f 
   It was the second point on my list :) 
   Yes, it can be done, but it deserves a lot of attention and probably some changes 
to the SessionCallback API...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313001)
Time Spent: 6h 50m  (was: 6h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312932
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 12:54
Start Date: 16/Sep/19 12:54
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r324656398
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   @franz1981 Given this is a hot path and in most 
cases (LargeMessageDeliverer::deliver, 
ClientProducerImpl::largeMessageSendServer, CoreMessage::getLargeMessageBuffer, 
etc.) "bufferRead" is a heap buffer, can we construct "bufferRead" using 
PooledByteBufAllocator before the call, to avoid copying the buffer?
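
   A hedged sketch of this suggestion (illustrative helper, not the PR code): allocate the read buffer from Netty's pool as direct memory before the file read, so the bytes land in pooled direct memory and no heap-to-direct copy is needed.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hedged sketch only: read a chunk straight into a pooled direct Netty ByteBuf instead of
// a heap ByteBuffer, so no intermediate copy (and no NIO temporary direct buffer) is needed.
public final class PooledReadBuffer {

   public static ByteBuf readChunk(FileChannel channel, int chunkSize) throws Exception {
      final ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(chunkSize);
      boolean success = false;
      try {
         final ByteBuffer nio = pooled.internalNioBuffer(0, chunkSize); // direct view over the pooled memory
         final int read = channel.read(nio);
         pooled.writerIndex(Math.max(read, 0)); // expose only the bytes actually read
         success = true;
         return pooled; // the caller must release() the ByteBuf once the chunk has been consumed
      } finally {
         if (!success) {
            pooled.release();
         }
      }
   }
}
```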
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312932)
Time Spent: 6h 40m  (was: 6.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312930
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 12:54
Start Date: 16/Sep/19 12:54
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r324656398
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   Given this is a hot path and in most cases (LargeMessageDeliverer::deliver, 
ClientProducerImpl::largeMessageSendServer, CoreMessage::getLargeMessageBuffer, 
etc.) "bufferRead" is a heap buffer, can we construct "bufferRead" using 
PooledByteBufAllocator before the call, to avoid copying the buffer?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312930)
Time Spent: 6.5h  (was: 6h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312816
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 08:46
Start Date: 16/Sep/19 08:46
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-531690229
 
 
   @wy96f fair enough :)
   Still waiting for some results from the integration tests on the CI, and then it will be 
ready to be merged, although I would prefer to improve the code quality of this PR; there 
are some changes due to JDBC that make it uglier than I wished.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312816)
Time Spent: 6h 20m  (was: 6h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312813&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312813
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 08:39
Start Date: 16/Sep/19 08:39
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-531688339
 
 
   As you said, the size of the tiny/small/medium caches is quite small. Even if they are not 
freed in time, the impact is very small compared to IOUtil, which pools native 
ByteBuffers (default TEMP_BUF_POOL_SIZE=8) for each thread :)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312813)
Time Spent: 6h 10m  (was: 6h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312790
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 06:32
Start Date: 16/Sep/19 06:32
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-531656312
 
 
   @wy96f 
   > So the cache will be freed and used by other threads when thread dies?
   
   Yes, it is, but my concern is that it needs a finalization run before 
getting there, i.e. it is not as deterministic as I would wish :)
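
   One hedged way to make that release deterministic (an idea only, not Artemis code): wrap the executor's ThreadFactory so every pooled thread clears its Netty FastThreadLocal state when its root task finishes, which triggers the same free() path without waiting for a finalizer.

```java
import java.util.concurrent.ThreadFactory;
import io.netty.util.concurrent.FastThreadLocal;

// Hedged sketch (hypothetical wrapper, not Artemis code): clear all Netty FastThreadLocal
// state when a pooled thread finishes its root task, so any PoolThreadCache it holds is
// freed deterministically instead of waiting for a finalization run.
public final class NettyCleanupThreadFactory implements ThreadFactory {

   private final ThreadFactory delegate;

   public NettyCleanupThreadFactory(ThreadFactory delegate) {
      this.delegate = delegate;
   }

   @Override
   public Thread newThread(Runnable task) {
      return delegate.newThread(() -> {
         try {
            task.run();
         } finally {
            FastThreadLocal.removeAll(); // triggers onRemoval(...) callbacks, releasing thread-local buffer caches
         }
      });
   }
}
```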
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312790)
Time Spent: 6h  (was: 5h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=312785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312785
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 16/Sep/19 06:21
Start Date: 16/Sep/19 06:21
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-531654209
 
 
   @franz1981 
   
   > I have to add more detail and I was wrong: the tiny/small/medium thread 
local caches are actually leaking memory regions (with default 
io.netty.allocator.useCacheForAllThread) that need the holding thread to die in 
order to make them available to other threads again
   
   I assumed memory released into the tiny/small/medium cache would be leaked if 
the holding thread terminates, until I looked at the code in PoolThreadCache:
   ```
   /// TODO: In the future when we move to Java9+ we should use java.lang.ref.Cleaner.
   @Override
   protected void finalize() throws Throwable {
       try {
           super.finalize();
       } finally {
           free();
       }
   }

   /**
    * Should be called if the Thread that uses this cache is about to exist to release resources out of the cache
    */
   void free() {
       // As free() may be called either by the finalizer or by FastThreadLocal.onRemoval(...) we need to ensure
       // we only call this one time.
       if (freed.compareAndSet(false, true)) {
           int numFreed = free(tinySubPageDirectCaches) +
                   free(smallSubPageDirectCaches) +
                   free(normalDirectCaches) +
                   free(tinySubPageHeapCaches) +
                   free(smallSubPageHeapCaches) +
                   free(normalHeapCaches);

           if (numFreed > 0 && logger.isDebugEnabled()) {
               logger.debug("Freed {} thread-local buffer(s) from thread: {}", numFreed,
                       Thread.currentThread().getName());
           }

           if (directArena != null) {
               directArena.numThreadCaches.getAndDecrement();
           }

           if (heapArena != null) {
               heapArena.numThreadCaches.getAndDecrement();
           }
       }
   }
   ```
   So the cache will be freed, and its memory reusable by other threads, when the thread dies? If 
so, it seems there are no leak problems with this PR, and no need for the improvements?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312785)
Time Spent: 5h 50m  (was: 5h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310470&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310470
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:29
Start Date: 11/Sep/19 10:29
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local 
`ByteBuf` pools, and I can confirm that 
`-Dio.netty.allocator.useCacheForAllThreads=false` should save non-Netty 
threads from creating such thread-local regions for tiny/small/medium sized 
buffers.
   Regarding 
   > but they just contain wrappers to direct memory that cannot be released 
(it is part of the pool and they have no cleaner). The impact should be way less 
than IOUtil, which pools native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I have to add more detail, and I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to make them available to other threads again, but their size is quite 
small by default and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen until we allocate something smaller (which could happen, but is rare).
   
   IMO we should consider 2 separate improvements for a separate PR:
   - allow threads to stay alive forever and in a fixed number, but just idle 
(and maybe using the FJ thread pool executor instead of the AMQ thread pool 
executor in that case): in that case we can consider such caches OK to be 
used by *all* threads (including non-Netty ones), because the leak just helps 
cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default (a sketch of this idea follows below)
   
   wdyt?
   Thanks for the comment, good catch!
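
   A hedged sketch of the second improvement above (assuming the Netty 4.1 PooledByteBufAllocator constructor; not an agreed Artemis change): a dedicated pooled allocator whose thread-local caches are disabled, so non-Netty threads such as the AMQ executor threads never build up per-thread cached regions.

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

// Hedged sketch (assumes the Netty 4.1 constructor and default-size helpers): a pooled
// allocator whose tiny/small/normal thread-local caches are disabled for threads that are
// not Netty FastThreadLocalThreads, e.g. the AMQ executor threads discussed above.
public final class NoThreadCacheAllocators {

   public static final ByteBufAllocator NO_THREAD_CACHE = new PooledByteBufAllocator(
      true,                                              // preferDirect
      PooledByteBufAllocator.defaultNumHeapArena(),
      PooledByteBufAllocator.defaultNumDirectArena(),
      PooledByteBufAllocator.defaultPageSize(),
      PooledByteBufAllocator.defaultMaxOrder(),
      PooledByteBufAllocator.defaultTinyCacheSize(),
      PooledByteBufAllocator.defaultSmallCacheSize(),
      PooledByteBufAllocator.defaultNormalCacheSize(),
      false);                                            // useCacheForAllThreads = false

   private NoThreadCacheAllocators() {
   }
}
```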
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310470)
Time Spent: 5.5h  (was: 5h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310472&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310472
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:29
Start Date: 11/Sep/19 10:29
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local 
`ByteBuf` pools, and I can confirm that 
`-Dio.netty.allocator.useCacheForAllThreads=false` should save non-Netty 
threads from creating such thread-local regions for tiny/small/medium sized 
buffers.
   Regarding 
   > but they just contain wrappers to direct memory that cannot be released 
(it is part of the pool and they have no cleaner). The impact should be way less 
than IOUtil, which pools native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I have to add more detail, and I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to make them available to other threads again, but their size is quite 
small by default and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen until we allocate something smaller (which could happen, but is rare).
   
   IMO we should consider 2 separate improvements for a separate PR:
   - allow threads to stay alive forever and in a fixed number, but just idle 
(and maybe using the FJ thread pool executor instead of the AMQ thread pool 
executor in that case): with this we can consider such caches OK to be 
used by *all* threads (including non-Netty ones), because the leak just helps 
cope with future load and they won't be deallocated just due to inactivity
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310472)
Time Spent: 5h 40m  (was: 5.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310469&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310469
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:28
Start Date: 11/Sep/19 10:28
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local 
`ByteBuf` pools, and I can confirm that 
`-Dio.netty.allocator.useCacheForAllThreads=false` should save non-Netty 
threads from creating such thread-local regions for tiny/small/medium sized 
buffers.
   Regarding 
   > but they just contain wrappers to direct memory that cannot be released 
(it is part of the pool and they have no cleaner). The impact should be way less 
than IOUtil, which pools native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I have to add more detail, and I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to make them available to other threads again, but their size is quite 
small by default and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen until we allocate something smaller (which could happen, but is rare).
   
   IMO we should consider 2 separate improvements for a separate PR:
   - allow threads to stay alive forever, but just idle (and maybe using the FJ 
thread pool executor instead of the AMQ thread pool executor in that case): in 
that case we can consider such caches OK to be used by *all* threads 
(including non-Netty ones), because the leak just helps cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310469)
Time Spent: 5h 20m  (was: 5h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310467
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:25
Start Date: 11/Sep/19 10:25
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local pools, 
and I can confirm that `-Dio.netty.allocator.useCacheForAllThreads=false` 
should save non-Netty threads from creating such thread-local regions for 
tiny/small/medium sized buffers.
   Regarding 
   > but they just contain wrappers to direct memory that cannot be released 
(it is part of the pool and they have no cleaner). The impact should be way less 
than IOUtil, which pools native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I have to add more detail, and I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to make them available to other threads again, but their size is quite 
small by default and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen until we allocate something smaller (which could happen, but is rare).
   IMO we should consider 2 separate improvements for a separate PR:
   - allow threads to stay alive forever, but just idle (and maybe using the FJ 
thread pool executor instead of the AMQ thread pool executor in that case): in 
that case we can consider such caches OK to be used by *all* threads 
(including non-Netty ones), because the leak just helps cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310467)
Time Spent: 5h  (was: 4h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load 
> of variable-sized writes, due to the amount of direct memory allocated and never 
> released or released late.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310468&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310468
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:25
Start Date: 11/Sep/19 10:25
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look to the way Netty handle thread local pools 
and I can confirm that `-Dio.netty.allocator.useCacheForAllThreads=false` 
should save non-netty threads from creating such thread local regions for 
tiny/small/medium sized buffers.
   While related to 
   > but they just contain wrappers to direct memory that cannot be released 
(is part of the pool and they have no cleaner). The impact should be way less 
then IOUtil that pool native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I need to add more detail, as I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to become available to other threads again, but their size is quite 
small by default, and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen unless we allocate something smaller (which could happen, but is rare).
   
   IMO we should consider two separate improvements in a separate PR:
   - allow threads to stay alive forever, just idling when unused (and maybe use 
the FJ thread pool executor instead of the AMQ thread pool executor in that 
case): then such caches would be fine to use for *all* threads (including 
non-Netty ones), because the retained memory is just there to cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
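   A hypothetical sketch of the first idea (pool sizes and class names are made 
up, not the actual Artemis executors): keeping core threads alive forever means 
any Netty thread-local cache they accumulate stays usable for future load, while 
letting core threads time out is what strands those caches:
   ```
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;

   public class KeepAliveExecutorSketch {
      public static void main(String[] args) {
         // Core threads idle forever: their thread-local caches keep serving future load.
         ThreadPoolExecutor keepAlive = new ThreadPoolExecutor(
            8, 8, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
         keepAlive.allowCoreThreadTimeOut(false); // the default, shown for contrast

         // Core threads may die when idle: whatever they cached is wasted until
         // the memory is eventually reclaimed.
         ThreadPoolExecutor timingOut = new ThreadPoolExecutor(
            8, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
         timingOut.allowCoreThreadTimeOut(true);

         keepAlive.shutdown();
         timingOut.shutdown();
      }
   }
   ```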
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310468)
Time Spent: 5h 10m  (was: 5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310465=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310465
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:24
Start Date: 11/Sep/19 10:24
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local pools 
and I can confirm that `-Dio.netty.allocator.useCacheForAllThreads=false` 
should save non-Netty threads from creating such thread-local regions for 
tiny/small/medium-sized buffers.
   Regarding my earlier statement:
   > but they just contain wrappers to direct memory that cannot be released 
(is part of the pool and they have no cleaner). The impact should be way less 
then IOUtil that pool native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I need to add more detail, as I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to become available to other threads again, but their size is quite 
small by default, and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen unless we allocate something smaller (which could happen, but is rare).
   IMO we should consider (for a separate issue) two separate improvements:
   - allow threads to stay alive forever, just idling when unused (and maybe use 
the FJ thread pool executor instead of the AMQ thread pool executor in that 
case): then such caches would be fine to use for *all* threads (including 
non-Netty ones), because the retained memory is just there to cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310465)
Time Spent: 4h 40m  (was: 4.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310466=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310466
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 10:24
Start Date: 11/Sep/19 10:24
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530319911
 
 
   @wy96f I've taken a second look at the way Netty handles thread-local pools 
and I can confirm that `-Dio.netty.allocator.useCacheForAllThreads=false` 
should save non-Netty threads from creating such thread-local regions for 
tiny/small/medium-sized buffers.
   Regarding my earlier statement:
   > but they just contain wrappers to direct memory that cannot be released 
(is part of the pool and they have no cleaner). The impact should be way less 
then IOUtil that pool native ByteBuffers holding exclusively native memory that 
won't be reused anymore...
   
   I need to add more detail, as I was wrong: the tiny/small/medium thread-local 
caches are actually leaking memory regions (with the default 
`io.netty.allocator.useCacheForAllThreads`) that need the holding thread to die 
in order to become available to other threads again, but their size is quite 
small by default, and the size I've chosen as `LARGE_MESSAGE_CHUNK_SIZE` (ie 100 
* 1024) is `medium` but doesn't fall into any of those caches, so no real leak 
will happen unless we allocate something smaller (which could happen, but is rare).
   IMO we should consider (for a separate issue) two separate improvements:
   - allow threads to stay alive forever, just idling when unused (and maybe use 
the FJ thread pool executor instead of the AMQ thread pool executor in that 
case): then such caches would be fine to use for *all* threads (including 
non-Netty ones), because the retained memory is just there to cope with future load
   - for the AMQ thread pool executor as it is, make the presence of such 
caches configurable and off by default
   
   wdyt?
   Thanks for the comment, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310466)
Time Spent: 4h 50m  (was: 4h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310443=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310443
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 09:43
Start Date: 11/Sep/19 09:43
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
just contain wrappers to direct memory that cannot be released (it is part of the 
pool and they have no cleaner). The impact should be way smaller than `IOUtil`, 
which pools native `ByteBuffer`s holding exclusively native memory that won't be 
reused anymore...
   
   Ideally we shouldn't have *any* thread locals, but we use them for several 
things (including the NIO factories and page read/write): I'm planning to 
replace them with the Netty pool, because of the difference in how thread 
locals are used. 
   Our thread locals in Artemis just hold memory that cannot be reused except 
by the same thread, while for Netty, thread locals are just a way to cache 
wrappers with a very limited impact on GC, and the pool really allows memory to 
be reused where it is most needed...
   
   Just to add more detail: consider that 
`PooledByteBufAllocator.PoolThreadLocalCache::initialValue` contains this 
logic (note the comment) when creating a thread-local `PoolThreadCache`:
   ```
   @Override
   protected synchronized PoolThreadCache initialValue() {
       final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
       final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);

       Thread current = Thread.currentThread();
       if (useCacheForAllThreads || current instanceof FastThreadLocalThread) {
           return new PoolThreadCache(
                   heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
                   DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
       }
       // No caching so just use 0 as sizes.
       return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, 0);
   }
   ```
   So, if we are not very happy about the Netty caching on non-Netty threads, 
we can disable those caches by setting 
`-Dio.netty.allocator.useCacheForAllThreads=false`
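   As a small, assumed-API sketch (the static accessors below are from Netty 
4.1's `PooledByteBufAllocator`; the flag must be set before the allocator class 
is initialized, e.g. on the broker's command line), one can print the effective 
defaults to confirm what the allocator will do:
   ```
   import io.netty.buffer.PooledByteBufAllocator;

   public class AllocatorCacheCheck {
      public static void main(String[] args) {
         // reflects -Dio.netty.allocator.useCacheForAllThreads
         System.out.println("useCacheForAllThreads = "
            + PooledByteBufAllocator.defaultUseCacheForAllThreads());
         // sizes of the per-thread caches that non-Netty threads would otherwise build up
         System.out.println("tinyCacheSize   = " + PooledByteBufAllocator.defaultTinyCacheSize());
         System.out.println("smallCacheSize  = " + PooledByteBufAllocator.defaultSmallCacheSize());
         System.out.println("normalCacheSize = " + PooledByteBufAllocator.defaultNormalCacheSize());
      }
   }
   ```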
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310443)
Time Spent: 4h 20m  (was: 4h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310442=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310442
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 09:43
Start Date: 11/Sep/19 09:43
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
just contain wrappers to direct memory that cannot be released (it is part of the 
pool and they have no cleaner). The impact should be way smaller than `IOUtil`, 
which pools native `ByteBuffer`s holding exclusively native memory that won't be 
reused anymore...
   
   Ideally we shouldn't have *any* thread locals, but we use them for several 
things (including the NIO factories and page read/write): I'm planning to 
replace them with the Netty pool, because of the difference in how thread 
locals are used. 
   Our thread locals in Artemis just hold memory that cannot be reused except 
by the same thread, while for Netty, thread locals are just a way to cache 
wrappers with a very limited impact on GC, and the pool really allows memory to 
be reused where it is most needed...
   
   Just to add more detail: consider that 
`PooledByteBufAllocator.PoolThreadLocalCache::initialValue` contains this 
logic (note the comment) when creating a thread-local `PoolThreadCache`:
   ```
   @Override
   protected synchronized PoolThreadCache initialValue() {
       final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
       final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);

       Thread current = Thread.currentThread();
       if (useCacheForAllThreads || current instanceof FastThreadLocalThread) {
           return new PoolThreadCache(
                   heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
                   DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
       }
       // No caching so just use 0 as sizes.
       return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, 0);
   }
   ```
   So, if we are not very happy about the Netty caching on non-Netty threads, 
we can disable those caches by setting 
`-Dio.netty.allocator.useCacheForAllThreads=false`
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310442)
Time Spent: 4h 10m  (was: 4h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310445=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310445
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 09:43
Start Date: 11/Sep/19 09:43
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
just contain wrappers to direct memory that cannot be released (it is part of the 
pool and they have no cleaner). The impact should be way smaller than `IOUtil`, 
which pools native `ByteBuffer`s holding exclusively native memory that won't be 
reused anymore...
   
   Ideally we shouldn't have *any* thread locals, but we use them for several 
things (including the NIO factories and page read/write): I'm planning to 
replace them with the Netty pool, because of the difference in how thread 
locals are used. 
   Our thread locals in Artemis just hold memory that cannot be reused except 
by the same thread, while for Netty, thread locals are just a way to cache 
wrappers with a very limited impact on GC, and the pool really allows memory to 
be reused where it is most needed...
   
   Just to add more detail: consider that 
`PooledByteBufAllocator.PoolThreadLocalCache::initialValue` contains this 
logic (note the comment) when creating a thread-local `PoolThreadCache`:
   ```
   @Override
   protected synchronized PoolThreadCache initialValue() {
       final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
       final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);

       Thread current = Thread.currentThread();
       if (useCacheForAllThreads || current instanceof FastThreadLocalThread) {
           return new PoolThreadCache(
                   heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
                   DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
       }
       // No caching so just use 0 as sizes.
       return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, 0);
   }
   ```
   So, if we are not very happy about the Netty caching on non-Netty threads, 
we can disable those caches by setting 
`-Dio.netty.allocator.useCacheForAllThreads=false` 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310445)
Time Spent: 4.5h  (was: 4h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310439=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310439
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 09:35
Start Date: 11/Sep/19 09:35
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
just contain wrappers to direct memory that cannot be released (it is part of the 
pool and they have no cleaner). The impact should be way smaller than `IOUtil`, 
which pools native `ByteBuffer`s holding exclusively native memory that won't be 
reused anymore...
   
   Ideally we shouldn't have *any* thread locals, but we use them for several 
things (including the NIO factories and page read/write): I'm planning to 
replace them with the Netty pool, because of the difference in how thread 
locals are used. 
   Our thread locals in Artemis just hold memory that cannot be reused except 
by the same thread, while for Netty, thread locals are just a way to cache 
wrappers with a very limited impact on GC, and the pool really allows memory to 
be reused where it is most needed...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310439)
Time Spent: 4h  (was: 3h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310418=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310418
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:51
Start Date: 11/Sep/19 08:51
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
just contain wrappers to direct memory that cannot be released (it is part of the 
pool). The impact should be way smaller than `IOUtil`, which pools native 
`ByteBuffer`s holding exclusively native memory that won't be reused anymore...
   
   Ideally we shouldn't have *any* thread locals, but we use them for several 
things (including the NIO factories and page read/write): I'm planning to 
replace them with the Netty pool, because of the difference in how thread 
locals are used. 
   Our thread locals in Artemis just hold memory that cannot be reused except 
by the same thread, while for Netty, thread locals are just a way to cache 
wrappers with a very limited impact on GC, and the pool really allows memory to 
be reused where it is most needed...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310418)
Time Spent: 3h 40m  (was: 3.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310412=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310412
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:47
Start Date: 11/Sep/19 08:47
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
would contain just wrappers to direct memory that cannot be released (it is part 
of the pool) and have no cleaner (so they won't collect anything when they contain 
ByteBuffers). The impact should be way smaller than `IOUtil`, which pools native 
`ByteBuffer`s holding exclusively native memory that won't be reused anymore...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310412)
Time Spent: 3h 10m  (was: 3h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310411=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310411
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:47
Start Date: 11/Sep/19 08:47
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530284944
 
 
   @wy96f Yes and no :)
   They would leak (at first look, but let me take a better look), but they 
would contain just wrappers to direct memory that cannot be released (it is part 
of the pool) and have no cleaner (so they won't collect anything when they contain 
ByteBuffers). The impact should be way smaller than `IOUtil`, which pools native 
`ByteBuffer`s holding exclusively native memory that won't be reused anymore...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310411)
Time Spent: 3h  (was: 2h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310410=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310410
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:42
Start Date: 11/Sep/19 08:42
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530283213
 
 
   > @wy96f
   > 
   > > PooledByteBufAllocator::directBuffer uses java's ThreadLocal. Wouldn't 
this result in leak in the case if thread terminates due to idle timeout as in 
#2199 ?
   > 
   > Do you mean the thread local arena? Or the thread local NIO ByteBuffer?
   > Anyway, at the end of its usage we always release the `ByteBuf` and Netty 
will take care to release any referenced resources: it's a pool so it leaks "by 
definition"
   
   I mean tinySubPageDirectCaches/smallSubPageDirectCaches/normalDirectCaches 
in PoolThreadCache. A PoolChunk would be added into these caches first when 
released (not yet returned to the shared arena). Given that PoolThreadCache is 
thread-local, would these caches leak if the thread terminates?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310410)
Time Spent: 2h 50m  (was: 2h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310391=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310391
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:13
Start Date: 11/Sep/19 08:13
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530273041
 
 
   @wy96f 
   
   > PooledByteBufAllocator::directBuffer uses java's ThreadLocal. Wouldn't 
this result in leak in the case if thread terminates due to idle timeout as in 
#2199 ?
   
   Do you mean the thread-local arena? Or the thread-local NIO ByteBuffer?
   Anyway, at the end of its usage we always release the `ByteBuf`, and Netty 
will take care of releasing any referenced resources: it's a pool, so it leaks "by 
definition" :+1: 
   
   
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310391)
Time Spent: 2h 40m  (was: 2.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large messages to be read/written in chunks by 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310384=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310384
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:06
Start Date: 11/Sep/19 08:06
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r323108903
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/JournalStorageManager.java
 ##
 @@ -832,46 +835,117 @@ public void stopReplication() {
   }
}
 
+   static final int LARGE_MESSAGE_CHUNK_SIZE = 100 * 1024;
+
+   private void addBytesUsingTempNativeBuffer(final SequentialFile file, final 
ActiveMQBuffer bytes) throws Exception {
+  assert file instanceof NIOSequentialFile;
+  //we can't use the actual content of it as it is and need to perform a 
copy into a direct ByteBuffer
+  int readableBytes = bytes.readableBytes();
+  final int requiredCapacity = Math.min(LARGE_MESSAGE_CHUNK_SIZE, 
readableBytes);
+  final ByteBuf tempBuffer = 
PooledByteBufAllocator.DEFAULT.directBuffer(requiredCapacity, requiredCapacity);
+  try {
+ int readerIndex = bytes.readerIndex();
+ while (readableBytes > 0) {
+final int size = Math.min(readableBytes, LARGE_MESSAGE_CHUNK_SIZE);
+final ByteBuffer nioBytes = tempBuffer.internalNioBuffer(0, size);
+final int position = nioBytes.position();
+bytes.getBytes(readerIndex, nioBytes);
+nioBytes.position(position);
+file.blockingWriteDirect(nioBytes, false, false);
+readerIndex += size;
+readableBytes -= size;
+ }
+  } finally {
+ tempBuffer.release();
+  }
+   }
+
public final void addBytesToLargeMessage(final SequentialFile file,
 final long messageId,
 final ActiveMQBuffer bytes) throws 
Exception {
   readLock();
   try {
  file.position(file.size());
  if (bytes.byteBuf() != null && bytes.byteBuf().nioBufferCount() == 1) 
{
-final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
-file.blockingWriteDirect(nioBytes, false, false);
-
-if (isReplicated()) {
-   //copy defensively bytes
-   final byte[] bytesCopy = new byte[bytes.readableBytes()];
-   bytes.getBytes(bytes.readerIndex(), bytesCopy);
-   replicator.largeMessageWrite(messageId, bytesCopy);
+//NIO -> need direct ByteBuffers, while JDBC the opposite
+if (file instanceof NIOSequentialFile) {
+   if (bytes.byteBuf().isDirect()) {
+  final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
+  file.blockingWriteDirect(nioBytes, false, false);
+   } else {
+  addBytesUsingTempNativeBuffer(file, bytes);
+   }
+} else if (!bytes.byteBuf().isDirect()) {
+   final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
 
 Review comment:
   it is on 
https://github.com/apache/activemq-artemis/pull/2832/files/4fc608a382d9d1d7eb0401e808141c7f939b21f9#diff-6c51f3299aaf6c901fe05c48fcfa3bb7R869
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310384)
Time Spent: 2.5h  (was: 2h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise 

[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310383=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310383
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 08:04
Start Date: 11/Sep/19 08:04
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r323108156
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/JournalStorageManager.java
 ##
 @@ -832,46 +835,117 @@ public void stopReplication() {
   }
}
 
+   static final int LARGE_MESSAGE_CHUNK_SIZE = 100 * 1024;
+
+   private void addBytesUsingTempNativeBuffer(final SequentialFile file, final 
ActiveMQBuffer bytes) throws Exception {
+  assert file instanceof NIOSequentialFile;
+  //we can't use the actual content of it as it is and need to perform a 
copy into a direct ByteBuffer
+  int readableBytes = bytes.readableBytes();
+  final int requiredCapacity = Math.min(LARGE_MESSAGE_CHUNK_SIZE, 
readableBytes);
+  final ByteBuf tempBuffer = 
PooledByteBufAllocator.DEFAULT.directBuffer(requiredCapacity, requiredCapacity);
+  try {
+ int readerIndex = bytes.readerIndex();
+ while (readableBytes > 0) {
+final int size = Math.min(readableBytes, LARGE_MESSAGE_CHUNK_SIZE);
+final ByteBuffer nioBytes = tempBuffer.internalNioBuffer(0, size);
+final int position = nioBytes.position();
+bytes.getBytes(readerIndex, nioBytes);
+nioBytes.position(position);
+file.blockingWriteDirect(nioBytes, false, false);
+readerIndex += size;
+readableBytes -= size;
+ }
+  } finally {
+ tempBuffer.release();
+  }
+   }
+
public final void addBytesToLargeMessage(final SequentialFile file,
 final long messageId,
 final ActiveMQBuffer bytes) throws 
Exception {
   readLock();
   try {
  file.position(file.size());
  if (bytes.byteBuf() != null && bytes.byteBuf().nioBufferCount() == 1) 
{
-final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
-file.blockingWriteDirect(nioBytes, false, false);
-
-if (isReplicated()) {
-   //copy defensively bytes
-   final byte[] bytesCopy = new byte[bytes.readableBytes()];
-   bytes.getBytes(bytes.readerIndex(), bytesCopy);
-   replicator.largeMessageWrite(messageId, bytesCopy);
+//NIO -> need direct ByteBuffers, while JDBC the opposite
+if (file instanceof NIOSequentialFile) {
+   if (bytes.byteBuf().isDirect()) {
+  final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
 
 Review comment:
   it is on 
https://github.com/apache/activemq-artemis/pull/2832/files/4fc608a382d9d1d7eb0401e808141c7f939b21f9#diff-6c51f3299aaf6c901fe05c48fcfa3bb7R869
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310383)
Time Spent: 2h 20m  (was: 2h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> 

[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310372=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310372
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 07:20
Start Date: 11/Sep/19 07:20
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r323071598
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/JournalStorageManager.java
 ##
 @@ -832,46 +835,117 @@ public void stopReplication() {
   }
}
 
+   static final int LARGE_MESSAGE_CHUNK_SIZE = 100 * 1024;
+
+   private void addBytesUsingTempNativeBuffer(final SequentialFile file, final 
ActiveMQBuffer bytes) throws Exception {
+  assert file instanceof NIOSequentialFile;
+  //we can't use the actual content of it as it is and need to perform a 
copy into a direct ByteBuffer
+  int readableBytes = bytes.readableBytes();
+  final int requiredCapacity = Math.min(LARGE_MESSAGE_CHUNK_SIZE, 
readableBytes);
+  final ByteBuf tempBuffer = 
PooledByteBufAllocator.DEFAULT.directBuffer(requiredCapacity, requiredCapacity);
+  try {
+ int readerIndex = bytes.readerIndex();
+ while (readableBytes > 0) {
+final int size = Math.min(readableBytes, LARGE_MESSAGE_CHUNK_SIZE);
+final ByteBuffer nioBytes = tempBuffer.internalNioBuffer(0, size);
+final int position = nioBytes.position();
+bytes.getBytes(readerIndex, nioBytes);
+nioBytes.position(position);
+file.blockingWriteDirect(nioBytes, false, false);
+readerIndex += size;
+readableBytes -= size;
+ }
+  } finally {
+ tempBuffer.release();
+  }
+   }
+
public final void addBytesToLargeMessage(final SequentialFile file,
 final long messageId,
 final ActiveMQBuffer bytes) throws 
Exception {
   readLock();
   try {
  file.position(file.size());
  if (bytes.byteBuf() != null && bytes.byteBuf().nioBufferCount() == 1) 
{
-final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
-file.blockingWriteDirect(nioBytes, false, false);
-
-if (isReplicated()) {
-   //copy defensively bytes
-   final byte[] bytesCopy = new byte[bytes.readableBytes()];
-   bytes.getBytes(bytes.readerIndex(), bytesCopy);
-   replicator.largeMessageWrite(messageId, bytesCopy);
+//NIO -> need direct ByteBuffers, while JDBC the opposite
+if (file instanceof NIOSequentialFile) {
+   if (bytes.byteBuf().isDirect()) {
+  final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
 
 Review comment:
   Do we need to check nioBufferCount() == 1 first?
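   For context, a hypothetical illustration (the buffers below are made up) of 
the difference that check captures: a composite buffer reports more than one 
underlying NIO buffer, which is presumably why the existing code guards on 
`nioBufferCount() == 1` before calling `internalNioBuffer`:
   ```
   import io.netty.buffer.ByteBuf;
   import io.netty.buffer.Unpooled;

   public class NioBufferCountSketch {
      public static void main(String[] args) {
         ByteBuf single = Unpooled.directBuffer(16).writeLong(1L);
         ByteBuf composite = Unpooled.compositeBuffer()
            .addComponent(true, Unpooled.directBuffer(16).writeLong(2L))
            .addComponent(true, Unpooled.directBuffer(16).writeLong(3L));

         System.out.println(single.nioBufferCount());    // 1 -> internalNioBuffer is safe
         System.out.println(composite.nioBufferCount()); // 2 -> needs a copy or per-component writes

         single.release();
         composite.release();
      }
   }
   ```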
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310372)
Time Spent: 2h 10m  (was: 2h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled until certain size limit (ie 
> jdk.nio.maxCachedBufferSize, as shown on 
> https://bugs.openjdk.java.net/browse/JDK-8147468) otherwise are freed right 
> after the write succeed.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of the size, leading to OOM issues on high load of 
> variable sized writes due to the amount of direct memory allocated and not 
> released/late released.
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811 and it check if such 
> pooling is happening, making large 

[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310373=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310373
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 07:20
Start Date: 11/Sep/19 07:20
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r323071740
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/JournalStorageManager.java
 ##
 @@ -832,46 +835,117 @@ public void stopReplication() {
   }
}
 
+   static final int LARGE_MESSAGE_CHUNK_SIZE = 100 * 1024;
+
+   private void addBytesUsingTempNativeBuffer(final SequentialFile file, final 
ActiveMQBuffer bytes) throws Exception {
+  assert file instanceof NIOSequentialFile;
+  //we can't use the actual content of it as it is and need to perform a 
copy into a direct ByteBuffer
+  int readableBytes = bytes.readableBytes();
+  final int requiredCapacity = Math.min(LARGE_MESSAGE_CHUNK_SIZE, 
readableBytes);
+  final ByteBuf tempBuffer = 
PooledByteBufAllocator.DEFAULT.directBuffer(requiredCapacity, requiredCapacity);
+  try {
+ int readerIndex = bytes.readerIndex();
+ while (readableBytes > 0) {
+final int size = Math.min(readableBytes, LARGE_MESSAGE_CHUNK_SIZE);
+final ByteBuffer nioBytes = tempBuffer.internalNioBuffer(0, size);
+final int position = nioBytes.position();
+bytes.getBytes(readerIndex, nioBytes);
+nioBytes.position(position);
+file.blockingWriteDirect(nioBytes, false, false);
+readerIndex += size;
+readableBytes -= size;
+ }
+  } finally {
+ tempBuffer.release();
+  }
+   }
+
public final void addBytesToLargeMessage(final SequentialFile file,
 final long messageId,
 final ActiveMQBuffer bytes) throws 
Exception {
   readLock();
   try {
  file.position(file.size());
  if (bytes.byteBuf() != null && bytes.byteBuf().nioBufferCount() == 1) 
{
-final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
-file.blockingWriteDirect(nioBytes, false, false);
-
-if (isReplicated()) {
-   //copy defensively bytes
-   final byte[] bytesCopy = new byte[bytes.readableBytes()];
-   bytes.getBytes(bytes.readerIndex(), bytesCopy);
-   replicator.largeMessageWrite(messageId, bytesCopy);
+//NIO -> need direct ByteBuffers, while JDBC the opposite
+if (file instanceof NIOSequentialFile) {
+   if (bytes.byteBuf().isDirect()) {
+  final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
+  file.blockingWriteDirect(nioBytes, false, false);
+   } else {
+  addBytesUsingTempNativeBuffer(file, bytes);
+   }
+} else if (!bytes.byteBuf().isDirect()) {
+   final ByteBuffer nioBytes = 
bytes.byteBuf().internalNioBuffer(bytes.readerIndex(), bytes.readableBytes());
 
 Review comment:
   Do we need to check nioBufferCount() == 1 first?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310373)
Time Spent: 2h 10m  (was: 2h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the 

[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310368=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310368
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 11/Sep/19 07:09
Start Date: 11/Sep/19 07:09
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530138441
 
 
   @wy96f please take a look: I know you've recently had fun with `ByteBuf`s :)
   I also see that the change introduced with 76d420590fa73aefb41713a5589dcec22588c594 has a memory leak similar to the one addressed by this PR, at 
https://github.com/apache/activemq-artemis/blob/e537fbfde06a5a09ef369401e715970c4003bd32/artemis-server/src/main/java/org/apache/activemq/artemis/core/paging/impl/Page.java#L149
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310368)
Time Spent: 2h  (was: 1h 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.
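
To make the reported behaviour concrete, here is a small self-contained sketch (plain JDK code, not Artemis code; the class and file names are made up): every write of a heap buffer below goes through a per-thread cached direct buffer inside NIO, and without -Djdk.nio.maxCachedBufferSize the largest copy stays cached per thread.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public final class HeapWriteDemo {
       public static void main(String[] args) throws IOException {
          try (FileChannel channel = FileChannel.open(Paths.get("large.msg"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
             // each heap buffer is copied by the JDK into a thread-local direct
             // buffer before the actual write; variable sizes make that cache grow
             for (int mb = 1; mb <= 32; mb++) {
                channel.write(ByteBuffer.wrap(new byte[mb * 1024 * 1024]));
             }
          }
       }
    }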



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310157=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310157
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 22:25
Start Date: 10/Sep/19 22:25
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530138441
 
 
   @wy96f please take a look, since you've recently had fun with `ByteBuf`s :)
   I also see that the change introduced with 76d420590fa73aefb41713a5589dcec22588c594 has a memory leak similar to the one addressed by this PR, at 
https://github.com/apache/activemq-artemis/blob/e537fbfde06a5a09ef369401e715970c4003bd32/artemis-server/src/main/java/org/apache/activemq/artemis/core/paging/impl/Page.java#L149
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310157)
Time Spent: 1h 50m  (was: 1h 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310156=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310156
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 22:24
Start Date: 10/Sep/19 22:24
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530138441
 
 
   @wy96f please take a look, since you've recently had fun with `ByteBuf`s :)
   I see too that the change introduced with has a memory leak similar to the 
one of this PR on 
https://github.com/apache/activemq-artemis/blob/e537fbfde06a5a09ef369401e715970c4003bd32/artemis-server/src/main/java/org/apache/activemq/artemis/core/paging/impl/Page.java#L149
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310156)
Time Spent: 1h 40m  (was: 1.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310150=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310150
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 22:04
Start Date: 10/Sep/19 22:04
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530138441
 
 
   @wy96f please take a look, since you've recently had fun with `ByteBuf`s :)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310150)
Time Spent: 1.5h  (was: 1h 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310081=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310081
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 20:25
Start Date: 10/Sep/19 20:25
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530104594
 
 
   @clebertsuconic it seems that I cannot add the label... please check the code, 
because I've found another part, on replication, where it could happen...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310081)
Time Spent: 1h 20m  (was: 1h 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310078=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310078
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 20:21
Start Date: 10/Sep/19 20:21
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530103241
 
 
   We shouldn't be using the native cache anyway... but just in case it happens in our 
codebase, I would rather have a limit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310078)
Time Spent: 1h  (was: 50m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310079=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310079
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 20:21
Start Date: 10/Sep/19 20:21
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530103498
 
 
   @franz1981 can you add labels on GitHub? Use the DO-NOT-MERGE-YET one?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310079)
Time Spent: 1h 10m  (was: 1h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310043=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310043
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 19:04
Start Date: 10/Sep/19 19:04
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530076449
 
 
   @clebertsuconic I still need to address a few bits of it, so please don't 
merge it yet
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310043)
Time Spent: 50m  (was: 40m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310042=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310042
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 19:04
Start Date: 10/Sep/19 19:04
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530076449
 
 
   @clebertsuconic I still need to address a few bits of it... I will put a 
proper label...
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310042)
Time Spent: 40m  (was: 0.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=310018=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-310018
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 18:32
Start Date: 10/Sep/19 18:32
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2832: ARTEMIS-2482 Large 
messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530064493
 
 
   On paging we cache manually, so it is a different "issue", and for many 
users it is probably fine.
   Re the property I'm not sure: IMO it is a good idea, but it will impact existing 
users where the caching is currently happening; they will start to perceive slower 
performance in a stealthy way.
   Anyway, I don't have a strong opinion on that, so I will do it if you prefer 
(I just need to verify whether the property is present in JDK 11 as well).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 310018)
Time Spent: 0.5h  (was: 20m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=309997=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-309997
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 17:49
Start Date: 10/Sep/19 17:49
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#issuecomment-530047928
 
 
   @franz1981 do we have any other reading operation that would lead to the same?
   
   Can you also change our scripts to include the property to avoid caching, in case 
anything like that is ever used?
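
For reference, the knob in question is a JVM startup flag; a hypothetical example of the kind of script addition being discussed (the value here is only an illustration, not a tested recommendation) would be passing

    -Djdk.nio.maxCachedBufferSize=262144

to the broker JVM, which caps the size of the temporary direct buffers NIO is willing to cache per thread; larger ones are freed right after the I/O completes.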
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 309997)
Time Spent: 20m  (was: 10m)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=309982=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-309982
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 10/Sep/19 17:27
Start Date: 10/Sep/19 17:27
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832
 
 
   Perform chunked read/write of large message files to save
   NIO from leaking native ByteBuffers:
   see https://bugs.openjdk.java.net/browse/JDK-8147468
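
As an illustration of the chunked-read half of this approach, here is a minimal sketch using the Netty pool (the class and method names are hypothetical, and the real PR works against Artemis' SequentialFile API rather than a raw FileChannel):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;
    import java.io.IOException;
    import java.nio.channels.FileChannel;

    final class ChunkedReadSketch {
       // read a large message file into 'destination' in bounded chunks,
       // reusing a single pooled direct buffer instead of letting NIO cache one
       static void readChunked(FileChannel channel, ByteBuf destination, int chunkSize) throws IOException {
          final ByteBuf chunk = PooledByteBufAllocator.DEFAULT.directBuffer(chunkSize, chunkSize);
          try {
             while (true) {
                chunk.clear();
                // writeBytes(channel, length) fills the direct chunk straight from the file
                final int read = chunk.writeBytes(channel, chunkSize);
                if (read <= 0) {
                   break;
                }
                destination.writeBytes(chunk);
             }
          } finally {
             chunk.release();
          }
       }
    }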
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 309982)
Remaining Estimate: 0h
Time Spent: 10m

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)