[jira] [Work logged] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?focusedWorklogId=315441=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315441
 ]

ASF GitHub Bot logged work on ARTEMIS-2496:
---

Author: ASF GitHub Bot
Created on: 20/Sep/19 03:18
Start Date: 20/Sep/19 03:18
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2843: ARTEMIS-2496 Revert 
catch up with zero-copy, as it's causing issues i…
URL: https://github.com/apache/activemq-artemis/pull/2843#issuecomment-533388136
 
 
   > @wy96f I've just run a naive test replacing FileRegion with ChunkedNioFile and it should work OOTB... but it seems that netty is not reading data from it... I need to dig deeper into it
   
   @franz1981
   I just noticed that `ChunkedWriteHandler` needs to be added to the pipeline when using `ChunkedFile`, see 
https://github.com/netty/netty/blob/ff7a9fa091a8bf2e10020f83fc4df1c44098/example/src/main/java/io/netty/example/file/FileServer.java#L77
   It was my bad that I missed this before the revert. When using `ChunkedFile` (in the ssl case), an exception would be thrown: 
https://github.com/netty/netty/blob/ff7a9fa091a8bf2e10020f83fc4df1c44098/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java#L245
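
   For readers following the thread, here is a minimal Netty sketch of that point (illustrative only, not the actual Artemis wiring; the class name and the `sendFile` helper are made up): `ChunkedWriteHandler` has to sit in the pipeline, otherwise a `ChunkedFile`/`ChunkedNioFile` handed to `write()` is never drained into the channel.

```java
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.stream.ChunkedNioFile;
import io.netty.handler.stream.ChunkedWriteHandler;

import java.nio.channels.FileChannel;

public class ChunkedFileInitializer extends ChannelInitializer<SocketChannel> {

   @Override
   protected void initChannel(SocketChannel ch) {
      // The SSL handler (if any) would be added first; ChunkedWriteHandler then
      // feeds it ordinary ByteBufs, one chunk at a time.
      ch.pipeline().addLast(new ChunkedWriteHandler());
      // ... application handlers follow ...
   }

   // Hypothetical helper: what a FileRegion write would be replaced with.
   static ChannelFuture sendFile(ChannelHandlerContext ctx, FileChannel file,
                                 long offset, long length) throws Exception {
      return ctx.writeAndFlush(new ChunkedNioFile(file, offset, length, 8192));
   }
}
```

   Because the handler turns every chunk into a plain ByteBuf, this path also works through an SslHandler, which FileRegion-based zero-copy does not.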
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315441)
Time Spent: 2h 10m  (was: 2h)

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
> Attachments: runTillFails.sh
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (AMQ-7306) How to configure automatic cleaning strategy when ActiveMQ + zookeeper disk limit reaches its maximum

2019-09-19 Thread liuhanjiang (Jira)
liuhanjiang created AMQ-7306:


 Summary: How to configure automatic cleaning strategy when 
ActiveMQ + zookeeper disk limit reaches its maximum
 Key: AMQ-7306
 URL: https://issues.apache.org/jira/browse/AMQ-7306
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.15.0
Reporter: liuhanjiang
 Attachments: activemq.xml

The maximum <storeUsage> is configured inside <systemUsage> in the persistence 
configuration (see the attached activemq.xml).

When the maximum value is reached, the producer will be blocked. Why is there 
no automatic cleanup strategy?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (AMQ-7305) How to configure an automatic cleanup strategy when the disk limit reaches its maximum

2019-09-19 Thread Christopher L. Shannon (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher L. Shannon closed AMQ-7305.
---
Resolution: Invalid

Not in English, and LevelDB is no longer supported

> How to configure an automatic cleanup strategy when the disk limit reaches its maximum
> 
>
> Key: AMQ-7305
> URL: https://issues.apache.org/jira/browse/AMQ-7305
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.15.9
>Reporter: liuhanjiang
>Priority: Major
> Attachments: activemq.xml
>
>
> The maximum store usage is configured in the persistence configuration 
> (see the attached activemq.xml).
> 
> When the maximum value is reached, the producer is blocked. Why is there no automatic cleanup strategy?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-6391) Memory leak with FailoverTransport when sending TX messages from MDB

2019-09-19 Thread Jonathan S Fisher (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933631#comment-16933631
 ] 

Jonathan S Fisher commented on AMQ-6391:


One more point of follow-up: earlier you mentioned rmIdFromConnectionId. I 
noticed the dire warning in this article:
[https://access.redhat.com/documentation/en-us/red_hat_jboss_a-mq/6.1/html-single/integrating_with_jboss_enterprise_application_platform/index]

However, I can't seem to find any documentation around this property. Could you 
possibly share your thoughts on how you arrived at that conclusion? Does that 
setting have side effects?

> Memory leak with FailoverTransport when sending TX messages from MDB
> 
>
> Key: AMQ-6391
> URL: https://issues.apache.org/jira/browse/AMQ-6391
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Patrik Dudits
>Priority: Major
> Attachments: 0001-AMQ-6391-test.patch
>
>
> We observe a memory leak in 
> {{FailoverTransport.stateTracker.connectionStates.transactions}} when using 
> XA transactions in activemq-rar, sending a message within the same transaction and 
> not using {{useInboundSession}}.
> In such a constellation there are two connections enlisted within the same 
> transaction. During commit the transaction manager will execute commit on only one 
> of the resources, per JTA 1.2 section 3.3.1 ("(TransactionManager) ensures 
> that the same resource manager only receives one set of prepare-commit calls 
> for completing the target global transaction".) 
> [TransactionContext|https://github.com/apache/activemq/blob/a65f5e7c2077e048a2664339f6425d73948d71ce/activemq-client/src/main/java/org/apache/activemq/TransactionContext.java#L478]
>  will propagate the afterCommit to all contexts participating in the same 
> transaction. However, this is not enough for {{ConnectionStateTracker}}, 
> which only reacts to the [TransactionInfo 
> command|https://github.com/apache/activemq/blob/a65f5e7c2077e048a2664339f6425d73948d71ce/activemq-client/src/main/java/org/apache/activemq/TransactionContext.java#L469].
>  In effect, when two connections are enlisted in the same transaction, only the 
> commands of one of them are cleared upon commit, leading to a memory leak.
> Since I presume the {{TransactionInfo}} should be sent only once for the commit 
> of a single transaction, {{ConnectionStateTracker}} needs to clear state for 
> the acknowledged transaction regardless of the connection id in the transaction 
> command.
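
A rough illustration of the fix direction described in the last paragraph, assuming a simplified model of the tracker; the class and field names below are hypothetical and only mirror the shape of {{connectionStates.transactions}}, they are not the actual ActiveMQ code:

{code:java}
// Hypothetical sketch only, not the real ConnectionStateTracker.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class TransactionStateTrackerSketch {

   // connectionId -> (transactionId -> tracked commands), mirroring the shape of
   // FailoverTransport.stateTracker.connectionStates.transactions
   private final Map<String, Map<String, Object>> connectionStates = new ConcurrentHashMap<>();

   void onTransactionCommitted(String transactionId) {
      // Clear the committed transaction for *all* tracked connections, not only the
      // connection whose id happens to be on the TransactionInfo command; otherwise
      // the second enlisted connection keeps its commands forever and leaks memory.
      for (Map<String, Object> transactions : connectionStates.values()) {
         transactions.remove(transactionId);
      }
   }
}
{code}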



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-7232) Incorrect message counters when using virtual destinations

2019-09-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré updated AMQ-7232:
--
Component/s: Broker

> Incorrect message counters when using virtual destinations
> --
>
> Key: AMQ-7232
> URL: https://issues.apache.org/jira/browse/AMQ-7232
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Reporter: Lionel Cons
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Attachments: Virtual Destinations.jpg
>
>
> ActiveMQ supports [virtual 
> destinations|https://activemq.apache.org/virtual-destinations] to magically 
> link one or more queues to a topic.
> It also supports [JMX attributes|https://activemq.apache.org/jmx.html] to 
> count messages going through the different destinations in a broker. There 
> are both per-destination attributes ({{EnqueueCount}} and {{DequeueCount}}) 
> and per-broker attributes ({{TotalEnqueueCount}} and {{TotalDequeueCount}}).
> Unfortunately, these two features do not work well together.
> Take for instance the following scenario:
>  * one topic ({{/topic/T}})
>  * two virtual queues attached to it ({{/queue/Consumer.A.T}} and 
> {{/queue/Consumer.B.T}})
>  * one topic producer ({{PT1}})
>  * two queue consumers on each virtual queue ({{CA1}}, {{CA2}}, {{CB1}} and 
> {{CB2}})
> !Virtual Destinations.jpg!
> When sending a single message, we get:
>  * {{/topic/T}}: {{EnqueueCount += 1}} and {{DequeueCount += 0}}
>  * {{/queue/Consumer.A.T}}: {{EnqueueCount += 1}} and {{DequeueCount += 1}}
>  * {{/queue/Consumer.B.T}}: {{EnqueueCount += 1}} and {{DequeueCount += 1}}
>  * at broker level: {{TotalEnqueueCount += 3}} and {{TotalDequeueCount += 2}}
> This is not consistent: when the message leaves the topic to go to the 
> virtual queues, {{DequeueCount}} (on the topic) does not change while 
> {{EnqueueCount}} (on the queues) does change.
> At broker level, {{TotalEnqueueCount}} gets incremented too much, giving the 
> impression that 3 messages have been received.
> The main question is: should the counters be incremented when a message is 
> magically forwarded from the topic to the attached virtual queues?
> I would argue that these counters should *not* change when messages move 
> internally (i.e. along dashed lines). This way, we can continue to have 
> {{TotalEnqueueCount}} being the sum of all {{EnqueueCount}} and at the same 
> time representing the number of messages received (globally) by the broker. 
> Idem for {{TotalDequeueCount}} and {{DequeueCount}}.
> IMHO, these counters should only change when messages move along solid lines. 
> If we want to track the internals (i.e. dashed lines) then we should have an 
> additional counter, a bit like we already have {{ForwardCount}} for network 
> of brokers.
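
For anyone who wants to observe the counters described above, a small standalone sketch that reads them over JMX; the service URL, broker name ({{localhost}}) and destination name are assumptions matching the example scenario and may need adjusting:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CounterDump {
   public static void main(String[] args) throws Exception {
      // Assumed defaults: local broker named "localhost" with JMX on port 1099.
      JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
      try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
         MBeanServerConnection mbs = connector.getMBeanServerConnection();

         // Per-broker counters
         ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
         System.out.println("TotalEnqueueCount = " + mbs.getAttribute(broker, "TotalEnqueueCount"));
         System.out.println("TotalDequeueCount = " + mbs.getAttribute(broker, "TotalDequeueCount"));

         // Per-destination counters for one of the virtual queues
         ObjectName queueA = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost,"
               + "destinationType=Queue,destinationName=Consumer.A.T");
         System.out.println("EnqueueCount(Consumer.A.T) = " + mbs.getAttribute(queueA, "EnqueueCount"));
         System.out.println("DequeueCount(Consumer.A.T) = " + mbs.getAttribute(queueA, "DequeueCount"));
      }
   }
}
{code}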



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (AMQ-7232) Incorrect message counters when using virtual destinations

2019-09-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré reassigned AMQ-7232:
-

Assignee: Jean-Baptiste Onofré

> Incorrect message counters when using virtual destinations
> --
>
> Key: AMQ-7232
> URL: https://issues.apache.org/jira/browse/AMQ-7232
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Lionel Cons
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Attachments: Virtual Destinations.jpg
>
>
> ActiveMQ supports [virtual 
> destinations|https://activemq.apache.org/virtual-destinations] to magically 
> link one or more queues to a topic.
> It also supports [JMX attributes|https://activemq.apache.org/jmx.html] to 
> count messages going through the different destinations in a broker. There 
> are both per-destination attributes ({{EnqueueCount}} and {{DequeueCount}}) 
> and per-broker attributes ({{TotalEnqueueCount}} and {{TotalDequeueCount}}).
> Unfortunately, these two features do not work well together.
> Take for instance the following scenario:
>  * one topic ({{/topic/T}})
>  * two virtual queues attached to it ({{/queue/Consumer.A.T}} and 
> {{/queue/Consumer.B.T}})
>  * one topic producer ({{PT1}})
>  * two queue consumers on each virtual queue ({{CA1}}, {{CA2}}, {{CB1}} and 
> {{CB2}})
> !Virtual Destinations.jpg!
> When sending a single message, we get:
>  * {{/topic/T}}: {{EnqueueCount += 1}} and {{DequeueCount += 0}}
>  * {{/queue/Consumer.A.T}}: {{EnqueueCount += 1}} and {{DequeueCount += 1}}
>  * {{/queue/Consumer.B.T}}: {{EnqueueCount += 1}} and {{DequeueCount += 1}}
>  * at broker level: {{TotalEnqueueCount += 3}} and {{TotalDequeueCount += 2}}
> This is not consistent: when the message leaves the topic to go to the 
> virtual queues, {{DequeueCount}} (on the topic) does not change while 
> {{EnqueueCount}} (on the queues) does change.
> At broker level, {{TotalEnqueueCount}} gets incremented too much, giving the 
> impression that 3 messages have been received.
> The main question is: should the counters be incremented when a message is 
> magically forwarded from the topic to the attached virtual queues?
> I would argue that these counters should *not* change when messages move 
> internally (i.e. along dashed lines). This way, we can continue to have 
> {{TotalEnqueueCount}} being the sum of all {{EnqueueCount}} and at the same 
> time representing the number of messages received (globally) by the broker. 
> Idem for {{TotalDequeueCount}} and {{DequeueCount}}.
> IMHO, these counters should only change when messages move along solid lines. 
> If we want to track the internals (i.e. dashed lines) then we should have an 
> additional counter, a bit like we already have {{ForwardCount}} for network 
> of brokers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7305) How to configure an automatic cleanup strategy when the disk limit reaches its maximum

2019-09-19 Thread Jira


[ 
https://issues.apache.org/jira/browse/AMQ-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933409#comment-16933409
 ] 

Jean-Baptiste Onofré commented on AMQ-7305:
---

Do you mind translating it into English, please?

> How to configure an automatic cleanup strategy when the disk limit reaches its maximum
> 
>
> Key: AMQ-7305
> URL: https://issues.apache.org/jira/browse/AMQ-7305
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.15.9
>Reporter: liuhanjiang
>Priority: Major
> Attachments: activemq.xml
>
>
> The maximum store usage is configured in the persistence configuration 
> (see the attached activemq.xml).
> 
> When the maximum value is reached, the producer is blocked. Why is there no automatic cleanup strategy?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread clebert suconic (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933377#comment-16933377
 ] 

clebert suconic commented on ARTEMIS-2496:
--

By the time someone tries this, you may need to use a different branch (master), but the 
instructions are pretty much the same.

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
> Attachments: runTillFails.sh
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933370#comment-16933370
 ] 

Francesco Nigro commented on ARTEMIS-2496:
--

Here are some instructions to help reproduce the issue:
* clone https://github.com/ehsavoie/wildfly/tree/WFLY-12304
* in pom.xml, change the Artemis version to 2.11.0-SNAPSHOT (or whatever version of artemis we want to test)
* be sure that the artemis broker referenced above is installed in the mvn repo, i.e. run on it: $ mvn -Pdev -DskipTests clean install
* run on wildfly: $ mvn -DskipTests clean install
* run the attached script on wildfly: ./runTillFails.sh


> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
> Attachments: runTillFails.sh
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-2496:
-
Attachment: runTillFails.sh

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
> Attachments: runTillFails.sh
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-7305) How to configure an automatic cleanup strategy when the disk limit reaches its maximum

2019-09-19 Thread liuhanjiang (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuhanjiang updated AMQ-7305:
-
Attachment: activemq.xml

> How to configure an automatic cleanup strategy when the disk limit reaches its maximum
> 
>
> Key: AMQ-7305
> URL: https://issues.apache.org/jira/browse/AMQ-7305
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.15.9
>Reporter: liuhanjiang
>Priority: Major
> Attachments: activemq.xml
>
>
> The maximum store usage is configured in the persistence configuration 
> (see the attached activemq.xml).
> 
> When the maximum value is reached, the producer is blocked. Why is there no automatic cleanup strategy?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (AMQ-7305) How to configure an automatic cleanup strategy when the disk limit reaches its maximum

2019-09-19 Thread liuhanjiang (Jira)
liuhanjiang created AMQ-7305:


 Summary: How to configure an automatic cleanup strategy when the disk limit reaches its maximum
 Key: AMQ-7305
 URL: https://issues.apache.org/jira/browse/AMQ-7305
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store
Affects Versions: 5.15.9
Reporter: liuhanjiang


The maximum store usage is configured in the persistence configuration.

When the maximum value is reached, the producer is blocked. Why is there no automatic cleanup strategy?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (AMQ-7258) ActiveMQ does not start if Karaf is offline (SAXParseException)

2019-09-19 Thread David Hilton (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932494#comment-16932494
 ] 

David Hilton edited comment on AMQ-7258 at 9/19/19 11:40 AM:
-

Apologies if my comments are not useful, but these issues (mentioned in the 
Jira description) seem to be present on Karaf 4.2.1, 4.2.2, 4.2.3, 4.2.4, 
4.2.5 and 4.2.6 with ActiveMQ 5.15.10.

Karaf 4.1.7 seems to work fine with ActiveMQ 5.15.10:
{noformat}
karaf@root()> feature:list | grep active
activemq-broker-noweb | 5.15.10 |   | Uninstalled | activemq-5.15.10      | Full ActiveMQ broker with default configuration
activemq-broker       | 5.15.10 | x | Started     | activemq-5.15.10      | Full ActiveMQ broker with default configuration a
activemq-camel        | 5.15.10 |   | Uninstalled | activemq-5.15.10      |
activemq-web-console  | 5.15.10 |   | Started     | activemq-5.15.10      |
activemq-blueprint    | 5.15.10 |   | Uninstalled | activemq-5.15.10      |
activemq-amqp-client  | 5.15.10 |   | Uninstalled | activemq-5.15.10      | ActiveMQ AMQP protocol client libraries
activemq-client       | 5.15.10 |   | Started     | activemq-core-5.15.10 | ActiveMQ client libraries
activemq-cf           | 5.15.10 |   | Uninstalled | activemq-core-5.15.10 | ActiveMQ ConnectionFactory from config
activemq              | 5.15.10 |   | Started     | activemq-core-5.15.10 | ActiveMQ broker libraries
{noformat}
However, Karaf 4.2.6 is obviously far better than 4.1.7. It's unfair of me to 
ask [~jbonofre], but is there any possible workaround to get Karaf 4.2.6 to work 
with ActiveMQ 5.15.10 (again, I appreciate it's not really your problem)?

 


was (Author: davidhilton68):
Apologies if my comments are not useful, but these issues seem to be present 
on Karaf 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5 and 4.2.6.

Surprised it's not cropped up before.

Is there a workaround (other than Karaf 4.1.7)?

> ActiveMQ does not start if Karaf is offline (SAXParseException)
> ---
>
> Key: AMQ-7258
> URL: https://issues.apache.org/jira/browse/AMQ-7258
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: OSGi/Karaf
>Affects Versions: 5.15.9
> Environment: Karaf, Offline
>Reporter: Jonas
>Assignee: Jean-Baptiste Onofré
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> To reproduce:
>  - Download, unpack and start karaf 4.2.6
> feature:repo-add activemq
>  feature:install activemq-broker
> ActiveMQ will start successfully.
>  Now stop karaf, go offline and start karaf again.
> This time the exception below can be found in the log and ActiveMQ fails to 
> start.
> {code:java}
> Caused by: 
> org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 
> 24 in XML document from URL 
> [file:/Users/jkop/Downloads/apache-karaf-4.2.6/etc/activemq.xml] is invalid; 
> nested exception is org.xml.sax.SAXParseException; lineNumber: 24; 
> columnNumber: 101; cvc-elt.1: Cannot find the declaration of element 'beans'.
>     at 
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:404)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:224)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:195)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:257)
>  ~[98:org.apache.servicemix.bundles.spring-beans:5.1.7.RELEASE_1]
>     at 
> 

[jira] [Work logged] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?focusedWorklogId=314944=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314944
 ]

ASF GitHub Bot logged work on ARTEMIS-2496:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 10:09
Start Date: 19/Sep/19 10:09
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2843: ARTEMIS-2496 Revert 
catch up with zero-copy, as it's causing issues i…
URL: https://github.com/apache/activemq-artemis/pull/2843#issuecomment-533063407
 
 
   @wy96f I've just run a naive test replacing FileRegion with ChunkedNioFile and it should work OOTB... but it seems that netty is not reading data from it... I need to dig deeper into it
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314944)
Time Spent: 2h  (was: 1h 50m)

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2482) Large messages could leak native ByteBuffers

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2482?focusedWorklogId=314935=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314935
 ]

ASF GitHub Bot logged work on ARTEMIS-2482:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 10:02
Start Date: 19/Sep/19 10:02
Worklog Time Spent: 10m 
  Work Description: wy96f commented on pull request #2832: ARTEMIS-2482 
Large messages could leak native ByteBuffers
URL: https://github.com/apache/activemq-artemis/pull/2832#discussion_r326091918
 
 

 ##
 File path: 
artemis-server/src/main/java/org/apache/activemq/artemis/core/persistence/impl/journal/LargeServerMessageImpl.java
 ##
 @@ -511,6 +512,44 @@ protected void closeFile() throws Exception {
   }
}
 
+   private static int read(final SequentialFile file, final ByteBuffer 
bufferRead) throws Exception {
 
 Review comment:
   Agreed :)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314935)
Time Spent: 7h 40m  (was: 7.5h)

> Large messages could leak native ByteBuffers
> 
>
> Key: ARTEMIS-2482
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker, OpenWire
>Affects Versions: 2.10.0
>Reporter: Francesco Nigro
>Priority: Major
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::encode are relying on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as described in 
> https://bugs.openjdk.java.net/browse/JDK-8147468); otherwise they are freed right 
> after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high load of 
> variable-sized writes, due to the amount of direct memory allocated and not 
> released (or released late).
> This should be an alternative fix for 
> https://issues.apache.org/jira/browse/ARTEMIS-1811: it checks whether such 
> pooling is happening and makes large messages be read/written in chunks, 
> using the Netty ByteBuf pool to handle any intermediate buffer.
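
A simplified sketch of the chunking idea described above (the class and method names are assumptions, not the actual Artemis change): copy large message data through a small, fixed-size buffer borrowed from Netty's pool, so that NIO never caches an oversized thread-local direct buffer.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

final class ChunkedLargeMessageCopy {

   private static final int CHUNK_SIZE = 8 * 1024;

   // Copies a large message file in fixed-size pieces, borrowing the intermediate
   // buffer from Netty's pool instead of letting NIO cache a thread-local direct
   // buffer sized to the whole write.
   static void copy(FileChannel source, FileChannel destination) throws IOException {
      final ByteBuf chunk = PooledByteBufAllocator.DEFAULT.directBuffer(CHUNK_SIZE, CHUNK_SIZE);
      try {
         final ByteBuffer nio = chunk.internalNioBuffer(0, CHUNK_SIZE);
         while (source.read(nio) != -1) {
            nio.flip();
            while (nio.hasRemaining()) {
               destination.write(nio);
            }
            nio.clear();
         }
      } finally {
         chunk.release(); // return the buffer to the pool
      }
   }
}
{code}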



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?focusedWorklogId=314921=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314921
 ]

ASF GitHub Bot logged work on ARTEMIS-2496:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 09:36
Start Date: 19/Sep/19 09:36
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2843: ARTEMIS-2496 Revert 
catch up with zero-copy, as it's causing issues i…
URL: https://github.com/apache/activemq-artemis/pull/2843#issuecomment-533051423
 
 
   
   > Another thing I've noticed: before the revert, `ChunkedFile` wasn't working for me... do we have tests to verify it?
   
   @franz1981 There is no test. I copied the code from 
https://github.com/netty/netty/blob/ff7a9fa091a8bf2e10020f83fc4df1c44098/example/src/main/java/io/netty/example/file/FileServerHandler.java#L52
   
   `ChunkedFile` will read the file into a ByteBuf, which is then written to the 
socket channel, see 
https://github.com/netty/netty/blob/ff7a9fa091a8bf2e10020f83fc4df1c44098/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L242
   It should work, since the operations are general. What problem did you encounter 
with `ChunkedFile`?
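
   To make "the operations are general" concrete, here is a simplified sketch of what `ChunkedWriteHandler` does with a `ChunkedInput` (the class name is made up, and the real handler also deals with channel writability, progress and failure):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.stream.ChunkedInput;

final class ChunkedDrainSketch {

   // Simplified: pull chunks until the input is exhausted and write each one as a
   // plain ByteBuf, so downstream handlers (including SSL) only ever see buffers.
   static void drain(ChannelHandlerContext ctx, ChunkedInput<ByteBuf> input) throws Exception {
      final ByteBufAllocator allocator = ctx.alloc();
      while (!input.isEndOfInput()) {
         ByteBuf chunk = input.readChunk(allocator);
         if (chunk == null) {
            break; // no data available right now
         }
         ctx.write(chunk);
      }
      ctx.flush();
      input.close();
   }
}
```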
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314921)
Time Spent: 1h 50m  (was: 1h 40m)

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?focusedWorklogId=314908=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314908
 ]

ASF GitHub Bot logged work on ARTEMIS-2496:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 08:43
Start Date: 19/Sep/19 08:43
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on issue #2843: ARTEMIS-2496 Revert 
catch up with zero-copy, as it's causing issues i…
URL: https://github.com/apache/activemq-artemis/pull/2843#issuecomment-533031566
 
 
   @wy96f Thanks for reaching out. Xnio is using 
https://github.com/xnio/netty-xnio-transport/blob/0.1/src/main/java/org/xnio/netty/transport/AbstractXnioSocketChannel.java#L149
 which transparently receives the flushed data into Netty, i.e. there is no custom 
child of `Connection`.
   
   Another thing I've noticed: before the revert, `ChunkedFile` wasn't working 
for me... do we have tests to verify it?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314908)
Time Spent: 1h 40m  (was: 1.5h)

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other issues within 
> Artemis.
> For now I'm reverting the change from ARTEMIS-2336,
> and we need more investigation to bring it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2496) Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration with artemis

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2496?focusedWorklogId=314903=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314903
 ]

ASF GitHub Bot logged work on ARTEMIS-2496:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 08:32
Start Date: 19/Sep/19 08:32
Worklog Time Spent: 10m 
  Work Description: wy96f commented on issue #2843: ARTEMIS-2496 Revert 
catch up with zero-copy, as it's causing issues i…
URL: https://github.com/apache/activemq-artemis/pull/2843#issuecomment-533027655
 
 
   @clebertsuconic @franz1981 Hi, I didn't use wildfly/xnio. Does xnio use an 
HttpConnection which implements Connection, like InVMConnection/NettyConnection?
   
   ```
   if (connection != null && connection.getTransportConnection() instanceof NettyConnection) {
      bufferSize -= dataSize;
      isNetty = true;
   }
   buffer = createPacket(connection, bufferSize);
   encodeHeader(buffer);
   encodeRest(buffer, connection);
   if (!isNetty) {
      if (buffer.byteBuf() != null && buffer.byteBuf().nioBufferCount() == 1 && buffer.byteBuf().isDirect()) {
         final ByteBuffer byteBuffer = buffer.byteBuf().internalNioBuffer(buffer.writerIndex(), buffer.writableBytes());
         readFile(byteBuffer);
      } else {
         final ByteBuf byteBuffer = PooledByteBufAllocator.DEFAULT.directBuffer(buffer.writableBytes(), buffer.writableBytes());
         try {
            final ByteBuffer nioBuffer = byteBuffer.internalNioBuffer(0, buffer.writableBytes());
            final int readBytes = readFile(nioBuffer);
            if (readBytes > 0) {
               // still use byteBuf to copy data
               buffer.writeBytes(byteBuffer, 0, readBytes);
            }
         } finally {
            byteBuffer.release();
         }
      }
      buffer.writerIndex(buffer.capacity());
   }
   encodeSize(buffer, encodedSize);
   return buffer;
   ```
   If the connection is not a NettyConnection, the file data is read into the buffer.
   
   Then, in ChannelImpl::send:
   ```
   connection.getTransportConnection().write(buffer);
   connection.getTransportConnection().write(raf, fileChannel, offset, dataSize,
         callback == null ? null : (ChannelFutureListener) future -> callback.done(future == null || future.isSuccess()));
   ```
   Both the buffer and the file will be written. For InVMConnection, no file 
data is actually transferred:
   ```
   @Override
   public void write(RandomAccessFile raf,
                     FileChannel fileChannel,
                     long offset,
                     int dataSize,
                     final ChannelFutureListener futureListener) {
      if (futureListener == null) {
         return;
      }
      try {
         executor.execute(() -> {
            try {
               futureListener.operationComplete(null);
            } catch (Exception e) {
               throw new IllegalStateException(e);
            }
         });
      } catch (RejectedExecutionException e) {
      }
   }
   ```
   But if xnio implements a connection that transfers the file data a second time in 
its file send method, the mechanism is broken. Not sure whether the issue is caused by this?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314903)
Time Spent: 1.5h  (was: 1h 20m)

> Use of Netty FileRegion on ReplicationCatch is breaking wildfly integration 
> with artemis
> 
>
> Key: ARTEMIS-2496
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2496
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.11.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is of course an issue with the Wildfly integration, but it seems something in 
> our recent changes is breaking replication on Wildfly.
> My biggest concern is that paging catch-up seems to be silently failing 
> in our testsuite, and some other issues are currently hidden.
> Wildfly has an extra layer on top of Netty: 
> https://github.com/xnio/netty-xnio-transport/tree/0.1
> But the main thing here is that there seem to be other