[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352685
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:59
Start Date: 03/Dec/19 14:59
Worklog Time Spent: 10m 
  Work Description: asfgit commented on pull request #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 352685)
Time Spent: 1h 50m  (was: 1h 40m)

> Duplicate messages created by cluster lead to OOME crash
> 
>
> Key: ARTEMIS-2560
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.10.1
>Reporter: Mikko Niemi
>Priority: Major
> Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> server0-broker.xml, server1-broker.xml
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Summary: When using a two-node cluster with a very simple configuration (see 
> the attached broker.xml files), duplicate messages are generated in the queue 
> when a Python client is used to consume messages one by one from alternating 
> nodes. Duplicates keep being generated until an OutOfMemoryError (OOME) 
> crashes the broker.
> Detailed description of how to reproduce this problem:
>  # Create a two-node cluster using the attached broker.xml files. The nodes 
> are called server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside the broker.xml file on node server0 (see the attached Python producer). 
> The produced messages have identical content. Command to produce the messages 
> using the attached Python producer: "python python-qpid-producer.py -u 
> $username -p $password -H server0 -a exampleQueue -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using the attached Python consumer. The 
> cluster balances the messages to server1 and the total number of messages is 
> decreased by one. After the message is consumed, the session and connection 
> are closed. Command to consume a message using the attached Python consumer: 
> "python python-qpid-consumer.py -u $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using the attached Python consumer. The 
> cluster balances the messages to server0 and the total number of messages is 
> decreased by one. After the message is consumed, the session and connection 
> are closed. Command to consume a message using the attached Python consumer: 
> "python python-qpid-consumer.py -u $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using the attached Python consumer. The 
> cluster balances the messages to server1, but this time the total number of 
> messages increases dramatically.
>  # If consumption continues in the manner described above (one message from 
> one node, then one message from the other node), more messages keep appearing 
> in the queue until the broker runs out of memory and crashes.
> Technical details for the Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to the above, the following variations have been tested; the 
> problem still occurs with all of them:
>  * The protocol was changed to STOMP.
>  * Window-based flow control (consumerWindowSize) was turned off on both the 
> client and the server side; see 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  * The implementation was changed to Java using the Apache Qpid JMS library 
> (version 0.39.0 for the producer, version 0.46.0 for the consumer).
> If this is not a bug, I would be very happy for any solution to this problem, 
> whether that is pointing out a mistake in the configuration or in the 
> consumer, explaining that this is intended behaviour, or some other 
> explanation.
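
For reference, a minimal Java sketch of the consume-one-message-then-close pattern 
described in the steps above, using the Apache Qpid JMS client mentioned among the 
tested variations. The broker URLs, credentials, class name and timeout below are 
placeholders for illustration, not values taken from the attached scripts:

   import javax.jms.Connection;
   import javax.jms.Message;
   import javax.jms.MessageConsumer;
   import javax.jms.Session;
   import org.apache.qpid.jms.JmsConnectionFactory;

   public class AlternatingConsumeSketch {

      // Placeholder AMQP URLs for the two cluster nodes.
      static final String SERVER0 = "amqp://server0:5672";
      static final String SERVER1 = "amqp://server1:5672";

      public static void main(String[] args) throws Exception {
         // Alternate between the nodes, consuming a single message per connection,
         // which is the pattern that triggers the duplication described above.
         consumeOne(SERVER1);
         consumeOne(SERVER0);
         consumeOne(SERVER1);
      }

      static void consumeOne(String url) throws Exception {
         JmsConnectionFactory factory = new JmsConnectionFactory(url);
         Connection connection = factory.createConnection("username", "password");
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageConsumer consumer = session.createConsumer(session.createQueue("exampleQueue"));
         Message message = consumer.receive(5000);   // wait up to 5 seconds for one message
         System.out.println("received: " + message);
         connection.close();                         // closing the connection also closes the session
      }
   }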



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352682&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352682
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:57
Start Date: 03/Dec/19 14:57
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561204753
 
 
  @gaohoward I already merged locally before you fixed the import, and I am 
merging after I run some tests. Leave it with me.
 



Issue Time Tracking
---

Worklog Id: (was: 352682)
Time Spent: 1h 40m  (was: 1.5h)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352680&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352680
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:56
Start Date: 03/Dec/19 14:56
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #2906: 
ARTEMIS-2560 Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#discussion_r353226152
 
 

 ##
 File path: 
tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/cluster/distribution/AMQPMessageRedistributionTest.java
 ##
 @@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.activemq.artemis.tests.integration.cluster.distribution;
+
+import org.apache.activemq.artemis.api.core.RoutingType;
+import org.apache.activemq.artemis.core.server.cluster.impl.MessageLoadBalancingType;
+import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
+import org.apache.activemq.artemis.protocol.amqp.broker.ProtonProtocolManagerFactory;
+import org.apache.qpid.jms.JmsConnectionFactory;
+import org.junit.Before;
+import org.junit.Test;
+
+import javax.jms.Connection;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.MessageProducer;
+import javax.jms.Session;
+import javax.jms.TextMessage;
+
+public class AMQPMessageRedistributionTest extends ClusterTestBase {
+
+   final String queue = "exampleQueue";
+   final String broker0 = "amqp://localhost:61616";
+   final String broker1 = "amqp://localhost:61617";
+
+   @Override
+   @Before
+   public void setUp() throws Exception {
+  super.setUp();
+  start();
+   }
+
+   private void start() throws Exception {
+  setupServers();
+
+  setRedistributionDelay(0);
+   }
+
+   protected boolean isNetty() {
+  return true;
+   }
+
+   @Test
+   public void testMessageRedistributionWithoutDupAMQP() throws Exception {
+  setupCluster(MessageLoadBalancingType.ON_DEMAND);
+
+  startServers(0, 1);
+
+  setupSessionFactory(0, isNetty());
+  setupSessionFactory(1, isNetty());
+
+  createQueue(0, queue, queue, null, true, null, null, RoutingType.ANYCAST);
+  createQueue(1, queue, queue, null, true, null, null, RoutingType.ANYCAST);
+
+  waitForBindings(0, queue, 1, 0, true);
+  waitForBindings(1, queue, 1, 0, true);
+
+  waitForBindings(0, queue, 1, 0, false);
+  waitForBindings(1, queue, 1, 0, false);
+
+  final int NUMBER_OF_MESSAGES = 20;
+
+  JmsConnectionFactory factory = new JmsConnectionFactory(broker0);
+  Connection connection = factory.createConnection();
+  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+  MessageProducer producer = session.createProducer(session.createQueue(queue));
+
+  for (int i = 0; i < NUMBER_OF_MESSAGES; i++) {
+ producer.send(session.createTextMessage("hello " + i));
+  }
+  connection.close();
+
+  receiveOnBothNodes(NUMBER_OF_MESSAGES);
+   }
+
+   private void receiveOnBothNodes(int NUMBER_OF_MESSAGES) throws Exception {
+  receiveBroker(broker1, 1);
+  receiveBroker(broker0, 1);
+  receiveBrokerAll(broker1, NUMBER_OF_MESSAGES - 2);
+   }
+
+   private void receiveBrokerAll(String brokerurl, int num) throws Exception {
+  JmsConnectionFactory factory = new JmsConnectionFactory(brokerurl);
+  Connection connection = factory.createConnection();
+  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+  MessageConsumer consumer = session.createConsumer(session.createQueue(queue));
+  connection.start();
+
+  for (int i = 0; i < num; i++) {
+ TextMessage msg = (TextMessage) consumer.receive(5000);
+ assertNotNull(msg);
+  }
+  Message msg = consumer.receive(2000);
 
 Review comment:
   Please... do not receive(2 seconds) and assert for null.
   
   If you expect it to be null, please use receiveNoWait() instead.
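
   A minimal sketch of the suggested assertion; the helper name is illustrative and 
   the JUnit assertNull import is assumed, as in the surrounding test base:

      import javax.jms.Message;
      import javax.jms.MessageConsumer;
      import static org.junit.Assert.assertNull;

      // Instead of blocking for a fixed two seconds with consumer.receive(2000),
      // poll once without waiting when no further message is expected.
      static void assertNoPendingMessage(MessageConsumer consumer) throws Exception {
         Message msg = consumer.receiveNoWait();
         assertNull(msg);
      }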
 
-

[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352681
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:56
Start Date: 03/Dec/19 14:56
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #2906: 
ARTEMIS-2560 Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#discussion_r353226253
 
 

 ##
 File path: 
tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/cluster/distribution/AMQPMessageRedistributionTest.java
 ##
 @@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.activemq.artemis.tests.integration.cluster.distribution;
+
+import org.apache.activemq.artemis.api.core.RoutingType;
+import org.apache.activemq.artemis.core.server.cluster.impl.MessageLoadBalancingType;
+import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
+import org.apache.activemq.artemis.protocol.amqp.broker.ProtonProtocolManagerFactory;
+import org.apache.qpid.jms.JmsConnectionFactory;
+import org.junit.Before;
+import org.junit.Test;
+
+import javax.jms.Connection;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.MessageProducer;
+import javax.jms.Session;
+import javax.jms.TextMessage;
+
+public class AMQPMessageRedistributionTest extends ClusterTestBase {
+
+   final String queue = "exampleQueue";
+   final String broker0 = "amqp://localhost:61616";
+   final String broker1 = "amqp://localhost:61617";
+
+   @Override
+   @Before
+   public void setUp() throws Exception {
+  super.setUp();
+  start();
+   }
+
+   private void start() throws Exception {
+  setupServers();
+
+  setRedistributionDelay(0);
+   }
+
+   protected boolean isNetty() {
+  return true;
+   }
+
+   @Test
+   public void testMessageRedistributionWithoutDupAMQP() throws Exception {
+  setupCluster(MessageLoadBalancingType.ON_DEMAND);
+
+  startServers(0, 1);
+
+  setupSessionFactory(0, isNetty());
+  setupSessionFactory(1, isNetty());
+
+  createQueue(0, queue, queue, null, true, null, null, RoutingType.ANYCAST);
+  createQueue(1, queue, queue, null, true, null, null, RoutingType.ANYCAST);
+
+  waitForBindings(0, queue, 1, 0, true);
+  waitForBindings(1, queue, 1, 0, true);
+
+  waitForBindings(0, queue, 1, 0, false);
+  waitForBindings(1, queue, 1, 0, false);
+
+  final int NUMBER_OF_MESSAGES = 20;
+
+  JmsConnectionFactory factory = new JmsConnectionFactory(broker0);
+  Connection connection = factory.createConnection();
+  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+  MessageProducer producer = session.createProducer(session.createQueue(queue));
+
+  for (int i = 0; i < NUMBER_OF_MESSAGES; i++) {
+ producer.send(session.createTextMessage("hello " + i));
+  }
+  connection.close();
+
+  receiveOnBothNodes(NUMBER_OF_MESSAGES);
+   }
+
+   private void receiveOnBothNodes(int NUMBER_OF_MESSAGES) throws Exception {
+  receiveBroker(broker1, 1);
+  receiveBroker(broker0, 1);
+  receiveBrokerAll(broker1, NUMBER_OF_MESSAGES - 2);
+   }
+
+   private void receiveBrokerAll(String brokerurl, int num) throws Exception {
+  JmsConnectionFactory factory = new JmsConnectionFactory(brokerurl);
+  Connection connection = factory.createConnection();
+  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+  MessageConsumer consumer = session.createConsumer(session.createQueue(queue));
+  connection.start();
+
+  for (int i = 0; i < num; i++) {
+ TextMessage msg = (TextMessage) consumer.receive(5000);
+ assertNotNull(msg);
+  }
+  Message msg = consumer.receive(2000);
 
 Review comment:
   but I already changed it here, no need to amend.
 


[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352672&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352672
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:47
Start Date: 03/Dec/19 14:47
Worklog Time Spent: 10m 
  Work Description: gaohoward commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561200088
 
 
   @clebertsuconic I'll remove that. It was added before I moved the predicate to 
Message. It passed the style check, so I overlooked it.
 



Issue Time Tracking
---

Worklog Id: (was: 352672)
Time Spent: 1h 10m  (was: 1h)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352670
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:45
Start Date: 03/Dec/19 14:45
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561198944
 
 
   @gaohoward you actually don't need that static import at all. Those classes 
are implementing / extending Message, so there is no need for the static import 
(apparently).
 



Issue Time Tracking
---

Worklog Id: (was: 352670)
Time Spent: 1h  (was: 50m)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352669&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352669
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:44
Start Date: 03/Dec/19 14:44
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561198944
 
 
   @gaohoward you actually don't need that static import at all. Those classes 
are implementing / extending Message, so there is no need for the static import 
(apparently).
 



Issue Time Tracking
---

Worklog Id: (was: 352669)
Time Spent: 50m  (was: 40m)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352658&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352658
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:37
Start Date: 03/Dec/19 14:37
Worklog Time Spent: 10m 
  Work Description: gaohoward commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561195869
 
 
   thx @clebertsuconic @franz1981 
 



Issue Time Tracking
---

Worklog Id: (was: 352658)
Time Spent: 40m  (was: 0.5h)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352654&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352654
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:33
Start Date: 03/Dec/19 14:33
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561193994
 
 
   That was a mistake on the import, I will amend while merging it.
 



Issue Time Tracking
---

Worklog Id: (was: 352654)
Time Spent: 0.5h  (was: 20m)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352653
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 14:28
Start Date: 03/Dec/19 14:28
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on issue #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906#issuecomment-561191883
 
 
   AMQP is accessing a CoreMessage property... I don't like it at first glance.
 



Issue Time Tracking
---

Worklog Id: (was: 352653)
Time Spent: 20m  (was: 10m)



[jira] [Work logged] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?focusedWorklogId=352538&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-352538
 ]

ASF GitHub Bot logged work on ARTEMIS-2560:
---

Author: ASF GitHub Bot
Created on: 03/Dec/19 10:23
Start Date: 03/Dec/19 10:23
Worklog Time Spent: 10m 
  Work Description: gaohoward commented on pull request #2906: ARTEMIS-2560 
Duplicate amqp messages over cluster
URL: https://github.com/apache/activemq-artemis/pull/2906
 
 
   When AMQP messages are redistributed from one node to
   another, the internal property of the message is not
   cleaned up, and this causes the message to be routed
   to the same queue more than once, producing duplicate
   messages.
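
   As a rough sketch of the kind of cleanup this description implies (the helper 
   name below is hypothetical and the actual property handling in the PR may 
   differ; Message.HDR_ROUTE_TO_IDS is the broker-internal header that carries 
   target queue IDs for cluster routing):

      import org.apache.activemq.artemis.api.core.Message;

      // Hypothetical helper, for illustration only: strip the broker-internal
      // routing hint before the message is handed to another node, so the
      // receiving node routes it from scratch instead of also acting on the
      // stale header.
      static void stripInternalRoutingHeader(Message message) {
         message.removeProperty(Message.HDR_ROUTE_TO_IDS);
      }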
 



Issue Time Tracking
---

Worklog Id: (was: 352538)
Remaining Estimate: 0h
Time Spent: 10m




--
This message was sent by Atlassian Jira
(v8.3.4#803005)