[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324067#comment-17324067
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615108651



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t *conn, bool close_pending)
     qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, conn_sp);
 
     conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 0, 0);
-    qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, conn->incoming_id, count);
+    qd_log(log, QD_LOG_DEBUG,
+           "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream message with %u bytes",
+           conn->conn_id, conn->incoming_id, conn->instream->delivery_id, length);
+    conn->incoming_started = true;
+
+    // Handle deferment of write side close.
+    sys_mutex_lock(conn->activation_lock);
+    if (conn->read_eos_seen && !conn->raw_closed_write) {
+        // to-raw-conn EOS was seen before the from-raw-conn instream delivery existed.

Review comment:
   If C3110 is on the tcpConnector, then it does not need to wait for the reply-to in order to create the reply message, as that was in the message it already received and read to EOS. If it does not have credit, it should not have granted read buffers and so would not get the READ events until it did have credit.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324064#comment-17324064
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615103944



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t *conn, bool close_pending)
     qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, conn_sp);
 
     conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 0, 0);
-    qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, conn->incoming_id, count);
+    qd_log(log, QD_LOG_DEBUG,
+           "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream message with %u bytes",
+           conn->conn_id, conn->incoming_id, conn->instream->delivery_id, length);
+    conn->incoming_started = true;
+
+    // Handle deferment of write side close.
+    sys_mutex_lock(conn->activation_lock);
+    if (conn->read_eos_seen && !conn->raw_closed_write) {
+        // to-raw-conn EOS was seen before the from-raw-conn instream delivery existed.

Review comment:
   From a trace the other day.
   
   ```
   TCP Adaptor half-closed hang after 1500 connections
   
   on the server side connection [C3110] is tcpConnector connection:
   * connection opens
   * client data 5 bytes is written
   * client connection stream to server is closed (EOS)
   * server connection does WRITE_CLOSE (same as original client does open-write-close to tcpListener)
   * server replies with 21 bytes
   * server caches the 21 bytes since "Waiting for credit before initiating ingress stream message"
     there's no message to put the bytes into yet
   * server application closes after writing the 21 byte reply
   
   * with server connection WRITE_CLOSED and READ_CLOSED the CONNECTION_DISCONNECTED comes along
   * handle_disconnected tears everything down and the cached bytes never get written into stream message
   
   * There's a bug since the server connection should stay up long enough to write the cached bytes to the stream message
   
   TCP_ADAPTOR (debug) [C3110][L6219] (server outgoing) Created link to amqp:/_topo/0/D-1878/temp.RHkTIA2kerAComs
   TCP_ADAPTOR (trace) [C3110] handle_incoming qdr_tcp_deliver for server connection. read_closed:F, flow_enabled:F
   TCP_ADAPTOR (debug) [C3110][L6218] Waiting for credit before initiating ingress stream message
   TCP_ADAPTOR (debug) [C3110] pn_raw_connection_write_buffers wrote 5 bytes
   TCP_ADAPTOR (info) [C3110] EOS
   here the adaptor calls write_close() to server connection when EOS is seen
   TCP_ADAPTOR (debug) [C3110] handle_outgoing calling pn_raw_connection_write_close(). rcv_complete:T, send_complete:T
   TCP_ADAPTOR (debug) [C3110][L6218] qdr_tcp_offer: NOOP
   TCP_ADAPTOR (debug) [C3110][L6218] qdr_tcp_get_credit: NOOP
   TCP_ADAPTOR (debug) [C3110][L6218] (server outgoing) qdr_tcp_second_attach
   TCP_ADAPTOR (debug) [C3110][L6218] qdr_tcp_get_credit: NOOP
   TCP_ADAPTOR (debug) [C3110] PN_RAW_CONNECTION_CLOSED_WRITE
   adaptor connector to server is now write closed
   TCP_ADAPTOR (debug) [C3110] PN_RAW_CONNECTION_WRITTEN pn_raw_connection_take_written_buffers wrote 5 bytes. Total written 5 bytes
   TCP_ADAPTOR (debug) [C3110] pn_raw_connection_take_read_buffers() took 1, freed 0
   TCP_ADAPTOR (debug) [C3110] PN_RAW_CONNECTION_READ Read 21 bytes. Total read 21 bytes
   TCP_ADAPTOR (debug) [C3110] PN_RAW_CONNECTION_READ Read 0 bytes. Total read 21 bytes
   server echos the 5 bytes with a prefix to make it 21 bytes. Then server closes connection.
   TCP_ADAPTOR (debug) [C3110][L6219] PN_RAW_CONNECTION_CLOSED_READ
   TCP_ADAPTOR (trace) [C3110] handle_incoming PNRC_CLOSED_READ for server connection. read_closed:T, flow_enabled:F
   adaptor still has no place to put the 21 bytes
   TCP_ADAPTOR (debug) [C3110][L6218] Waiting for credit before initiating ingress stream message
   raw connection is write- and read-closed. it shuts down and the 21 bytes never went to tcpListener client
   TCP_ADAPTOR (info) [C3110] PN_RAW_CONNECTION_DISCONNECTED
   TCP_ADAPTOR (debug) [C3110][L6218] handle_disconnected - close outstream
   TCP_ADAPTOR (debug) [C3110][L6219] handle_disconnected - detach incoming
   TCP_ADAPTOR (debug) [C3110][L6218] handle_disconnected - detach outgoing
   TCP_ADAPTOR (debug) qdr_tcp_activate: no connection context
   TCP_ADAPTOR (debug) [C3110] qdr_add_tcp_connection_CT 127.0.0.1:9090 (2)
   TCP_ADAPTOR (debug) [C3110] qdr_del_tcp_connection_CT 127.0.0.1:9090 deleted. bytes_in=21, bytes_out=5, opened_time=28, last_in_time=28, last_out_time=28. Connections remaining 1
   ```


[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324060#comment-17324060
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r615101682



##
File path: cpp/src/delivery_test.cpp
##
@@ -0,0 +1,115 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "proton/error_condition.hpp"
+#include "proton/internal/pn_unique_ptr.hpp"
+#include "proton/receiver_options.hpp"
+#include "proton/transport.hpp"
+#include "proton/work_queue.hpp"
+#include "test_bits.hpp"
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+class direct_recv : public proton::messaging_handler {
+  private:
+class listener_ready_handler : public proton::listen_handler {
+void on_open(proton::listener &l) PN_CPP_OVERRIDE {
+std::cout << "listening on " << l.port() << std::endl;
+}
+};
+
+std::string url;
+proton::listener listener;
+listener_ready_handler listen_handler;
+
+  public:
+direct_recv(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+listener = c.listen(url, listen_handler);
+}
+
+void on_message(proton::delivery &d, proton::message &msg) PN_CPP_OVERRIDE {
+
+std::cout << msg.body() << std::endl;
+d.receiver().close();
+d.connection().close();
+listener.stop();
+}
+};
+
+class simple_send : public proton::messaging_handler {
+  private:
+std::string url;
+proton::sender sender;
+
+  public:
+simple_send(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+proton::connection_options co;
+sender = c.open_sender(url, co);
+}
+
+void on_sendable(proton::sender &s) PN_CPP_OVERRIDE {
+proton::message msg;
+std::string m = "testing";
+msg.body(m);
+s.send(msg);
+}
+};
+
+int test_delivery() {
+std::string address("127.0.0.1:5672/examples");
+direct_recv recv(address);
+proton::container(recv).run();
+simple_send send(address);
+proton::container(send).run();
+
+return 0;
+}
+
+int main(int argc, char **argv) {
+int failed = 0;
+RUN_ARGV_TEST(failed, test_delivery());

Review comment:
   You surely noticed that there are two ways of writing tests in Proton cpp. One way uses the Catch 1.x framework, and the other uses these custom macros. You worked with Catch in your first task; here you are using the custom framework. Either is fine to use at this point. Don't rewrite the test to Catch just because of this comment.
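
For contrast, here is a minimal sketch of the two styles being discussed. The custom-macro skeleton mirrors the structure of delivery_test.cpp above (test_bits.hpp, RUN_ARGV_TEST); the Catch fragment and its include path are assumptions for illustration, not code from this PR.

```
// Style A: the custom framework from test_bits.hpp, as in delivery_test.cpp.
#include "test_bits.hpp"

namespace {

int test_something() {
    // ... exercise the API and return 0 on success, as test_delivery() does ...
    return 0;
}

} // namespace

int main(int argc, char **argv) {
    int failed = 0;
    RUN_ARGV_TEST(failed, test_something());
    return failed;
}

// Style B: the same check written against Catch 1.x (assumed header name):
//   #include "catch.hpp"
//   TEST_CASE("something") { CHECK(true); }
```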




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]




[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324058#comment-17324058
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r615100630



##
File path: cpp/src/delivery_test.cpp
##
@@ -0,0 +1,115 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "proton/error_condition.hpp"
+#include "proton/internal/pn_unique_ptr.hpp"
+#include "proton/receiver_options.hpp"
+#include "proton/transport.hpp"
+#include "proton/work_queue.hpp"
+#include "test_bits.hpp"
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+class direct_recv : public proton::messaging_handler {
+  private:
+class listener_ready_handler : public proton::listen_handler {
+void on_open(proton::listener &l) PN_CPP_OVERRIDE {
+std::cout << "listening on " << l.port() << std::endl;
+}
+};
+
+std::string url;
+proton::listener listener;
+listener_ready_handler listen_handler;
+
+  public:
+direct_recv(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+listener = c.listen(url, listen_handler);
+}
+
+void on_message(proton::delivery &d, proton::message &msg) PN_CPP_OVERRIDE {
+
+std::cout << msg.body() << std::endl;
+d.receiver().close();
+d.connection().close();
+listener.stop();
+}
+};
+
+class simple_send : public proton::messaging_handler {
+  private:
+std::string url;
+proton::sender sender;
+
+  public:
+simple_send(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+proton::connection_options co;
+sender = c.open_sender(url, co);
+}
+
+void on_sendable(proton::sender &s) PN_CPP_OVERRIDE {
+proton::message msg;
+std::string m = "testing";
+msg.body(m);
+s.send(msg);
+}
+};
+
+int test_delivery() {
+std::string address("127.0.0.1:5672/examples");

Review comment:
   Hardcoded port; we'd like to allocate a random free port to avoid port clashes. But that is something for later. For now, just focus on getting the test to pass; you can improve it later.
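
As a rough illustration of the suggestion, a sketch of letting the OS pick the port is below. It assumes that listening on port 0 binds an ephemeral port and that the test then reads the real port back through the listen_handler, using the same proton::listener::port() call the test already uses; none of this is code from the PR.

```
// Sketch: listen on port 0 and report the port that was actually bound.
#include <iostream>
#include <string>

#include "proton/container.hpp"
#include "proton/listen_handler.hpp"
#include "proton/listener.hpp"

class ephemeral_port_handler : public proton::listen_handler {
    void on_open(proton::listener &l) PN_CPP_OVERRIDE {
        // l.port() reports the port the listener really bound; the test could
        // store it and build the sender's address from it instead of 5672.
        std::cout << "listening on " << l.port() << std::endl;
    }
};

// Hypothetical use inside on_container_start():
//   listener = c.listen("127.0.0.1:0/examples", listen_handler);
```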




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324057#comment-17324057
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r615100296



##
File path: cpp/src/delivery_test.cpp
##
@@ -0,0 +1,115 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "proton/error_condition.hpp"
+#include "proton/internal/pn_unique_ptr.hpp"
+#include "proton/receiver_options.hpp"
+#include "proton/transport.hpp"
+#include "proton/work_queue.hpp"
+#include "test_bits.hpp"
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+class direct_recv : public proton::messaging_handler {
+  private:
+class listener_ready_handler : public proton::listen_handler {
+void on_open(proton::listener &l) PN_CPP_OVERRIDE {
+std::cout << "listening on " << l.port() << std::endl;
+}
+};
+
+std::string url;
+proton::listener listener;
+listener_ready_handler listen_handler;
+
+  public:
+direct_recv(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+listener = c.listen(url, listen_handler);
+}
+
+void on_message(proton::delivery &d, proton::message &msg) PN_CPP_OVERRIDE {
+
+std::cout << msg.body() << std::endl;
+d.receiver().close();
+d.connection().close();
+listener.stop();
+}
+};
+
+class simple_send : public proton::messaging_handler {
+  private:
+std::string url;
+proton::sender sender;
+
+  public:
+simple_send(const std::string &s) : url(s) {}
+
+void on_container_start(proton::container &c) PN_CPP_OVERRIDE {
+proton::connection_options co;
+sender = c.open_sender(url, co);
+}
+
+void on_sendable(proton::sender &s) PN_CPP_OVERRIDE {
+proton::message msg;
+std::string m = "testing";
+msg.body(m);
+s.send(msg);
+}
+};
+
+int test_delivery() {
+std::string address("127.0.0.1:5672/examples");
+direct_recv recv(address);
+proton::container(recv).run();

Review comment:
   This call to `run()` blocks. The receiver waits here for a message and never runs the lines below that would actually send the message. I can think of two solutions: either start the receiver in a new thread, or use a single proton::container that opens both a sender and a receiver and sends a message to itself.
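
A minimal sketch of the first suggestion follows, assuming C++11 std::thread is acceptable here and reusing the direct_recv and simple_send classes from this test; a real test would also need to wait until the listener is actually open before starting the sender. This is an illustration, not code from the PR.

```
#include <string>
#include <thread>

#include "proton/container.hpp"

int test_delivery_threaded() {
    std::string address("127.0.0.1:5672/examples");

    // Run the receiving container on its own thread so its run() can block
    // without preventing the sender below from ever starting.
    direct_recv recv(address);
    std::thread receiver([&recv]() { proton::container(recv).run(); });

    // The sending container runs on this thread; direct_recv closes its
    // listener after on_message, so both run() calls eventually return.
    simple_send send(address);
    proton::container(send).run();

    receiver.join();
    return 0;
}
```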




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 




[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324056#comment-17324056
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r615099019



##
File path: cpp/src/delivery.cpp
##
@@ -40,6 +42,7 @@ namespace proton {
 
 delivery::delivery(pn_delivery_t* d): transfer(make_wrapper(d)) {}
 receiver delivery::receiver() const { return make_wrapper(pn_delivery_link(pn_object())); }
+tag delivery::tag() const { return make_wrapper(pn_delivery_tag(pn_object())); }

Review comment:
   The `send()` method makes sense to me. If the API is good enough for Python, it should make some sense for C++ too ;P
   
   I'll add a few notes to the code. It is not something to be addressed right away. Those are just improvements I thought of. I agree that writing a test should be the next thing to do.
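
For illustration only, here is a sketch of how a handler might read the new accessor, assuming the wrapper returned by delivery::tag() can be held as (or converted to) proton::binary; the PR is still WIP, so the final type and name may differ.

```
#include <iostream>

#include "proton/binary.hpp"
#include "proton/delivery.hpp"
#include "proton/message.hpp"
#include "proton/messaging_handler.hpp"

class tag_printer : public proton::messaging_handler {
    void on_message(proton::delivery &d, proton::message &) PN_CPP_OVERRIDE {
        proton::binary t = d.tag();   // assumed conversion; see note above
        std::cout << "delivery tag carries " << t.size() << " bytes" << std::endl;
    }
};
```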




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324053#comment-17324053
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821532591


   > I am still working on fixing tests.
   
   Sure. My point was that when you have a CI available, like here, it's best to first get the PR compiling. That way you get some feedback that the PR is OK on some level (that it at least compiles) and avoid some unpleasant surprises later as you continue working.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324047#comment-17324047
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615089829



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t *conn, bool close_pending)
     qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, conn_sp);
 
     conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 0, 0);
-    qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, conn->incoming_id, count);
+    qd_log(log, QD_LOG_DEBUG,
+           "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream message with %u bytes",
+           conn->conn_id, conn->incoming_id, conn->instream->delivery_id, length);
+    conn->incoming_started = true;
+
+    // Handle deferment of write side close.
+    sys_mutex_lock(conn->activation_lock);
+    if (conn->read_eos_seen && !conn->raw_closed_write) {
+        // to-raw-conn EOS was seen before the from-raw-conn instream delivery existed.

Review comment:
   That sounds like the problem is that we handle CLOSED_WRITE by closing 
the whole connection, rather than simply relaying what the two endpoints 
themselves do.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Created] (DISPATCH-2059) Support running router under rr during test execution

2021-04-16 Thread Jira
Jiri Daněk created DISPATCH-2059:


 Summary: Support running router under rr during test execution
 Key: DISPATCH-2059
 URL: https://issues.apache.org/jira/browse/DISPATCH-2059
 Project: Qpid Dispatch
  Issue Type: Wish
  Components: Tests
Affects Versions: 1.15.0
Reporter: Jiri Daněk


Dispatch has an env variable {{QPID_DISPATCH_RUNNER}} which is (according to a comment) intended to be used for running tests under valgrind. That comment is outdated, because memory checking is currently solved in a different way, in {{RuntimeChecks.cmake}}. One tool that it would make sense to use to wrap dispatch is rr, the record-replay debugger from Mozilla (https://rr-project.org/).

I've previously tried rr with (very) limited success in DISPATCH-782.

[~aconway] considered it while working on DISPATCH-902 and used it on other 
issues.

There has been an attempt 
https://issues.apache.org/jira/browse/DISPATCH-739?focusedCommentId=15983719&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-15983719
 to use rr which, however, did not survive in the mainline to the present day.

I have two problems with rr:

# Dispatch system-tests send SIGTERM to the subprocess itself, which is rr. What is necessary is to kill its children instead; killing rr causes abrupt termination of the recording. When I issue ^C to a {{rr record qdrouterd -c ...}} in the terminal, that signal goes correctly to the child. I am not sure what's happening there in the test, where the difference comes from. Explicitly killing only the children in the system test does the right thing. Sadly, doing that requires hacks: Python's subprocess does not allow querying children easily. The os module has some ways; psutil is the easiest, but that's a 3rd-party dependency.
# The CLion debugger disconnects during replay when qdrouterd gets SIGTERM, but the router handles that signal and continues running (cleanup).

One awesome feature of rr is that the recording can be replayed many times, backwards and forwards, and all memory addresses stay the same in the recording on every replay. That means one can use {{watch -l *0x000}} breakpoints to watch specific places in memory, and use the {{reverse-cont}} gdb command. (rr emulates the gdb UI; it's a wrapper over gdb, actually, if I understand correctly.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (DISPATCH-1858) HTTP1: occasional failure with leak of qd_deferred_call_t item

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-1858.
-
Resolution: Duplicate

> HTTP1: occasional failure with leak of qd_deferred_call_t item
> --
>
> Key: DISPATCH-1858
> URL: https://issues.apache.org/jira/browse/DISPATCH-1858
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 1.15.0
>Reporter: Ken Giusti
>Assignee: Ken Giusti
>Priority: Major
> Fix For: 1.16.0
>
>
> When running a debug build occasionally the HTTP1 system tests will fail with 
> a leak of a qd_deferred_call_t item.
> {{70: Leak: 2020-11-26 11:15:43.910062 -0500 type: qd_deferred_call_t 
> address: 0xebc310 }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qd_alloc+0x20c)
>  [0x7fcbd9ceac69] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7fcbd9d5b115] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x24)
>  [0x7fcbd9d60568] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qd_link_restart_rx+0x95)
>  [0x7fcbd9d59bd6] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qd_message_stream_data_release+0x43e)
>  [0x7fcbd9d06304] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qdr_http1_out_data_fifo_cleanup+0x155)
>  [0x7fcbd9cda3b7] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(+0x535d0)
>  [0x7fcbd9ce45d0] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qdr_http1_server_conn_cleanup+0x29)
>  [0x7fcbd9ce4759] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(qdr_http1_connection_free+0x221)
>  [0x7fcbd9cda19b] }}
> {{70: 
> /home/kgiusti/work/dispatch/qpid-dispatch/BUILD/src/libqpid-dispatch.so(+0x4ae18)
>  [0x7fcbd9cdbe18] }}
>  
> {{This is concerning as it appears to be related to Q2 relief being triggered 
> during a connection close.  Needs investigation.}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-2048:

Fix Version/s: 1.16.0

> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
> Fix For: 1.16.0
>
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  [0x7f3be6917d6a]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x54621b)
>  [0x7f3be6aa421b]
> 63: 
> 63: Leak: 2021-04-15 03:43:23.586520 + type: qd_deferred_call_t address: 
> 0x61100014b9d0
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> 

[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324035#comment-17324035
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615073245



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   That is exactly my point. The same is true for the other direction: a 
tcpListener CLOSED_WRITE should be propagated to the peer tcpConnector, where 
it will call read_close(). The CLOSED_WRITE at the tcpConnector will be 
propagated to the tcpListener for it to call read_close(). Read and write 
streams are chained in both directions.
   
   I argue that the comment above is actually short-circuiting this (as it 
predates the API that handles read_close()/write_close() separately). Though 
it may work for the ncat case, that short-circuiting seems less clean than 
proper symmetric chaining.
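
A minimal sketch of that symmetric chaining, kept separate from the PR code: 
the pn_raw_connection_* calls are real proton raw-connection APIs, while 
notify_peer_half_close() is a hypothetical hook standing in for whatever 
mechanism carries the half-close between the two adaptor sides.

#include <proton/raw_connection.h>

/* Hypothetical: carry a half-close notification to the peer adaptor side. */
void notify_peer_half_close(pn_raw_connection_t *raw);

/* Our raw connection will deliver no more inbound data (peer sent FIN):
 * ask the other side of the chain to write-close once it has flushed. */
void on_closed_read(pn_raw_connection_t *raw)
{
    notify_peer_half_close(raw);
}

/* The peer adaptor reported its read side closed: close only our write
 * side, so the opposite direction of the TCP stream keeps flowing. */
void on_peer_half_close(pn_raw_connection_t *raw)
{
    pn_raw_connection_write_close(raw);
}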




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615073245



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   That is exactly my point. The same is true for the other direction: a 
tcpListener CLOSED_WRITE should be propagated to the peer tcpConnector, where 
it will call read_close(). The CLOSED_WRITE at the tcpConnector will be 
propagated to the tcpListener for it to call read_close(). Read and write 
streams are chained in both directions.
   
   I argue that the comment above is actually short-circuiting this (as it 
predates the API that handles read_close()/write_close() separately). Though 
it may work for the ncat case, that short-circuiting seems less clean than 
proper symmetric chaining.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (DISPATCH-2034) Rare flaky failure in system_tests_protocol_settings: AssertionError: ' incoming-window=10,' not found in '2021-03-31 00:25:14.317182 +0000 PROTOCOL (trace) [C4]:FRAM

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-2034.
-
Resolution: Fixed

> Rare flaky failure in system_tests_protocol_settings: AssertionError: ' 
> incoming-window=10,' not found in '2021-03-31 00:25:14.317182 + PROTOCOL 
> (trace) [C4]:FRAME: 0 <- @begin(17) [next-outgoing-id=0, incoming-
> ---
>
> Key: DISPATCH-2034
> URL: https://issues.apache.org/jira/browse/DISPATCH-2034
> Project: Qpid Dispatch
>  Issue Type: Test
>Reporter: Jiri Daněk
>Assignee: Ganesh Murthy
>Priority: Minor
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/495082109#L4626
> {noformat}
> 28: ==
> 28: FAIL: test_connector_default 
> (system_tests_protocol_settings.ConnectorSettingsNondefaultTest)
> 28: --
> 28: Traceback (most recent call last):
> 28:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_protocol_settings.py",
>  line 395, in test_connector_default
> 28: self.assertIn(" incoming-window=10,", begin_lines[0])
> 28: AssertionError: ' incoming-window=10,' not found in '2021-03-31 
> 00:25:14.317182 + PROTOCOL (trace) [C4]:FRAME: 0 <- @begin(17) 
> [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647] 
> (/home/travis/build/apache/qpid-dispatch/src/server.c:113)'
> 28: 
> 28: --
> 28: Ran 10 tests in 8.082s
> 28: 
> 28: FAILED (failures=1)
> 28/73 Test #28: system_tests_protocol_settings ***Failed  
>   8.20 sec
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (DISPATCH-2034) Rare flaky failure in system_tests_protocol_settings: AssertionError: ' incoming-window=10,' not found in '2021-03-31 00:25:14.317182 +0000 PROTOCOL (trace) [C4]:FRAM

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy reassigned DISPATCH-2034:
---

Assignee: Ganesh Murthy  (was: Ken Giusti)

> Rare flaky failure in system_tests_protocol_settings: AssertionError: ' 
> incoming-window=10,' not found in '2021-03-31 00:25:14.317182 + PROTOCOL 
> (trace) [C4]:FRAME: 0 <- @begin(17) [next-outgoing-id=0, incoming-
> ---
>
> Key: DISPATCH-2034
> URL: https://issues.apache.org/jira/browse/DISPATCH-2034
> Project: Qpid Dispatch
>  Issue Type: Test
>Reporter: Jiri Daněk
>Assignee: Ganesh Murthy
>Priority: Minor
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/495082109#L4626
> {noformat}
> 28: ==
> 28: FAIL: test_connector_default 
> (system_tests_protocol_settings.ConnectorSettingsNondefaultTest)
> 28: --
> 28: Traceback (most recent call last):
> 28:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_protocol_settings.py",
>  line 395, in test_connector_default
> 28: self.assertIn(" incoming-window=10,", begin_lines[0])
> 28: AssertionError: ' incoming-window=10,' not found in '2021-03-31 
> 00:25:14.317182 + PROTOCOL (trace) [C4]:FRAME: 0 <- @begin(17) 
> [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647] 
> (/home/travis/build/apache/qpid-dispatch/src/server.c:113)'
> 28: 
> 28: --
> 28: Ran 10 tests in 8.082s
> 28: 
> 28: FAILED (failures=1)
> 28/73 Test #28: system_tests_protocol_settings ***Failed  
>   8.20 sec
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2034) Rare flaky failure in system_tests_protocol_settings: AssertionError: ' incoming-window=10,' not found in '2021-03-31 00:25:14.317182 +0000 PROTOCOL (trace) [C4]:FRA

2021-04-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324032#comment-17324032
 ] 

ASF subversion and git services commented on DISPATCH-2034:
---

Commit 98821b495b9c50b9c2f3003959d98aa3d13b51a6 in qpid-dispatch's branch 
refs/heads/main from Ganesh Murthy
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=98821b4 ]

DISPATCH-2034: Search for the existence of a specific incoming window in all 
the lines with a @begin frame, not just the first line. This closes #1131.


> Rare flaky failure in system_tests_protocol_settings: AssertionError: ' 
> incoming-window=10,' not found in '2021-03-31 00:25:14.317182 + PROTOCOL 
> (trace) [C4]:FRAME: 0 <- @begin(17) [next-outgoing-id=0, incoming-
> ---
>
> Key: DISPATCH-2034
> URL: https://issues.apache.org/jira/browse/DISPATCH-2034
> Project: Qpid Dispatch
>  Issue Type: Test
>Reporter: Jiri Daněk
>Assignee: Ken Giusti
>Priority: Minor
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/495082109#L4626
> {noformat}
> 28: ==
> 28: FAIL: test_connector_default 
> (system_tests_protocol_settings.ConnectorSettingsNondefaultTest)
> 28: --
> 28: Traceback (most recent call last):
> 28:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_protocol_settings.py",
>  line 395, in test_connector_default
> 28: self.assertIn(" incoming-window=10,", begin_lines[0])
> 28: AssertionError: ' incoming-window=10,' not found in '2021-03-31 
> 00:25:14.317182 + PROTOCOL (trace) [C4]:FRAME: 0 <- @begin(17) 
> [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647] 
> (/home/travis/build/apache/qpid-dispatch/src/server.c:113)'
> 28: 
> 28: --
> 28: Ran 10 tests in 8.082s
> 28: 
> 28: FAILED (failures=1)
> 28/73 Test #28: system_tests_protocol_settings ***Failed  
>   8.20 sec
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2034) Rare flaky failure in system_tests_protocol_settings: AssertionError: ' incoming-window=10,' not found in '2021-03-31 00:25:14.317182 +0000 PROTOCOL (trace) [C4]:FRA

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324033#comment-17324033
 ] 

ASF GitHub Bot commented on DISPATCH-2034:
--

asfgit closed pull request #1131:
URL: https://github.com/apache/qpid-dispatch/pull/1131


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Rare flaky failure in system_tests_protocol_settings: AssertionError: ' 
> incoming-window=10,' not found in '2021-03-31 00:25:14.317182 + PROTOCOL 
> (trace) [C4]:FRAME: 0 <- @begin(17) [next-outgoing-id=0, incoming-
> ---
>
> Key: DISPATCH-2034
> URL: https://issues.apache.org/jira/browse/DISPATCH-2034
> Project: Qpid Dispatch
>  Issue Type: Test
>Reporter: Jiri Daněk
>Assignee: Ken Giusti
>Priority: Minor
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/495082109#L4626
> {noformat}
> 28: ==
> 28: FAIL: test_connector_default 
> (system_tests_protocol_settings.ConnectorSettingsNondefaultTest)
> 28: --
> 28: Traceback (most recent call last):
> 28:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_protocol_settings.py",
>  line 395, in test_connector_default
> 28: self.assertIn(" incoming-window=10,", begin_lines[0])
> 28: AssertionError: ' incoming-window=10,' not found in '2021-03-31 
> 00:25:14.317182 + PROTOCOL (trace) [C4]:FRAME: 0 <- @begin(17) 
> [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647] 
> (/home/travis/build/apache/qpid-dispatch/src/server.c:113)'
> 28: 
> 28: --
> 28: Ran 10 tests in 8.082s
> 28: 
> 28: FAILED (failures=1)
> 28/73 Test #28: system_tests_protocol_settings ***Failed  
>   8.20 sec
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] asfgit closed pull request #1131: DISPATCH-2034: Search for the existence of a specific incoming window…

2021-04-16 Thread GitBox


asfgit closed pull request #1131:
URL: https://github.com/apache/qpid-dispatch/pull/1131


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324031#comment-17324031
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615069069



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   I don't believe we would need a wake, as we would be creating the 
instream on the I/O thread anyway. All that is needed is to drain the 
available data and append it to the instream.
   
   We can't get a READ event if we don't grant any buffers anyway. Presumably 
the READ_CLOSED event must come after we have read all the data, else how 
would we know there was none left? @astitcher ?
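
A minimal sketch of "drain what has been read and append it to the instream", 
under stated assumptions: pn_raw_connection_take_read_buffers() and 
pn_raw_connection_give_read_buffers() are the real proton calls, while 
append_to_instream() is a hypothetical helper standing in for building 
qd_buffer_t chains onto the streaming message, assumed to copy the bytes 
before the buffers are re-granted.

#include <proton/raw_connection.h>
#include <stddef.h>

void append_to_instream(const char *bytes, size_t size);  /* hypothetical */

void drain_read_buffers(pn_raw_connection_t *raw)
{
    pn_raw_buffer_t bufs[16];
    size_t n;
    /* take back every buffer the transport has finished filling */
    while ((n = pn_raw_connection_take_read_buffers(raw, bufs, 16)) > 0) {
        for (size_t i = 0; i < n; i++) {
            append_to_instream(bufs[i].bytes + bufs[i].offset, bufs[i].size);
        }
        /* hand the drained buffers back so further reads can be granted */
        pn_raw_connection_give_read_buffers(raw, bufs, n);
    }
}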




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and the send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324030#comment-17324030
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#issuecomment-821506067


   Adding a self test for this now


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and the send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615069069



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   I don't believe we would need a wake, as we would be creating the 
instream on the I/O thread anyway. All that is needed is to drain the 
available data and append it to the instream.
   
   We can't get a READ event if we don't grant any buffers anyway. Presumably 
the READ_CLOSED event must come after we have read all the data, else how 
would we know there was none left? @astitcher ?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#issuecomment-821506067


   Adding a self test for this now


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324028#comment-17324028
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615061964



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   Sorry, I still don't get it. The i_dlv_id is not referenced in the 
qdr_link_first_attach call.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615061964



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   Sorry, I still don't get it. The i_dlv_id is not referenced in the 
qdr_link_first_attach call.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324027#comment-17324027
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615061225



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   I disagree. This code patch is trying to fix DISPATCH-1878, where a 
client sends data and closes the connection. The expectation is that the data 
gets sent over the wire to where it is going and the close is propagated after 
that. Any data sent by the server will come back to the client. Then, when 
the server closes, that close will be propagated to the client.
   A tcpListener READ_CLOSED event must be propagated to the peer tcpConnector 
as a write_close() control. Then the connector READ_CLOSED event becomes the 
listener write_close() control.
   Short-circuiting that control flow will lose data.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615061225



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   I disagree. This code patch is trying to fix DISPATCH-1878, where a 
client sends data and closes the connection. The expectation is that the data 
gets sent over the wire to where it is going and the close is propagated after 
that. Any data sent by the server will come back to the client. Then, when 
the server closes, that close will be propagated to the client.
   A tcpListener READ_CLOSED event must be propagated to the peer tcpConnector 
as a write_close() control. Then the connector READ_CLOSED event becomes the 
listener write_close() control.
   Short-circuiting that control flow will lose data.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324026#comment-17324026
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615060537



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   > Clients never accept connections and servers never originate 
connections.
   
   But that is not what the line above implies, is it? If ingress=true, it is a 
connection managed by a TcpListener, i.e., it was an accepted connection.
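
One way to make the name reflect that distinction, as a hypothetical 
alternative to the helper in the diff (qdr_tcp_connection_t and its ingress 
flag come from tcp_adaptor.c; the strings are illustrative only):

static inline const char *qdr_tcp_connection_side_name(const qdr_tcp_connection_t *tc)
{
    /* ingress == true means the connection was accepted by a tcpListener */
    return tc->ingress ? "listener-side (accepted)" : "connector-side (initiated)";
}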




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615060537



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   > Clients never accept connections and servers never originate 
connections.
   
   But that is not what the line above implies, is it? If ingress=true, it is a 
connection managed by a TcpListener, i.e., it was an accepted connection.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324022#comment-17324022
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615055634



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   qdr_link_first_attach conditionally "adjusts" the delivery's identity. 
This code is conditionally memorizing what the initial delivery had before the 
adjustment.
   The log statement giving identical [Cx][Lx][Dx] values before and after was 
pretty lame.
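
A sketch of that intent, based on the hunk above: capture the identifiers 
before qdr_link_first_attach() can adjust them, then log the saved values. 
initial_delivery and conn_id are taken from the diff; delivery_id mirrors 
conn->instream->delivery_id used elsewhere in this file, and the log text 
itself is illustrative.

uint64_t i_conn_id = 0;
uint64_t i_dlv_id  = 0;
if (!!tc->initial_delivery) {
    /* remember the pre-attach identity for the log message below */
    i_conn_id = tc->initial_delivery->conn_id;
    i_dlv_id  = tc->initial_delivery->delivery_id;
}
/* ... qdr_link_first_attach(...) runs here and may re-parent the delivery ... */
qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
       "[C%"PRIu64"][D%"PRIu64"] initial delivery before re-attach",
       i_conn_id, i_dlv_id);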




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and the send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615055634



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   qdr_link_first_attach conditionally "adjusts" the delivery's identity. 
This code is conditionally memorizing what the initial delivery had before the 
adjustment.
   The log statement giving identical [Cx][Lx][Dx] values before and after was 
pretty lame.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324019#comment-17324019
 ] 

ASF GitHub Bot commented on DISPATCH-2048:
--

ganeshmurthy opened a new pull request #1132:
URL: https://github.com/apache/qpid-dispatch/pull/1132


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  [0x7f3be6917d6a]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x54621b)
>  [0x7f3be6aa421b]
> 63: 
> 63: Leak: 2021-04-15 03:43:23.586520 + type: qd_deferred_call_t address: 
> 0x61100014b9d0
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> 

[GitHub] [qpid-dispatch] ganeshmurthy opened a new pull request #1132: DISPATCH-2048: Fix shutdown leak of qd_deferred_call_t

2021-04-16 Thread GitBox


ganeshmurthy opened a new pull request #1132:
URL: https://github.com/apache/qpid-dispatch/pull/1132


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324018#comment-17324018
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615052175



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -630,64 +780,92 @@ static void handle_connection_event(pn_event_t *e, 
qd_server_t *qd_server, void
 }
 }
 case PN_RAW_CONNECTION_CLOSED_READ: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ", conn->conn_id);
-conn->q2_blocked = false;
-handle_incoming_impl(conn, true);
+qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ",
+   conn->conn_id, conn->incoming_id);
 sys_mutex_lock(conn->activation_lock);
+conn->q2_blocked = false;
 conn->raw_closed_read = true;
 sys_mutex_unlock(conn->activation_lock);
-pn_raw_connection_close(conn->pn_raw_conn);
+handle_incoming(conn, "PNRC_CLOSED_READ");
 break;
 }
 case PN_RAW_CONNECTION_CLOSED_WRITE: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_WRITE", conn->conn_id);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"] PN_RAW_CONNECTION_CLOSED_WRITE",
+   conn->conn_id);
 sys_mutex_lock(conn->activation_lock);
 conn->raw_closed_write = true;
 sys_mutex_unlock(conn->activation_lock);
-pn_raw_connection_close(conn->pn_raw_conn);
+if (conn->ingress) {

Review comment:
   I will try removing that conditional.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615052175



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -630,64 +780,92 @@ static void handle_connection_event(pn_event_t *e, 
qd_server_t *qd_server, void
 }
 }
 case PN_RAW_CONNECTION_CLOSED_READ: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ", conn->conn_id);
-conn->q2_blocked = false;
-handle_incoming_impl(conn, true);
+qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ",
+   conn->conn_id, conn->incoming_id);
 sys_mutex_lock(conn->activation_lock);
+conn->q2_blocked = false;
 conn->raw_closed_read = true;
 sys_mutex_unlock(conn->activation_lock);
-pn_raw_connection_close(conn->pn_raw_conn);
+handle_incoming(conn, "PNRC_CLOSED_READ");
 break;
 }
 case PN_RAW_CONNECTION_CLOSED_WRITE: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_WRITE", conn->conn_id);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"] PN_RAW_CONNECTION_CLOSED_WRITE",
+   conn->conn_id);
 sys_mutex_lock(conn->activation_lock);
 conn->raw_closed_write = true;
 sys_mutex_unlock(conn->activation_lock);
-pn_raw_connection_close(conn->pn_raw_conn);
+if (conn->ingress) {

Review comment:
   I will try removing that conditional.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324017#comment-17324017
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615051274



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t 
*conn, bool close_pending)
 qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, 
conn_sp);
 
 conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 
0, 0);
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, 
"[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, 
conn->incoming_id, count);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream 
message with %u bytes",
+   conn->conn_id, conn->incoming_id, conn->instream->delivery_id, 
length);
+conn->incoming_started = true;
+
+// Handle deferment of write side close.
+sys_mutex_lock(conn->activation_lock);
+if (conn->read_eos_seen && !conn->raw_closed_write) {
+// to-raw-conn EOS was seen before the from-raw-conn instream 
delivery existed.

Review comment:
   Because if the instream creation is too slow then, on close, the raw 
connection becomes WRITE_CLOSED; then the server sends the response and closes 
its connection, the raw connection becomes READ_CLOSED and then DISCONNECTED, 
and everything is discarded.
   All this happens while waiting for the instream to come up and get credit.
   By deferring the write-close until the instream comes up, any server 
response has a place to go before the raw connection gets closed.
   On a single-router test this condition would show up after maybe 500 or 3000 
successful tries. It was hard to track down, easy to fix.
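
A sketch of that deferment, using the flags and fields shown in this PR's 
tcp_adaptor.c (read_eos_seen, raw_closed_write, activation_lock, pn_raw_conn); 
the function name and exact locking granularity are illustrative, not the PR 
code itself.

void qdr_tcp_on_instream_created(qdr_tcp_connection_t *conn)
{
    bool close_now = false;
    sys_mutex_lock(conn->activation_lock);
    if (conn->read_eos_seen && !conn->raw_closed_write) {
        /* EOS arrived before the ingress stream existed: perform the
         * deferred write close now that the instream is set up */
        conn->raw_closed_write = true;
        close_now = true;
    }
    sys_mutex_unlock(conn->activation_lock);
    if (close_now) {
        pn_raw_connection_write_close(conn->pn_raw_conn);
    }
}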




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615051274



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t 
*conn, bool close_pending)
 qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, 
conn_sp);
 
 conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 
0, 0);
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, 
"[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, 
conn->incoming_id, count);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream 
message with %u bytes",
+   conn->conn_id, conn->incoming_id, conn->instream->delivery_id, 
length);
+conn->incoming_started = true;
+
+// Handle deferment of write side close.
+sys_mutex_lock(conn->activation_lock);
+if (conn->read_eos_seen && !conn->raw_closed_write) {
+// to-raw-conn EOS was seen before the from-raw-conn instream 
delivery existed.

Review comment:
   Because if the instream creation is too slow and the write side is closed 
right away, the raw connection becomes WRITE_CLOSED, then the server sends the 
response and closes its connection, the raw connection becomes READ_CLOSED and 
then DISCONNECTED, and everything is discarded.
   All this happens while waiting for the instream to come up and get credit.
   By deferring the write-close until the instream comes up, any server 
response has a place to go before the raw connection gets closed.
   On a single-router test this condition would show up after maybe 500 or 3000 
successful tries. It was hard to track down, easy to fix.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324013#comment-17324013
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615047802



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   Ambiguity abounds no matter how it is described. I figure that a client 
originates a connection to a listener and that a connector originates a 
connection to a server. Clients never accept connections and servers never 
originate connections. 
   Also, there was some bad code logic in these descriptions that caused the 
logs to say wrong things. An update is coming shortly.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615047802



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   Ambiguity abounds no matter how it is described. I figure that a client 
originates a connection to a listener and that a connector originates a 
connection to a server. Clients never accept connections and servers never 
originate connections. 
   Also, there was some bad code logic in these descriptions that caused the 
logs to say wrong things. An update is coming shortly.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324009#comment-17324009
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615043535



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   This was on a recommendation from the proton developers: giving buffers to 
the read side is pretty harmless.
   
   Reading from the raw connection before the ingress stream is ready is the 
first of several strategies tried that has worked. Over the wire the client has 
executed open-write-close before the TCP adaptor has even been notified that 
the connection has been accepted. Then follows a READ event which the adaptor 
can't handle since it has no stream or credit. Then follows a READ_CLOSED which 
still can't be handled since there is no stream or credit. Eventually the 
stream comes up and credit arrives, but these events don't arrive in the proper 
context for reading the raw connection. One could schedule a wake, but 
complicated state would be required to signal that on a random wake it's time 
to read from a closed connection.
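
   A sketch of the "park the early reads" approach described above, assuming 
the early_raw_read_bufs field from the diff; the helper name and the buffer 
handling details are illustrative, not the committed code:

static void qdr_tcp_stash_early_reads(qdr_tcp_connection_t *conn)
{
    pn_raw_buffer_t raw_buffers[16];
    size_t          n;
    // Drain whatever proton has already filled, even though the ingress
    // stream (and its credit) does not exist yet.
    while ((n = pn_raw_connection_take_read_buffers(conn->pn_raw_conn,
                                                    raw_buffers, 16)) > 0) {
        for (size_t i = 0; i < n; i++) {
            qd_buffer_t *buf = (qd_buffer_t*) raw_buffers[i].context;
            qd_buffer_insert(buf, raw_buffers[i].size);   // record bytes read
            DEQ_INSERT_TAIL(conn->early_raw_read_bufs, buf);
        }
    }
    // Once handle_incoming() has created the instream and credit is granted,
    // the parked buffers are appended to the message before any further reads.
}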




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ChugR commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


ChugR commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r615043535



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   This was on a recommendation from the proton developers: giving buffers to 
the read side is pretty harmless.
   
   Reading from the raw connection before the ingress stream is ready is the 
first of several strategies tried that has worked. Over the wire the client has 
executed open-write-close before the TCP adaptor has even been notified that 
the connection has been accepted. Then follows a READ event which the adaptor 
can't handle since it has no stream or credit. Then follows a READ_CLOSED which 
still can't be handled since there is no stream or credit. Eventually the 
stream comes up and credit arrives, but these events don't arrive in the proper 
context for reading the raw connection. One could schedule a wake, but 
complicated state would be required to signal that on a random wake it's time 
to read from a closed connection.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2034) Rare flaky failure in system_tests_protocol_settings: AssertionError: ' incoming-window=10,' not found in '2021-03-31 00:25:14.317182 +0000 PROTOCOL (trace) [C4]:FRA

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323951#comment-17323951
 ] 

ASF GitHub Bot commented on DISPATCH-2034:
--

ganeshmurthy opened a new pull request #1131:
URL: https://github.com/apache/qpid-dispatch/pull/1131


   … in all the lines with a @begin frame, not just the first line


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Rare flaky failure in system_tests_protocol_settings: AssertionError: ' 
> incoming-window=10,' not found in '2021-03-31 00:25:14.317182 + PROTOCOL 
> (trace) [C4]:FRAME: 0 <- @begin(17) [next-outgoing-id=0, incoming-
> ---
>
> Key: DISPATCH-2034
> URL: https://issues.apache.org/jira/browse/DISPATCH-2034
> Project: Qpid Dispatch
>  Issue Type: Test
>Reporter: Jiri Daněk
>Assignee: Ken Giusti
>Priority: Minor
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/495082109#L4626
> {noformat}
> 28: ==
> 28: FAIL: test_connector_default 
> (system_tests_protocol_settings.ConnectorSettingsNondefaultTest)
> 28: --
> 28: Traceback (most recent call last):
> 28:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_protocol_settings.py",
>  line 395, in test_connector_default
> 28: self.assertIn(" incoming-window=10,", begin_lines[0])
> 28: AssertionError: ' incoming-window=10,' not found in '2021-03-31 
> 00:25:14.317182 + PROTOCOL (trace) [C4]:FRAME: 0 <- @begin(17) 
> [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647] 
> (/home/travis/build/apache/qpid-dispatch/src/server.c:113)'
> 28: 
> 28: --
> 28: Ran 10 tests in 8.082s
> 28: 
> 28: FAILED (failures=1)
> 28/73 Test #28: system_tests_protocol_settings ***Failed  
>   8.20 sec
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ganeshmurthy opened a new pull request #1131: DISPATCH-2034: Search for the existence of a specific incoming window…

2021-04-16 Thread GitBox


ganeshmurthy opened a new pull request #1131:
URL: https://github.com/apache/qpid-dispatch/pull/1131


   … in all the lines with a @begin frame, not just the first line


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323949#comment-17323949
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r614998347



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t 
*conn, bool close_pending)
 qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, 
conn_sp);
 
 conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 
0, 0);
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, 
"[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, 
conn->incoming_id, count);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream 
message with %u bytes",
+   conn->conn_id, conn->incoming_id, conn->instream->delivery_id, 
length);
+conn->incoming_started = true;
+
+// Handle deferment of write side close.
+sys_mutex_lock(conn->activation_lock);
+if (conn->read_eos_seen && !conn->raw_closed_write) {
+// to-raw-conn EOS was seen before the from-raw-conn instream 
delivery existed.

Review comment:
   Why do we need to wait for the instream delivery before calling 
write_close()?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -
>
> Key: DISPATCH-1878
> URL: https://issues.apache.org/jira/browse/DISPATCH-1878
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Protocol Adaptors
>Affects Versions: 2.0.0
>Reporter: Fernando Giorgetti
>Assignee: Charles E. Rolke
>Priority: Major
> Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port .
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port ), if I run "nc 127.0.0.1 " and then send the data.
> But if I run it as:  "echo abcd | nc 127.0.0.1 " I am not seeing a 
> response, but if 
> I use port 9090 (original port) instead of  (router tcpListener), then I 
> get the
> response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r614998347



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -288,18 +338,87 @@ static int handle_incoming_impl(qdr_tcp_connection_t 
*conn, bool close_pending)
 qd_message_set_q2_unblocked_handler(msg, qdr_tcp_q2_unblocked_handler, 
conn_sp);
 
 conn->instream = qdr_link_deliver(conn->incoming, msg, 0, false, 0, 0, 
0, 0);
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, 
"[C%"PRIu64"][L%"PRIu64"] Initiating message with %i bytes", conn->conn_id, 
conn->incoming_id, count);
+qd_log(log, QD_LOG_DEBUG,
+   "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initiating ingress stream 
message with %u bytes",
+   conn->conn_id, conn->incoming_id, conn->instream->delivery_id, 
length);
+conn->incoming_started = true;
+
+// Handle deferment of write side close.
+sys_mutex_lock(conn->activation_lock);
+if (conn->read_eos_seen && !conn->raw_closed_write) {
+// to-raw-conn EOS was seen before the from-raw-conn instream 
delivery existed.

Review comment:
   Why do we need to wait for the instream delivery before calling 
write_close()?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1878) Client app not getting a response through tcpListener

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323942#comment-17323942
 ] 

ASF GitHub Bot commented on DISPATCH-1878:
--

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r614972470



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   I'm not too keen on granting buffers before we are ready for them. It 
doesn't seem necessary to me and it complicates reasoning about the code.

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   I find the use of client and server confusing here. What you call the 
client is actually the server side in tcp terms. I think ingress/egress is less 
ambiguous (perhaps qualified by bridge, e.g. 'ingress bridge') and is already 
used elsewhere even with this patch.
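
   A minimal sketch of the naming suggested here, replacing "client"/"server" 
with "ingress bridge"/"egress bridge"; illustrative only, not the patch as 
committed:

static inline const char *qdr_tcp_connection_role_name(const qdr_tcp_connection_t *tc)
{
    assert(tc);
    // ingress == connection accepted via a tcpListener,
    // egress  == connection initiated via a tcpConnector
    return tc->ingress ? "ingress bridge" : "egress bridge";
}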

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   Minor point, but why are these initialised in a separate conditional 
block up here, when they seem only to be used in the subsequent block for the 
identical condition and are not used in the qdr_link_first_attach() call that 
is the only thing separating these blocks? It would be easier to read if we 
declared these just before use, I think.
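
   A sketch of that restructuring, with the declarations moved into the one 
block that uses them; the link_id and delivery_id field names are assumed by 
analogy with conn_id and may not match the source exactly:

// Sketch only, not the committed code.
if (!!tc->initial_delivery) {
    uint64_t i_conn_id = tc->initial_delivery->conn_id;
    uint64_t i_link_id = tc->initial_delivery->link_id;      // assumed field
    uint64_t i_dlv_id  = tc->initial_delivery->delivery_id;  // assumed field
    qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
           "[C%"PRIu64"][L%"PRIu64"][D%"PRIu64"] Initial delivery passed to egress connection",
           i_conn_id, i_link_id, i_dlv_id);
}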

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   I think now that we have separate close calls for read and write sides, 
we should close for read if the instream is settled and close for write when 
the outstream is settled.
   
   If we then settle and complete the instream on CLOSED_READ and settle the 
outstream on CLOSED_WRITE, we are then conveying more accurately what each tcp 
endpoint does to its peer. If the instream has not been started when we get 
CLOSED_READ on ingress, we can close the connection entirely (on egress we can 
just send an empty reply). 
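
   A sketch of the per-stream handling proposed here, using proton's raw 
connection half-close calls; this is the reviewer's suggestion, not what the 
patch currently does:

static void qdr_tcp_on_stream_settled(qdr_tcp_connection_t *tc, qdr_delivery_t *dlv)
{
    if (dlv == tc->instream) {
        // The consumer of what we read from the socket is done: stop reading.
        pn_raw_connection_read_close(tc->pn_raw_conn);
    } else if (dlv == tc->outstream) {
        // The producer of what we write to the socket is done: stop writing.
        pn_raw_connection_write_close(tc->pn_raw_conn);
    }
}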

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -630,64 +780,92 @@ static void handle_connection_event(pn_event_t *e, 
qd_server_t *qd_server, void
 }
 }
 case PN_RAW_CONNECTION_CLOSED_READ: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ", conn->conn_id);
-conn->q2_blocked = false;
-handle_incoming_impl(conn, true);
+qd_log(log, QD_LOG_DEBUG, 

[GitHub] [qpid-dispatch] grs commented on a change in pull request #1129: DISPATCH-1878: Handle half-closed TCP connections - DO NOT MERGE

2021-04-16 Thread GitBox


grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r614972470



##
File path: src/adaptors/tcp_adaptor.c
##
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
 qdr_delivery_t   *outstream;
 bool  ingress;
 bool  flow_enabled;
+bool  incoming_started;
 bool  egress_dispatcher;
 bool  connector_closed;//only used if 
egress_dispatcher=true
 bool  in_list; // This connection is in the 
adaptor's connections list
-bool  raw_closed_read;
-bool  raw_closed_write;
+bool  raw_closed_read;   // proton event seen
+bool  raw_closed_write;  // proton event seen or 
write_close called
+bool  raw_read_shutdown; // stream closed
+bool  read_eos_seen;
+qd_buffer_list_t  early_raw_read_bufs; // read from raw conn before 
ingress stream ready

Review comment:
   I'm not too keen on granting buffers before we are ready for them. It 
doesn't seem necessary to me and it complicates reasoning about the code.

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const 
qdr_tcp_connection_t *conn)
 return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+assert(link);
+return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const 
qdr_tcp_connection_t *tc)
+{
+assert(tc);
+return tc->ingress ? "client" : "server";

Review comment:
   I find the use of client and server confusing here. What you call the 
client is actually the server side in tcp terms. I think ingress/egress is less 
ambiguous (perhaps qualified by bridge, e.g. 'ingress bridge') and is already 
used elsewhere even with this patch.

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -782,6 +965,14 @@ static void 
qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
 // This attach passes the ownership of the delivery from the core-side 
connection and link
 // to the adaptor-side outgoing connection and link.
+uint64_t i_conn_id = 0;
+uint64_t i_link_id = 0;
+uint64_t i_dlv_id = 0;
+if (!!tc->initial_delivery) {
+i_conn_id = tc->initial_delivery->conn_id;

Review comment:
   Minor point, but why are these initialised in a separate conditional 
block up here, when they seem only to be used in the subsequent block for the 
identical condition and are not used in the qdr_link_first_attach() call that 
is the only thing separating these blocks? It would be easier to read if we 
declared these just before use, I think.

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, 
qdr_delivery_t *dlv, uint64_t
 void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
 if (link_context) {
 qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" 
qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+   DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
 //
 // If one of the streaming deliveries is ever settled, the connection 
must be torn down.

Review comment:
   I think now that we have separate close calls for read and write sides, 
we should close for read if the instream is settled and close for write when 
the outstream is settled.
   
   If we then settle and complete the instream on CLOSED_READ and settle the 
outstream on CLOSED_WRITE, we are then conveying more accurately what each tcp 
endpoint does to its peer. If the instream has not been started when we get 
CLOSED_READ on ingress, we can close the connection entirely (on egress we can 
just send an empty reply). 

##
File path: src/adaptors/tcp_adaptor.c
##
@@ -630,64 +780,92 @@ static void handle_connection_event(pn_event_t *e, 
qd_server_t *qd_server, void
 }
 }
 case PN_RAW_CONNECTION_CLOSED_READ: {
-qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ", conn->conn_id);
-conn->q2_blocked = false;
-handle_incoming_impl(conn, true);
+qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] 
PN_RAW_CONNECTION_CLOSED_READ",
+   conn->conn_id, conn->incoming_id);
 sys_mutex_lock(conn->activation_lock);
+conn->q2_blocked = false;
 conn->raw_closed_read = true;
 

[jira] [Updated] (DISPATCH-2046) Panic in handle due to deleted connector.

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-2046:

Fix Version/s: 1.16.0

> Panic in handle due to deleted connector.
> -
>
> Key: DISPATCH-2046
> URL: https://issues.apache.org/jira/browse/DISPATCH-2046
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 1.15.0
>Reporter: Alex Ward
>Priority: Major
> Fix For: 1.16.0
>
>
> I am seeing a segv at line 1063 due to ctx->connector being 0 in 
> server.c:handle(). 
> {code:java}
> (ugdb-x86_64-7.11-261) p ctx
> $6 = (qd_connection_t *) 0x802e55890
> (ugdb-x86_64-7.11-261) p *ctx
> $7 = {prev = 0x8023df490, next = 0x802e7a890, name = 0x0, server = 
> 0x80220d0c0, opened = false, closed = false, closed_locally = false, enqueued 
> = 0, timer = 0x0, pn_conn = 0x8023ec050, pn_sessions = {0x0  times>}, ssl = 0x0, listener = 0x0, connector = 0x0, connector_lock = 
> 0x8023e6600, context = 0x0, user_context = 0x0, link_context = 0x0, 
> connection_id = 12, user_id = 0x0, free_user_id = false, policy_settings = 
> 0x0, n_sessions = 0, n_senders = 0, n_receivers = 0, open_container = 0x0, 
> deferred_calls = {head = 0x0, tail = 0x0, scratch = 0x0, size = 0}, 
> deferred_call_lock = 0x8023e65c0, policy_counted = false, role = 0x801a17660 
> "route-container", free_link_session_list = {head = 0x0, tail = 0x0, scratch 
> = 0x0, size = 0}, strip_annotations_in = false, strip_annotations_out = 
> false, wake = 0x8005d6e20 , rhost = '\000'  times>, rhost_port = '\000' }
> {code}
> As the comment at the start of handle() states, there is only one event being 
> handled at a time, but it doesn’t look like there is anything to stop someone 
> clearing ctx->connector between lines 1055 and 1063. The python code seems to 
> be coming in and deleting the connector while handle() is using it.
> [https://github.com/apache/qpid-dispatch/blob/main/src/server.c]
> {code:java}
>  989 /* Events involving a connection or listener are serialized by the 
> proactor so
>  990  * only one event per connection / listener will be processed at a time. 
>  
>  991  */  
>  992 static bool handle(qd_server_t *qd_server, pn_event_t *e, 
> pn_connection_t *pn_conn, qd_connection_t *ctx) 
>  993 {
>  
>  994 if (pn_conn && qdr_is_authentication_service_connection(pn_conn)) {  
>  
>  995 qdr_handle_authentication_service_connection_event(e);   
>  
>  996 return true; 
>  
>  997 }
>  
>  998  
>   
>  999 switch (pn_event_type(e)) { 
>  …
> 1051 case PN_TRANSPORT_ERROR: 
>  
> 1052 {
>  
> 1053 pn_transport_t *transport = pn_event_transport(e);   
>  
> 1054 pn_condition_t* condition = transport ? 
> pn_transport_condition(transport) : NULL; 
> 1055 if (ctx && ctx->connector) { /* Outgoing connection */   
>  
> 1056 qd_increment_conn_index(ctx);
>  
> 1057 const qd_server_config_t *config = &ctx->connector->config;   
> 1058 ctx->connector->state = CXTR_STATE_FAILED;   
>  
> 1059 char conn_msg[300];  
>  
> 1060 if (condition  && pn_condition_is_set(condition)) {  
>  
> 1061 qd_format_string(conn_msg, 300, "[C%"PRIu64"] 
> Connection to %s failed: %s %s", ctx->connection_id, config->host_port,   
> 
> 1062 pn_condition_get_name(condition), 
> pn_condition_get_description(condition));   
> 1063 strcpy(ctx->connector->conn_msg, conn_msg); 
> {code}
> Another thread is at line 1063 in qd_connection_manager_delete_connector so 
> it looks like this thread just 
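
One possible way to close this window (a sketch only, not necessarily the 
actual fix) would be to re-check ctx->connector under the connector_lock that 
is already present in qd_connection_t, with the deletion path taking the same 
lock before clearing ctx->connector:

{code:java}
case PN_TRANSPORT_ERROR:
{
    pn_transport_t *transport = pn_event_transport(e);
    pn_condition_t *condition = transport ? pn_transport_condition(transport) : NULL;
    if (ctx) {
        sys_mutex_lock(ctx->connector_lock);
        qd_connector_t *connector = ctx->connector;   /* re-read under the lock */
        if (connector) {                              /* outgoing connection */
            connector->state = CXTR_STATE_FAILED;
            if (condition && pn_condition_is_set(condition)) {
                char conn_msg[300];
                qd_format_string(conn_msg, 300, "[C%"PRIu64"] Connection to %s failed: %s %s",
                                 ctx->connection_id, connector->config.host_port,
                                 pn_condition_get_name(condition),
                                 pn_condition_get_description(condition));
                strcpy(connector->conn_msg, conn_msg);
            }
        }
        sys_mutex_unlock(ctx->connector_lock);
    }
    break;
}
{code}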

[jira] [Commented] (DISPATCH-2046) Panic in handle due to deleted connector.

2021-04-16 Thread Alex Ward (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323915#comment-17323915
 ] 

Alex Ward commented on DISPATCH-2046:
-

Thanks Ganesh, I'll try that on our code too, let me know how your testing goes.

> Panic in handle due to deleted connector.
> -
>
> Key: DISPATCH-2046
> URL: https://issues.apache.org/jira/browse/DISPATCH-2046
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 1.15.0
>Reporter: Alex Ward
>Priority: Major
>
> I am seeing a segv at line 1063 due to ctx->connector being 0 in 
> server.c:handle(). 
> {code:java}
> (ugdb-x86_64-7.11-261) p ctx
> $6 = (qd_connection_t *) 0x802e55890
> (ugdb-x86_64-7.11-261) p *ctx
> $7 = {prev = 0x8023df490, next = 0x802e7a890, name = 0x0, server = 
> 0x80220d0c0, opened = false, closed = false, closed_locally = false, enqueued 
> = 0, timer = 0x0, pn_conn = 0x8023ec050, pn_sessions = {0x0  times>}, ssl = 0x0, listener = 0x0, connector = 0x0, connector_lock = 
> 0x8023e6600, context = 0x0, user_context = 0x0, link_context = 0x0, 
> connection_id = 12, user_id = 0x0, free_user_id = false, policy_settings = 
> 0x0, n_sessions = 0, n_senders = 0, n_receivers = 0, open_container = 0x0, 
> deferred_calls = {head = 0x0, tail = 0x0, scratch = 0x0, size = 0}, 
> deferred_call_lock = 0x8023e65c0, policy_counted = false, role = 0x801a17660 
> "route-container", free_link_session_list = {head = 0x0, tail = 0x0, scratch 
> = 0x0, size = 0}, strip_annotations_in = false, strip_annotations_out = 
> false, wake = 0x8005d6e20 , rhost = '\000'  times>, rhost_port = '\000' }
> {code}
> As the comment at the start of handle() states, there is only one event being 
> handled at a time, but it doesn’t look like there is anything to stop someone 
> clearing ctx->connector between lines 1055 and 1063. The python code seems to 
> be coming in and deleting the connector while handle() is using it.
> [https://github.com/apache/qpid-dispatch/blob/main/src/server.c]
> {code:java}
>  989 /* Events involving a connection or listener are serialized by the 
> proactor so
>  990  * only one event per connection / listener will be processed at a time. 
>  
>  991  */  
>  992 static bool handle(qd_server_t *qd_server, pn_event_t *e, 
> pn_connection_t *pn_conn, qd_connection_t *ctx) 
>  993 {
>  
>  994 if (pn_conn && qdr_is_authentication_service_connection(pn_conn)) {  
>  
>  995 qdr_handle_authentication_service_connection_event(e);   
>  
>  996 return true; 
>  
>  997 }
>  
>  998  
>   
>  999 switch (pn_event_type(e)) { 
>  …
> 1051 case PN_TRANSPORT_ERROR: 
>  
> 1052 {
>  
> 1053 pn_transport_t *transport = pn_event_transport(e);   
>  
> 1054 pn_condition_t* condition = transport ? 
> pn_transport_condition(transport) : NULL; 
> 1055 if (ctx && ctx->connector) { /* Outgoing connection */   
>  
> 1056 qd_increment_conn_index(ctx);
>  
> 1057 const qd_server_config_t *config = &ctx->connector->config;   
> 1058 ctx->connector->state = CXTR_STATE_FAILED;   
>  
> 1059 char conn_msg[300];  
>  
> 1060 if (condition  && pn_condition_is_set(condition)) {  
>  
> 1061 qd_format_string(conn_msg, 300, "[C%"PRIu64"] 
> Connection to %s failed: %s %s", ctx->connection_id, config->host_port,   
> 
> 1062 pn_condition_get_name(condition), 
> pn_condition_get_description(condition));   
> 1063 strcpy(ctx->connector->conn_msg, conn_msg); 
> {code}
> Another thread is at line 1063 in 

[jira] [Updated] (DISPATCH-2039) Memory pool should be manually poisoned so that ASAN works with it

2021-04-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiri Daněk updated DISPATCH-2039:
-
Description: 
From https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning

bq. A user may poison/unpoison a region of memory manually. Use this feature 
with caution. In many cases good old malloc+free is a better way to find heap 
bugs than using custom allocators with manual poisoning.

As far as I can tell, it is nowadays not possible to turn off the pool 
allocation and use malloc/free, because the pool mechanism also implements the 
weak pointers and ref counters. That means giving hints to ASAN is the only way 
to discover memory bugs of the type (if what Chuck speculated is true) of 
DISPATCH-2032.

bq. If you have a custom allocation arena, the typical workflow would be to 
poison the entire arena first, and then unpoison allocated chunks of memory 
leaving poisoned redzones between them. The allocated chunks should start with 
8-aligned addresses.

Alternatively, the current memory debugging machinery for the pool could take 
care of it on its own... but using ASAN seems sensible to me.

http://blog.hostilefork.com/poison-memory-without-asan/

h3. Nice to have extra features (which won't be implemented at first)

* redzones, there should be chunks of poison on either end of a returned 
memory, to detect invalid accesses out of bounds; this means deliberate waste 
of memory (I am thinking 3x increase, to make implementation easy)
* quarantine, returned chunks should be kept in the pool for some time before 
they are returned as new allocations, to catch use-after-free; this policy goes 
against performance considerations

h3. Open issues

Is it necessary to lock around the poison macros? I did not understand the 
thread-safety note in the API comment fully.

h3. One thought

Actually, setting the free_list length limit to 0 would effectively disable the 
pool and turn the calls into simple wrappers over malloc/free. It would be 
enough to make this configurable at build time. Then ASAN should work just fine 
without poison.
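
A minimal sketch of the manual-poisoning workflow described above, using the 
standard macros from <sanitizer/asan_interface.h>; the two pool hook functions 
are illustrative names, not the actual alloc.c code:

{code:java}
#include <stddef.h>
#include <sanitizer/asan_interface.h>  /* header defines the macros as no-ops when ASAN is off */

static void pool_chunk_freed(void *chunk, size_t size)
{
    /* Chunk goes back on the free list: any access until it is re-allocated
     * should be reported by ASAN as a use-after-(pool-)free. */
    ASAN_POISON_MEMORY_REGION(chunk, size);
}

static void pool_chunk_allocated(void *chunk, size_t size)
{
    /* Chunk is handed out again: make it addressable for the caller. */
    ASAN_UNPOISON_MEMORY_REGION(chunk, size);
}
{code}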

  was:
From https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning

bq. A user may poison/unpoison a region of memory manually. Use this feature 
with caution. In many cases good old malloc+free is a better way to find heap 
bugs than using custom allocators with manual poisoning.

As far as I can tell, it is nowadays not possible to turn off the pool 
allocation and use malloc/free, because the pool mechanism also implements the 
weak pointers and ref counters. That means giving hints to ASAN is the only way 
to discover memory bugs of the type (if what Chuck speculated is true) of 
DISPATCH-2032.

bq. If you have a custom allocation arena, the typical workflow would be to 
poison the entire arena first, and then unpoison allocated chunks of memory 
leaving poisoned redzones between them. The allocated chunks should start with 
8-aligned addresses.

Alternatively, the current memory debugging machinery for the pool could take 
care of it on its own... but using ASAN seems sensible to me.

http://blog.hostilefork.com/poison-memory-without-asan/

h3. Nice to have extra features (which won't be implemented at first)

* redzones, there should be chunks of poison on either end of a returned 
memory, to detect invalid accesses out of bounds; this means deliberate waste 
of memory (I am thinking 3x increase, to make implementation easy)
* quarantine, returned chunks should be kept in the pool for some time before 
they are returned as new allocations, to catch use-after-free; this policy goes 
against performance considerations

h3. Open issues

Is it necessary to lock around the poison macros? I did not understand the 
thread safety note in API comment fully.


> Memory pool should be manually poisoned so that ASAN works with it
> --
>
> Key: DISPATCH-2039
> URL: https://issues.apache.org/jira/browse/DISPATCH-2039
> Project: Qpid Dispatch
>  Issue Type: Wish
>Affects Versions: 1.15.0
>Reporter: Jiri Daněk
>Priority: Minor
>
> From https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning
> bq. A user may poison/unpoison a region of memory manually. Use this feature 
> with caution. In many cases good old malloc+free is a better way to find heap 
> bugs than using custom allocators with manual poisoning.
> As far as I can tell, it is nowadays not possible to turn off the pool 
> allocation and use malloc/free, because the pool mechanism also implements 
> the weak pointers and ref counters. That means giving hints to ASAN is the 
> only way to discover memory bugs of the type (if what Chuck speculated is 
> true) of DISPATCH-2032.
> bq. If you have a custom allocation arena, the typical 

[jira] [Commented] (DISPATCH-2058) qdr_delivery_cleanup_t leak in system_tests_grpc

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323862#comment-17323862
 ] 

ASF GitHub Bot commented on DISPATCH-2058:
--

ganeshmurthy opened a new pull request #1130:
URL: https://github.com/apache/qpid-dispatch/pull/1130


   …in the core->delivery_cleanup_list


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> qdr_delivery_cleanup_t leak in system_tests_grpc
> 
>
> Key: DISPATCH-2058
> URL: https://issues.apache.org/jira/browse/DISPATCH-2058
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.16.0
>
>
> {noformat}
> test 70
>   Start 70: system_tests_grpc70: Test command: /usr/bin/bwrap "--bind" 
> "/" "/" "--unshare-net" "--dev" "/dev" "--die-with-parent" "--" 
> "/opt/hostedtoolcache/Python/3.7.10/x64/bin/python3.7" 
> "/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/tests/run.py"
>  "-m" "unittest" "-v" "system_tests_grpc"
> 70: Test timeout computed to be: 600
> 70: test_grpc_01_unary (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates unary request and response message ... ok
> 70: test_grpc_02_bidirectional_stream 
> (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates bidirectional streaming request and response messages ... ok
> 70: test_grpc_03_server_stream (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates server streaming response messages ... ok
> 70: test_grpc_04_client_stream (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates client streaming request messages ... ok
> 70: 
> 70: Router QDR output file:
> 70: 
> 70: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qdr_delivery_cleanup_t'
> 70: 
> 70: 
> 70: 
> 70: Router QDR debug dump file:
> 70: 
> 70: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 5 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 3 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 2 (SUPPRESSED)
> 70: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qdr_delivery_cleanup_t' remain allocated at 
> shutdown: 2
> 70: Leak: 2021-04-16 13:40:34.959155 + type: qdr_delivery_cleanup_t 
> address: 0x6112b190
> 70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
>  [0x7f3ce336563b]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
>  [0x7f3ce34ae580]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
>  [0x7f3ce348d266]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
>  [0x7f3ce3503c42]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
>  [0x7f3ce3574df2]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
>  [0x7f3ce33db9e2]
> 70: qdrouterd(+0x61d4) [0x559fcb6591d4]
> 70: qdrouterd(main+0x231) [0x559fcb658ce1]
> 70: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f3ce22d30b3]
> 70: 
> 70: Leak: 2021-04-16 13:40:34.959240 + type: qdr_delivery_cleanup_t 
> address: 0x6112bcd0
> 70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
>  [0x7f3ce336563b]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
>  [0x7f3ce34ae580]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
>  [0x7f3ce348d266]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
>  [0x7f3ce3503c42]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
>  [0x7f3ce3574df2]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
>  [0x7f3ce33db9e2]
> 70: qdrouterd(+0x61d4) [0x559fcb6591d4]
> 70: qdrouterd(main+0x231) 

[GitHub] [qpid-dispatch] ganeshmurthy opened a new pull request #1130: DISPATCH-2058: Added code in core free to remove and free everything …

2021-04-16 Thread GitBox


ganeshmurthy opened a new pull request #1130:
URL: https://github.com/apache/qpid-dispatch/pull/1130


   …in the core->delivery_cleanup_list


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2058) qdr_delivery_cleanup_t leak in system_tests_grpc

2021-04-16 Thread Ganesh Murthy (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323860#comment-17323860
 ] 

Ganesh Murthy commented on DISPATCH-2058:
-

This leak happens in other tests as well.

> qdr_delivery_cleanup_t leak in system_tests_grpc
> 
>
> Key: DISPATCH-2058
> URL: https://issues.apache.org/jira/browse/DISPATCH-2058
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.16.0
>
>
> {noformat}
> test 70
>   Start 70: system_tests_grpc70: Test command: /usr/bin/bwrap "--bind" 
> "/" "/" "--unshare-net" "--dev" "/dev" "--die-with-parent" "--" 
> "/opt/hostedtoolcache/Python/3.7.10/x64/bin/python3.7" 
> "/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/tests/run.py"
>  "-m" "unittest" "-v" "system_tests_grpc"
> 70: Test timeout computed to be: 600
> 70: test_grpc_01_unary (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates unary request and response message ... ok
> 70: test_grpc_02_bidirectional_stream 
> (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates bidirectional streaming request and response messages ... ok
> 70: test_grpc_03_server_stream (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates server streaming response messages ... ok
> 70: test_grpc_04_client_stream (system_tests_grpc.GrpcServiceMethodsTest)
> 70: Validates client streaming request messages ... ok
> 70: 
> 70: Router QDR output file:
> 70: 
> 70: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qdr_delivery_cleanup_t'
> 70: 
> 70: 
> 70: 
> 70: Router QDR debug dump file:
> 70: 
> 70: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 5 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 3 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 2 (SUPPRESSED)
> 70: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 70: alloc.c: Items of type 'qdr_delivery_cleanup_t' remain allocated at 
> shutdown: 2
> 70: Leak: 2021-04-16 13:40:34.959155 + type: qdr_delivery_cleanup_t 
> address: 0x6112b190
> 70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
>  [0x7f3ce336563b]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
>  [0x7f3ce34ae580]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
>  [0x7f3ce348d266]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
>  [0x7f3ce3503c42]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
>  [0x7f3ce3574df2]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
>  [0x7f3ce33db9e2]
> 70: qdrouterd(+0x61d4) [0x559fcb6591d4]
> 70: qdrouterd(main+0x231) [0x559fcb658ce1]
> 70: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f3ce22d30b3]
> 70: 
> 70: Leak: 2021-04-16 13:40:34.959240 + type: qdr_delivery_cleanup_t 
> address: 0x6112bcd0
> 70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
>  [0x7f3ce336563b]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
>  [0x7f3ce34ae580]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
>  [0x7f3ce348d266]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
>  [0x7f3ce3503c42]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
>  [0x7f3ce3574df2]
> 70: 
> /home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
>  [0x7f3ce33db9e2]
> 70: qdrouterd(+0x61d4) [0x559fcb6591d4]
> 70: qdrouterd(main+0x231) [0x559fcb658ce1]
> 70: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f3ce22d30b3]
> 70:  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org

[jira] [Created] (DISPATCH-2058) qdr_delivery_cleanup_t leak in system_tests_grpc

2021-04-16 Thread Ganesh Murthy (Jira)
Ganesh Murthy created DISPATCH-2058:
---

 Summary: qdr_delivery_cleanup_t leak in system_tests_grpc
 Key: DISPATCH-2058
 URL: https://issues.apache.org/jira/browse/DISPATCH-2058
 Project: Qpid Dispatch
  Issue Type: Test
  Components: Tests
Reporter: Ganesh Murthy
Assignee: Ganesh Murthy
 Fix For: 1.16.0


{noformat}
test 70
  Start 70: system_tests_grpc70: Test command: /usr/bin/bwrap "--bind" "/" 
"/" "--unshare-net" "--dev" "/dev" "--die-with-parent" "--" 
"/opt/hostedtoolcache/Python/3.7.10/x64/bin/python3.7" 
"/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/tests/run.py"
 "-m" "unittest" "-v" "system_tests_grpc"
70: Test timeout computed to be: 600
70: test_grpc_01_unary (system_tests_grpc.GrpcServiceMethodsTest)
70: Validates unary request and response message ... ok
70: test_grpc_02_bidirectional_stream (system_tests_grpc.GrpcServiceMethodsTest)
70: Validates bidirectional streaming request and response messages ... ok
70: test_grpc_03_server_stream (system_tests_grpc.GrpcServiceMethodsTest)
70: Validates server streaming response messages ... ok
70: test_grpc_04_client_stream (system_tests_grpc.GrpcServiceMethodsTest)
70: Validates client streaming request messages ... ok
70: 
70: Router QDR output file:
70: 
70: ERROR: Aborted due to unexpected alloc pool leak of type 
'qdr_delivery_cleanup_t'
70: 
70: 
70: 
70: Router QDR debug dump file:
70: 
70: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 5 
(SUPPRESSED)
70: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 3 
(SUPPRESSED)
70: alloc.c: Items of type 'qd_message_content_t' remain allocated at shutdown: 
2 (SUPPRESSED)
70: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 1 
(SUPPRESSED)
70: alloc.c: Items of type 'qdr_delivery_cleanup_t' remain allocated at 
shutdown: 2
70: Leak: 2021-04-16 13:40:34.959155 + type: qdr_delivery_cleanup_t 
address: 0x6112b190
70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
 [0x7f3ce336563b]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
 [0x7f3ce34ae580]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
 [0x7f3ce348d266]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
 [0x7f3ce3503c42]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
 [0x7f3ce3574df2]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
 [0x7f3ce33db9e2]
70: qdrouterd(+0x61d4) [0x559fcb6591d4]
70: qdrouterd(main+0x231) [0x559fcb658ce1]
70: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f3ce22d30b3]
70: 
70: Leak: 2021-04-16 13:40:34.959240 + type: qdr_delivery_cleanup_t 
address: 0x6112bcd0
70: /lib/x86_64-linux-gnu/libasan.so.5(+0x6cd30) [0x7f3ce3bcbd30]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0x37b)
 [0x7f3ce336563b]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(+0x680580)
 [0x7f3ce34ae580]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_link_cleanup_deliveries_CT+0xd76)
 [0x7f3ce348d266]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_core_free+0x1772)
 [0x7f3ce3503c42]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_router_free+0xf2)
 [0x7f3ce3574df2]
70: 
/home/runner/work/qpid-dispatch/qpid-dispatch/qpid-dispatch/build/src/libqpid-dispatch.so(qd_dispatch_free+0x242)
 [0x7f3ce33db9e2]
70: qdrouterd(+0x61d4) [0x559fcb6591d4]
70: qdrouterd(main+0x231) [0x559fcb658ce1]
70: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f3ce22d30b3]
70:  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323846#comment-17323846
 ] 

ASF GitHub Bot commented on DISPATCH-2048:
--

ganeshmurthy closed pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  [0x7f3be6917d6a]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x54621b)
>  [0x7f3be6aa421b]
> 63: 
> 63: Leak: 2021-04-15 03:43:23.586520 + type: qd_deferred_call_t address: 
> 0x61100014b9d0
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> 

[jira] [Commented] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323845#comment-17323845
 ] 

ASF GitHub Bot commented on DISPATCH-2050:
--

gemmellr commented on pull request #1126:
URL: https://github.com/apache/qpid-dispatch/pull/1126#issuecomment-821205534


   Obligatory note about tests to help avoid future breakage...especially given 
the repeated issues in this specific area.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
> Fix For: 1.16.0
>
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}
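A minimal illustration of what relaying these fields involves, using the proton-c disposition accessors. This is a sketch only, not the DISPATCH-2050 patch itself; the in/out delivery pointers standing for the two sides of the relay are hypothetical.

{noformat}
#include <proton/delivery.h>
#include <proton/disposition.h>

/* Copy the @received(35) state from the disposition reported on one
 * delivery ('in') onto the disposition sent on its peer ('out'), so that
 * section-number and section-offset are not zeroed as in the trace above. */
static void relay_received_state(pn_delivery_t *in, pn_delivery_t *out)
{
    pn_disposition_t *remote = pn_delivery_remote(in);   /* state from the receiver */
    pn_disposition_t *local  = pn_delivery_local(out);   /* state relayed to the sender */

    pn_disposition_set_section_number(local, pn_disposition_get_section_number(remote));
    pn_disposition_set_section_offset(local, pn_disposition_get_section_offset(remote));

    pn_delivery_update(out, PN_RECEIVED);                 /* emits @received(35) */
}
{noformat}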



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [qpid-dispatch] gemmellr commented on pull request #1126: DISPATCH-2050: propagate section-number and section-offset back to sender

2021-04-16 Thread GitBox


gemmellr commented on pull request #1126:
URL: https://github.com/apache/qpid-dispatch/pull/1126#issuecomment-821205534


   Obligatory note about tests to help avoid future breakage...especially given 
the repeated issues in this specific area.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] ganeshmurthy closed pull request #1127: DISPATCH-2048: Added code to clean out the qdr_delivery_cleanup_t on …

2021-04-16 Thread GitBox


ganeshmurthy closed pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread David Martin (Jira)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323840#comment-17323840
 ] 

David Martin edited comment on QPIDJMS-535 at 4/16/21, 2:09 PM:


Encyclopedic memory :) OK I'm not too concerned about it anyway because it only 
affects functional test environments. Thanks


was (Author: indysilver):
Encyclopedic memory :) OK I'm not too concerned about it anyway because it only 
affects test environments. Thanks

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> 

[jira] [Updated] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell updated DISPATCH-2050:
-
Fix Version/s: 1.16.0
Affects Version/s: 1.15.0

Adding affects/fix versions

> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Gordon Sim
>Priority: Major
> Fix For: 1.16.0
>
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell reassigned DISPATCH-2050:


Assignee: Gordon Sim

> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
> Fix For: 1.16.0
>
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved DISPATCH-2050.
--
Resolution: Fixed

> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
> Fix For: 1.16.0
>
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread David Martin (Jira)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323840#comment-17323840
 ] 

David Martin commented on QPIDJMS-535:
--

Encyclopedic memory :) OK I'm not too concerned about it anyway because it only 
affects test environments. Thanks

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> 

[jira] [Reopened] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell reopened DISPATCH-2050:
--

> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Gordon Sim
>Priority: Major
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323832#comment-17323832
 ] 

Robbie Gemmell commented on QPIDJMS-535:


Took me a while to find but I believe ARTEMIS-2547 is the one I was remembering 
about... it would seem you may have run into something similar in a different way.

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] 

[jira] [Resolved] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread Gordon Sim (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Sim resolved DISPATCH-2050.
--
Resolution: Fixed

> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Gordon Sim
>Priority: Major
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323819#comment-17323819
 ] 

ASF GitHub Bot commented on DISPATCH-2050:
--

grs closed pull request #1126:
URL: https://github.com/apache/qpid-dispatch/pull/1126


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Gordon Sim
>Priority: Major
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323823#comment-17323823
 ] 

ASF GitHub Bot commented on DISPATCH-2048:
--

kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614846246



##
File path: src/router_core/router_core.c
##
@@ -896,6 +914,27 @@ void 
qdr_del_subscription_ref_CT(qdr_subscription_ref_list_t *list, qdr_subscrip
 }
 
 
+static void qdr_general_handler_cleanup(void *context)
+{
+    qdr_core_t              *core = (qdr_core_t*) context;
+    qdr_general_work_list_t  work_list;
+    qdr_general_work_t      *work;
+
+    sys_mutex_lock(core->work_lock);
+    DEQ_MOVE(core->work_list, work_list);
+    sys_mutex_unlock(core->work_lock);
+
+    work = DEQ_HEAD(work_list);
+    while (work) {
+        DEQ_REMOVE_HEAD(work_list);
+        work->handler(core, work);
+        qdr_do_message_to_addr_free(core, work);

Review comment:
   Isn't it possible that this will double free if work->handler is 
qdr_do_message_to_addr_free()?
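   One possible shape for such a guard, sketched against the quoted hunk rather than taken from the eventual fix: skip the explicit cleanup when the handler is already qdr_do_message_to_addr_free, and release the work item afterwards. free_qdr_general_work_t is assumed here to be the pool free generated for qdr_general_work_t.

    work = DEQ_HEAD(work_list);
    while (work) {
        DEQ_REMOVE_HEAD(work_list);
        work->handler(core, work);
        if (work->handler != qdr_do_message_to_addr_free)   /* avoid cleaning the same item twice */
            qdr_do_message_to_addr_free(core, work);
        free_qdr_general_work_t(work);                       /* assumed pool free for the work item */
        work = DEQ_HEAD(work_list);
    }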




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  

[GitHub] [qpid-dispatch] kgiusti commented on a change in pull request #1127: DISPATCH-2048: Added code to clean out the qdr_delivery_cleanup_t on …

2021-04-16 Thread GitBox


kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614846246



##
File path: src/router_core/router_core.c
##
@@ -896,6 +914,27 @@ void 
qdr_del_subscription_ref_CT(qdr_subscription_ref_list_t *list, qdr_subscrip
 }
 
 
+static void qdr_general_handler_cleanup(void *context)
+{
+qdr_core_t  *core = (qdr_core_t*) context;
+qdr_general_work_list_t  work_list;
+qdr_general_work_t  *work;
+
+sys_mutex_lock(core->work_lock);
+DEQ_MOVE(core->work_list, work_list);
+sys_mutex_unlock(core->work_lock);
+
+work = DEQ_HEAD(work_list);
+while (work) {
+DEQ_REMOVE_HEAD(work_list);
+work->handler(core, work);
+qdr_do_message_to_addr_free(core, work);

Review comment:
   Isn't it possible that this will double-free if work->handler is qdr_do_message_to_addr_free()?
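
For readers skimming the archive, the concern above can be illustrated with a minimal, self-contained C sketch. This is not the dispatch code: work_item_t, handler_t, free_work and drain are hypothetical stand-ins for the qdr work-list types, and the guard shown is only one possible way to ensure an item is never freed twice when its handler happens to be the free routine itself.

```c
#include <stdlib.h>

typedef struct work_item work_item_t;
typedef void (*handler_t)(work_item_t *w);

struct work_item {
    handler_t     handler;
    work_item_t  *next;
};

/* Stand-in for qdr_do_message_to_addr_free(): releases the work item. */
static void free_work(work_item_t *w)
{
    free(w);
}

/* A handler that does some work but does not free the item. */
static void do_nothing(work_item_t *w)
{
    (void) w;
}

/* Drain the list.  The guard skips the handler call when the handler *is*
 * the free routine, so each item is freed exactly once. */
static void drain(work_item_t *head)
{
    while (head) {
        work_item_t *w = head;
        head = w->next;                  /* unlink before touching the item */
        if (w->handler != free_work)
            w->handler(w);               /* run the work; item is still valid */
        free_work(w);                    /* free exactly once */
    }
}

int main(void)
{
    work_item_t *b = malloc(sizeof *b);  /* the hazardous case: handler is the free routine */
    work_item_t *a = malloc(sizeof *a);
    b->handler = free_work;  b->next = NULL;
    a->handler = do_nothing; a->next = b;
    drain(a);
    return 0;
}
```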




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323822#comment-17323822
 ] 

Robbie Gemmell edited comment on QPIDJMS-535 at 4/16/21, 1:39 PM:
--

The connection is being closed/refused by the broker because the broker reports 
the connection has failed due to the container-id already being in use by an 
existing connection. Either it is, or it is being mistakenly considered so by the 
broker. It's certainly not a client bug. I recall a similar issue being raised 
for Artemis previously.


was (Author: gemmellr):
The connection is being closed/refused by the broker to the broker saying the 
connection has failed, due to the container-id already being in use by an 
existing connection. Either it is, or is being mistakenly considered so by the 
broker. Its certainly not a client bug. I recall a similar issue being raised 
for Artemis previously.

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl   

[jira] [Resolved] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved QPIDJMS-535.

Resolution: Not A Bug

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.631 DEBUG 11152 --- [.84.10.4:30672]] 
> 

[jira] [Commented] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323822#comment-17323822
 ] 

Robbie Gemmell commented on QPIDJMS-535:


The connection is being closed/refused by the broker because the broker reports 
the connection has failed due to the container-id already being in use by an 
existing connection. Either it is, or it is being mistakenly considered so by the 
broker. It's certainly not a client bug. I recall a similar issue being raised 
for Artemis previously.
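
For context, the field the broker is objecting to is the container-id the client sends in its AMQP Open frame (the log above shows the resulting Close with invalid-field=container-id). As a rough illustration only: the sketch below uses the proton-c API rather than Qpid JMS (which derives its container-id from the JMS client ID or a generated UUID), simply to show where the value lives on a connection; the "client-1234" id is made up.

```c
#include <proton/connection.h>
#include <stdio.h>

/* Sketch: the container-id the broker checks for uniqueness is whatever the
 * client places in its Open performative.  An id still held by a live
 * connection is what produces the amqp:invalid-field,
 * invalid-field=container-id close seen in the trace above. */
int main(void)
{
    pn_connection_t *conn = pn_connection();
    pn_connection_set_container(conn, "client-1234");   /* must be unique per live connection */
    printf("container-id: %s\n", pn_connection_get_container(conn));
    pn_connection_free(conn);
    return 0;
}
```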

> Looping failure after SASL negotiation with Artemis v2.16
> -
>
> Key: QPIDJMS-535
> URL: https://issues.apache.org/jira/browse/QPIDJMS-535
> Project: Qpid JMS
>  Issue Type: Bug
>Affects Versions: 0.58.0
>Reporter: David Martin
>Priority: Minor
>
> This is not a production issue for us so tagging it as minor. Encountered 
> during some destructive testing, killing brokers, clients, seeing that 
> connections re-establish ok.
> It sometimes happens when re-establishing a connection to an artemis broker 
> hosted in kubernetes which has been recycled by deleting the pod.
> {noformat}
> 2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[28] : Flow{nextIncomingId=1, 
> incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
> handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
> echo=false, properties=null}
> 2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
> connection was forcibly closed by the remote host
> 2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) 
> from thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 
> properties={amqp:connection-establishment-failed=true}}
> 2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : 
> Close{error=Error{condition=amqp:invalid-field, description='null', 
> info={invalid-field=container-id}}}
> 2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
> io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) 
> from thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
> 2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
> SASL-PLAIN
> 2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
> o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
> [_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
> 2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace
>  : IN: CH[0] : Open{ containerId='', hostname='null', 
> maxFrameSize=131072, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, 

[GitHub] [qpid-dispatch] grs closed pull request #1126: DISPATCH-2050: propagate section-number and section-offset back to sender

2021-04-16 Thread GitBox


grs closed pull request #1126:
URL: https://github.com/apache/qpid-dispatch/pull/1126


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (DISPATCH-2054) SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy reassigned DISPATCH-2054:
---

Assignee: Ganesh Murthy

> SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy
> 
>
> Key: DISPATCH-2054
> URL: https://issues.apache.org/jira/browse/DISPATCH-2054
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Jiri Daněk
>Assignee: Ganesh Murthy
>Priority: Major
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/498884078#L6273
> {noformat}
> 37: ASAN:SIGSEGV
> 37: =
> 37: ==14106==ERROR: AddressSanitizer: SEGV on unknown address 0x 
> (pc 0x7fd36213a988 bp 0x7fff4d586480 sp 0x7fff4d5862c0 T0)
> 37: #0 0x7fd36213a987 in qdr_link_cleanup_deliveries_CT 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864
> 37: #1 0x7fd36219b36e in qdr_core_free 
> /home/travis/build/apache/qpid-dispatch/src/router_core/router_core.c:228
> 37: #2 0x7fd362213008 in qd_router_free 
> /home/travis/build/apache/qpid-dispatch/src/router_node.c:2148
> 37: #3 0x7fd3620a27a1 in qd_dispatch_free 
> /home/travis/build/apache/qpid-dispatch/src/dispatch.c:371
> 37: #4 0x401dd2 in main_process 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:119
> 37: #5 0x403917 in main 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:369
> 37: #6 0x7fd360a0a82f in __libc_start_main 
> (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
> 37: #7 0x401ac8 in _start 
> (/home/travis/build/apache/qpid-dispatch/build/router/qdrouterd+0x401ac8)
> 37: 
> 37: AddressSanitizer can not provide additional info.
> 37: SUMMARY: AddressSanitizer: SEGV 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864 
> qdr_link_cleanup_deliveries_CT
> 37: ==14106==ABORTING
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2054) SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323813#comment-17323813
 ] 

ASF GitHub Bot commented on DISPATCH-2054:
--

asfgit closed pull request #1128:
URL: https://github.com/apache/qpid-dispatch/pull/1128


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy
> 
>
> Key: DISPATCH-2054
> URL: https://issues.apache.org/jira/browse/DISPATCH-2054
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Jiri Daněk
>Priority: Major
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/498884078#L6273
> {noformat}
> 37: ASAN:SIGSEGV
> 37: =
> 37: ==14106==ERROR: AddressSanitizer: SEGV on unknown address 0x 
> (pc 0x7fd36213a988 bp 0x7fff4d586480 sp 0x7fff4d5862c0 T0)
> 37: #0 0x7fd36213a987 in qdr_link_cleanup_deliveries_CT 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864
> 37: #1 0x7fd36219b36e in qdr_core_free 
> /home/travis/build/apache/qpid-dispatch/src/router_core/router_core.c:228
> 37: #2 0x7fd362213008 in qd_router_free 
> /home/travis/build/apache/qpid-dispatch/src/router_node.c:2148
> 37: #3 0x7fd3620a27a1 in qd_dispatch_free 
> /home/travis/build/apache/qpid-dispatch/src/dispatch.c:371
> 37: #4 0x401dd2 in main_process 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:119
> 37: #5 0x403917 in main 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:369
> 37: #6 0x7fd360a0a82f in __libc_start_main 
> (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
> 37: #7 0x401ac8 in _start 
> (/home/travis/build/apache/qpid-dispatch/build/router/qdrouterd+0x401ac8)
> 37: 
> 37: AddressSanitizer can not provide additional info.
> 37: SUMMARY: AddressSanitizer: SEGV 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864 
> qdr_link_cleanup_deliveries_CT
> 37: ==14106==ABORTING
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (DISPATCH-2054) SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy

2021-04-16 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-2054.
-
Fix Version/s: 1.16.0
   Resolution: Fixed

> SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy
> 
>
> Key: DISPATCH-2054
> URL: https://issues.apache.org/jira/browse/DISPATCH-2054
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Jiri Daněk
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.16.0
>
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/498884078#L6273
> {noformat}
> 37: ASAN:SIGSEGV
> 37: =
> 37: ==14106==ERROR: AddressSanitizer: SEGV on unknown address 0x 
> (pc 0x7fd36213a988 bp 0x7fff4d586480 sp 0x7fff4d5862c0 T0)
> 37: #0 0x7fd36213a987 in qdr_link_cleanup_deliveries_CT 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864
> 37: #1 0x7fd36219b36e in qdr_core_free 
> /home/travis/build/apache/qpid-dispatch/src/router_core/router_core.c:228
> 37: #2 0x7fd362213008 in qd_router_free 
> /home/travis/build/apache/qpid-dispatch/src/router_node.c:2148
> 37: #3 0x7fd3620a27a1 in qd_dispatch_free 
> /home/travis/build/apache/qpid-dispatch/src/dispatch.c:371
> 37: #4 0x401dd2 in main_process 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:119
> 37: #5 0x403917 in main 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:369
> 37: #6 0x7fd360a0a82f in __libc_start_main 
> (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
> 37: #7 0x401ac8 in _start 
> (/home/travis/build/apache/qpid-dispatch/build/router/qdrouterd+0x401ac8)
> 37: 
> 37: AddressSanitizer can not provide additional info.
> 37: SUMMARY: AddressSanitizer: SEGV 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864 
> qdr_link_cleanup_deliveries_CT
> 37: ==14106==ABORTING
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-dispatch] asfgit closed pull request #1128: DISPATCH-2054: Set tracking_addr on deliveries to zero since those ad…

2021-04-16 Thread GitBox


asfgit closed pull request #1128:
URL: https://github.com/apache/qpid-dispatch/pull/1128


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2054) SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy

2021-04-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323812#comment-17323812
 ] 

ASF subversion and git services commented on DISPATCH-2054:
---

Commit 37a6ca1fbe486d1eddb7360fc0b73a367b4ea01d in qpid-dispatch's branch 
refs/heads/main from Ganesh Murthy
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=37a6ca1 ]

DISPATCH-2054: Set tracking_addr on deliveries to zero since those addresses 
have already been freed. This closes #1128.
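
The idea behind the commit message, in generic terms, is to clear pointers that dangle once the address table has been freed, so that the later per-link delivery cleanup (the qdr_link_cleanup_deliveries_CT frame in the trace below) never dereferences freed memory. A hedged sketch of that pattern, using hypothetical addr_t/delivery_t stand-ins rather than the real router-core types:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the router-core structures. */
typedef struct addr     { int refcount; } addr_t;
typedef struct delivery { addr_t *tracking_addr; struct delivery *next; } delivery_t;

/* Shutdown order matters: once the addresses are freed, any delivery still
 * holding a tracking_addr points at freed memory, so the pointer is cleared
 * before the per-link delivery cleanup runs. */
static void free_addresses(addr_t **addrs, size_t n, delivery_t *deliveries)
{
    for (size_t i = 0; i < n; i++)
        free(addrs[i]);

    for (delivery_t *d = deliveries; d; d = d->next)
        d->tracking_addr = NULL;          /* avoid a later use-after-free (SEGV) */
}

int main(void)
{
    addr_t *a = malloc(sizeof *a);
    delivery_t d = { .tracking_addr = a, .next = NULL };
    addr_t *addrs[] = { a };
    free_addresses(addrs, 1, &d);
    return d.tracking_addr == NULL ? 0 : 1;   /* pointer no longer dangles */
}
```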


> SEGV in qdr_link_cleanup_deliveries_CT during system_tests_multi_tenancy
> 
>
> Key: DISPATCH-2054
> URL: https://issues.apache.org/jira/browse/DISPATCH-2054
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Jiri Daněk
>Priority: Major
>
> https://travis-ci.com/github/apache/qpid-dispatch/jobs/498884078#L6273
> {noformat}
> 37: ASAN:SIGSEGV
> 37: =
> 37: ==14106==ERROR: AddressSanitizer: SEGV on unknown address 0x 
> (pc 0x7fd36213a988 bp 0x7fff4d586480 sp 0x7fff4d5862c0 T0)
> 37: #0 0x7fd36213a987 in qdr_link_cleanup_deliveries_CT 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864
> 37: #1 0x7fd36219b36e in qdr_core_free 
> /home/travis/build/apache/qpid-dispatch/src/router_core/router_core.c:228
> 37: #2 0x7fd362213008 in qd_router_free 
> /home/travis/build/apache/qpid-dispatch/src/router_node.c:2148
> 37: #3 0x7fd3620a27a1 in qd_dispatch_free 
> /home/travis/build/apache/qpid-dispatch/src/dispatch.c:371
> 37: #4 0x401dd2 in main_process 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:119
> 37: #5 0x403917 in main 
> /home/travis/build/apache/qpid-dispatch/router/src/main.c:369
> 37: #6 0x7fd360a0a82f in __libc_start_main 
> (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
> 37: #7 0x401ac8 in _start 
> (/home/travis/build/apache/qpid-dispatch/build/router/qdrouterd+0x401ac8)
> 37: 
> 37: AddressSanitizer can not provide additional info.
> 37: SUMMARY: AddressSanitizer: SEGV 
> /home/travis/build/apache/qpid-dispatch/src/router_core/connections.c:864 
> qdr_link_cleanup_deliveries_CT
> 37: ==14106==ABORTING
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323810#comment-17323810
 ] 

ASF GitHub Bot commented on DISPATCH-2048:
--

kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614837591



##
File path: src/router_core/router_core.c
##
@@ -258,6 +260,22 @@ void qdr_core_free(qdr_core_t *core)
 link = DEQ_HEAD(core->open_links);
 }
 
+//
+// Clean up any qdr_delivery_cleanup_t's that are still left in the 
core->delivery_cleanup_list
+//
+if (DEQ_SIZE(core->delivery_cleanup_list) > 0) {

Review comment:
   Also why not simply call qdr_do_message_to_addr_free() here instead of 
duplicating the code?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  [0x7f3be6917d6a]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x54621b)
>  [0x7f3be6aa421b]
> 63: 
> 63: Leak: 2021-04-15 03:43:23.586520 + type: qd_deferred_call_t address: 
> 0x61100014b9d0
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> 

[GitHub] [qpid-dispatch] kgiusti commented on a change in pull request #1127: DISPATCH-2048: Added code to clean out the qdr_delivery_cleanup_t on …

2021-04-16 Thread GitBox


kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614837591



##
File path: src/router_core/router_core.c
##
@@ -258,6 +260,22 @@ void qdr_core_free(qdr_core_t *core)
 link = DEQ_HEAD(core->open_links);
 }
 
+//
+// Clean up any qdr_delivery_cleanup_t's that are still left in the 
core->delivery_cleanup_list
+//
+if (DEQ_SIZE(core->delivery_cleanup_list) > 0) {

Review comment:
   Also why not simply call qdr_do_message_to_addr_free() here instead of 
duplicating the code?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2048) system_tests_router_mesh failure due to leak of qd_deferred_call_t

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323808#comment-17323808
 ] 

ASF GitHub Bot commented on DISPATCH-2048:
--

kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614835424



##
File path: src/router_core/router_core.c
##
@@ -258,6 +260,22 @@ void qdr_core_free(qdr_core_t *core)
 link = DEQ_HEAD(core->open_links);
 }
 
+//
+// Clean up any qdr_delivery_cleanup_t's that are still left in the 
core->delivery_cleanup_list
+//
+if (DEQ_SIZE(core->delivery_cleanup_list) > 0) {

Review comment:
   I'm confused - this check shouldn't be necessary: if the length is zero the 
loop below will not execute anyway, since DEQ_HEAD returns 0 and so cleanup will be 0.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> system_tests_router_mesh failure due to leak of qd_deferred_call_t
> --
>
> Key: DISPATCH-2048
> URL: https://issues.apache.org/jira/browse/DISPATCH-2048
> Project: Qpid Dispatch
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> {noformat}
>  63: Test command: /usr/bin/python 
> "/home/travis/build/apache/qpid-dispatch/build/tests/run.py" "-m" "unittest" 
> "-v" "system_tests_router_mesh"
> 63: Test timeout computed to be: 600
> 63: test_01_rx_failover_clean (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_02_rx_failover_dirty (system_tests_router_mesh.ThreeRouterTest) ... 
> ok
> 63: test_03_unavailable_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_04_unavailable_anonymous_link_attach 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_05_unavailable_anonymous_link_send 
> (system_tests_router_mesh.ThreeRouterTest) ... ok
> 63: test_06_parallel_priority (system_tests_router_mesh.ThreeRouterTest) ... 
> ERROR
> 63: ERROR
> 63: 
> 63: Router RouterC output file:
> 63: ERROR: Aborted due to unexpected alloc pool leak of type 
> 'qd_deferred_call_t'
> 63: 
> 63: 
> 63: 
> 63: Router RouterC debug dump file:
> 63: 
> 63: alloc.c: Items of type 'qd_iterator_t' remain allocated at shutdown: 80 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_timer_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_bitmask_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_buffer_t' remain allocated at shutdown: 287 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_parsed_field_t' remain allocated at shutdown: 
> 32 (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_connector_t' remain allocated at shutdown: 1 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_message_content_t' remain allocated at 
> shutdown: 8 (SUPPRESSED)
> 63: alloc.c: Items of type 'qdr_delivery_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_link_ref_t' remain allocated at shutdown: 8 
> (SUPPRESSED)
> 63: alloc.c: Items of type 'qd_deferred_call_t' remain allocated at shutdown: 
> 15
> 63: Leak: 2021-04-15 03:43:23.586342 + type: qd_deferred_call_t address: 
> 0x6110001b7d10
> 63: /usr/lib/x86_64-linux-gnu/libasan.so.2(+0x4a077) [0x7f3be716c077]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_alloc+0xc70)
>  [0x7f3be68ee7a0]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(new_qd_deferred_call_t+0x26)
>  [0x7f3be6a9]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_connection_invoke_deferred+0x25)
>  [0x7f3be6aaa479]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x53431e)
>  [0x7f3be6a9231e]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qdr_connection_process+0x2720)
>  [0x7f3be69b31d4]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x52a9c7)
>  [0x7f3be6a889c7]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x3b631c)
>  [0x7f3be691431c]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(qd_container_handle_event+0x223b)
>  [0x7f3be6917d6a]
> 63: 
> /home/travis/build/apache/qpid-dispatch/build/src/libqpid-dispatch.so(+0x54621b)
>  [0x7f3be6aa421b]
> 63: 
> 63: Leak: 2021-04-15 03:43:23.586520 + type: qd_deferred_call_t address: 
> 0x61100014b9d0
> 63: 

[GitHub] [qpid-dispatch] kgiusti commented on a change in pull request #1127: DISPATCH-2048: Added code to clean out the qdr_delivery_cleanup_t on …

2021-04-16 Thread GitBox


kgiusti commented on a change in pull request #1127:
URL: https://github.com/apache/qpid-dispatch/pull/1127#discussion_r614835424



##
File path: src/router_core/router_core.c
##
@@ -258,6 +260,22 @@ void qdr_core_free(qdr_core_t *core)
 link = DEQ_HEAD(core->open_links);
 }
 
+//
+// Clean up any qdr_delivery_cleanup_t's that are still left in the 
core->delivery_cleanup_list
+//
+if (DEQ_SIZE(core->delivery_cleanup_list) > 0) {

Review comment:
   I'm confused - this check shouldn't be necessary: if the length is zero the 
loop below will not execute anyway, since DEQ_HEAD returns 0 and so cleanup will be 0.
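
Both review points can be seen in a minimal drain loop: with a head-terminated list an explicit size check guards nothing, and reusing the single free routine avoids duplicating its body. The LIST_* macros below are simplified stand-ins for the dispatch DEQ_* macros, not the real implementations:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct item { struct item *next; } item_t;
typedef struct { item_t *head; size_t size; } list_t;

/* Simplified stand-ins for DEQ_SIZE / DEQ_HEAD / DEQ_REMOVE_HEAD. */
#define LIST_SIZE(l)         ((l).size)
#define LIST_HEAD(l)         ((l).head)
#define LIST_REMOVE_HEAD(l)  do { (l).head = (l).head->next; (l).size--; } while (0)

static void drain(list_t *list)
{
    /* No 'if (LIST_SIZE(*list) > 0)' wrapper is needed: when the list is
     * empty, LIST_HEAD() is already 0, so the loop body never runs. */
    item_t *item = LIST_HEAD(*list);
    while (item) {
        LIST_REMOVE_HEAD(*list);
        free(item);                      /* call the one free routine rather than inlining it */
        item = LIST_HEAD(*list);
    }
}

int main(void)
{
    list_t empty = { NULL, 0 };
    drain(&empty);                       /* safe: zero iterations */
    printf("drained, size=%zu\n", LIST_SIZE(empty));
    return 0;
}
```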




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323789#comment-17323789
 ] 

ASF GitHub Bot commented on PROTON-2370:


DreamPearl edited a comment on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821153641


   > My guess: you created a new file tag.hpp and forgot to commit it?
   
   I once created this file but later deleted it, then forgot to delete it from 
include. :'( Fixed now.
   
   > @DreamPearl It's probably a high time you fixed the CI failure 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   I am still working on fixing tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]
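
For orientation, the third resource above is the existing C-level accessor that the requested C++ accessor would sit on top of. A hedged sketch of reading the tag through that C API (assuming the proton-c headers are available; the helper name is made up, and the eventual C++ signature may differ):

```c
#include <proton/delivery.h>
#include <stdio.h>

/* Sketch only: prints the delivery tag of a delivery.  In proton-c the tag is
 * returned as a length-counted byte range (pn_delivery_tag_t is a pn_bytes_t),
 * which is the value a C++ delivery tag accessor would surface. */
static void print_delivery_tag(pn_delivery_t *delivery)
{
    pn_delivery_tag_t tag = pn_delivery_tag(delivery);
    printf("delivery tag (%zu bytes): ", tag.size);
    for (size_t i = 0; i < tag.size; i++)
        printf("%02x", (unsigned char) tag.start[i]);
    printf("\n");
}
```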



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-proton] DreamPearl edited a comment on pull request #309: [WIP] PROTON-2370: [cpp] An accessor for the delivery tag

2021-04-16 Thread GitBox


DreamPearl edited a comment on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821153641


   > My guess: you created a new file tag.hpp and forgot to commit it?
   
   I once created this file but later deleted it, then forgot to delete it from 
include. :'( Fixed now.
   
   > @DreamPearl It's probably a high time you fixed the CI failure 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   I am still working on fixing tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323788#comment-17323788
 ] 

ASF GitHub Bot commented on PROTON-2370:


DreamPearl commented on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821153641


   > My guess: you created a new file tag.hpp and forgot to commit it?
   
   I once created this file but later deleted it, then forgot to delete it from 
include. :'(
   
   > @DreamPearl It's probably a high time you fixed the CI failure 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   I am still working on fixing tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-proton] DreamPearl commented on pull request #309: [WIP] PROTON-2370: [cpp] An accessor for the delivery tag

2021-04-16 Thread GitBox


DreamPearl commented on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821153641


   > My guess: you created a new file tag.hpp and forgot to commit it?
   
   I once created this file but later deleted it, then forgot to delete it from 
include. :'(
   
   > @DreamPearl It's probably a high time you fixed the CI failure 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   I am still working on fixing tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread David Martin (Jira)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Martin updated QPIDJMS-535:
-
Description: 
This is not a production issue for us so tagging it as minor. Encountered 
during some destructive testing, killing brokers, clients, seeing that 
connections re-establish ok.

It sometimes happens when re-establishing a connection to an artemis broker 
hosted in kubernetes which has been recycled by deleting the pod.
{noformat}
2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[28] : Flow{nextIncomingId=1, 
incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
echo=false, properties=null}
2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
connection was forcibly closed by the remote host
2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) from 
thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:37.631 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(5):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.875 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.945 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:38.015 DEBUG 11152 --- [.84.10.4:30672]] proton.trace

[jira] [Commented] (DISPATCH-2050) fields for received delivery state are not relayed

2021-04-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323784#comment-17323784
 ] 

ASF subversion and git services commented on DISPATCH-2050:
---

Commit 1f4b9fba18ddfc04ab6d50fabcaf2c03aba4670a in qpid-dispatch's branch 
refs/heads/main from Gordon Sim
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=1f4b9fb ]

DISPATCH-2050: propagate section-number and section-offset back to sender


> fields for received delivery state are not relayed 
> ---
>
> Key: DISPATCH-2050
> URL: https://issues.apache.org/jira/browse/DISPATCH-2050
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Gordon Sim
>Priority: Major
>
> If a disposition is sent with the RECEIVED state and values for 
> section_number and section_offset, those field values are lost in the 
> disposition as relayed back to the sender.
>  
> E.g.
> {noformat}
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @open(16) 
> [container-id="558d2cff-3e5a-46c6-903e-47d21a4f0101", hostname="localhost", 
> channel-max=32767]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @begin(17) [next-outgoing-id=0, 
> incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> timeout=0, dynamic=false], target=@target(41) [address="test", durable=0, 
> timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @flow(19) [incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=0, 
> link-credit=10, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:  <- AMQP
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @open(16) 
> [container-id="Standalone_V+zMPIsTle+Urdn", max-frame-size=16384, 
> channel-max=32767, idle-time-out=8000, 
> offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> desired-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"qd.streaming-links"], 
> properties={:product="qpid-dispatch-router", :version="1.16.0-SNAPSHOT", 
> :"qd.conn-id"=5}]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @begin(17) [remote-channel=0, 
> next-outgoing-id=0, incoming-window=2147483647, outgoing-window=2147483647]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=0, role=false, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="test", 
> durable=0, expiry-policy=:"session-end", timeout=0, dynamic=false], 
> target=@target(41) [durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @attach(18) 
> [name="558d2cff-3e5a-46c6-903e-47d21a4f0101-test", handle=1, role=true, 
> snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, 
> expiry-policy=:"session-end", timeout=0, dynamic=false], target=@target(41) 
> [address="test", durable=0, expiry-policy=:"session-end", timeout=0, 
> dynamic=false], initial-delivery-count=0, max-message-size=0]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @flow(19) [next-incoming-id=0, 
> incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
> handle=1, delivery-count=0, link-credit=250, drain=false]
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @transfer(20) [handle=1, delivery-id=0, 
> delivery-tag=b"1", message-format=0] (25) "\x00SpE\x00SsE\x00Sw\xa1\x0cHello 
> World!"
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b"\x04\x00\x00\x00\x00\x00\x00\x00", message-format=0] (25) 
> "\x00SpE\x00SsE\x00Sw\xa1\x0cHello World!"
> Got delivery: Hello World!
> [0x55b6a79f6e50]: AMQP:FRAME:0 -> @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=10, section-offset=5]]
> [0x55b6a79f6e50]: AMQP:FRAME:0 <- @disposition(21) [role=true, first=0, 
> state=@received(35) [section-number=0, section-offset=0]]
> {noformat}
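
The requested behaviour amounts to copying the two Received-state fields from the disposition the router receives onto the one it relays back. A hedged sketch using the proton-c disposition accessors (assumed to behave as documented; this is an illustration, not the actual dispatch patch, and relay_received_state is a made-up name):

```c
#include <proton/delivery.h>
#include <proton/disposition.h>

/* Sketch: when the receiver reports a RECEIVED state, carry its
 * section-number and section-offset through to the disposition relayed
 * back to the sender instead of leaving them at 0/0. */
static void relay_received_state(pn_delivery_t *in, pn_delivery_t *out)
{
    if (pn_delivery_remote_state(in) == PN_RECEIVED) {
        pn_disposition_t *remote = pn_delivery_remote(in);
        pn_disposition_t *local  = pn_delivery_local(out);

        pn_disposition_set_section_number(local,
            pn_disposition_get_section_number(remote));
        pn_disposition_set_section_offset(local,
            pn_disposition_get_section_offset(remote));

        pn_delivery_update(out, PN_RECEIVED);   /* emit the disposition back toward the sender */
    }
}
```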



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-535) Looping failure after SASL negotiation with Artemis v2.16

2021-04-16 Thread David Martin (Jira)
David Martin created QPIDJMS-535:


 Summary: Looping failure after SASL negotiation with Artemis v2.16
 Key: QPIDJMS-535
 URL: https://issues.apache.org/jira/browse/QPIDJMS-535
 Project: Qpid JMS
  Issue Type: Bug
Affects Versions: 0.58.0
Reporter: David Martin


This is not a production issue for us so tagging it as minor. Encountered 
during some destructive testing, killing brokers, clients, seeing that 
connections re-establish ok.

It sometimes happens when re-establishing a connection to an artemis broker 
hosted in kubernetes which has been recycled by deleting the pod.
{noformat}
2021-04-09 11:48:13.940 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[28] : Flow{nextIncomingId=1, 
incomingWindow=2147483647, nextOutgoingId=1, outgoingWindow=2147483647, 
handle=0, deliveryCount=14000, linkCredit=0, available=null, drain=true, 
echo=false, properties=null}
2021-04-09 11:48:14.278  INFO 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.provider.amqp.AmqpProvider  : Transport failed: An existing 
connection was forcibly closed by the remote host
2021-04-09 11:48:14.324 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 18 thread-local buffer(s) from 
thread: AmqpProvider :(1):[amqp://10.84.10.4:30672]
2021-04-09 11:48:36.778 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:36.862 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:36.936 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:36.937 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:36.940 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(3):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.128 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.200 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:37.274 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:37.276 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(4):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.486 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.557 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : Handled outcome: SaslImpl 
[_outcome=PN_SASL_OK, state=PN_SASL_PASS, done=true, role=CLIENT]
2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : Open{ containerId='', hostname='null', 
maxFrameSize=131072, channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
properties={amqp:connection-establishment-failed=true}}
2021-04-09 11:48:37.629 DEBUG 11152 --- [.84.10.4:30672]] proton.trace  
   : IN: CH[0] : 
Close{error=Error{condition=amqp:invalid-field, description='null', 
info={invalid-field=container-id}}}
2021-04-09 11:48:37.631 DEBUG 11152 --- [.84.10.4:30672]] 
io.netty.buffer.PoolThreadCache  : Freed 3 thread-local buffer(s) from 
thread: AmqpProvider :(5):[amqp://10.84.10.4:30672]
2021-04-09 11:48:37.875 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.jms.sasl.SaslMechanismFinder: Best match for SASL auth was: 
SASL-PLAIN
2021-04-09 11:48:37.945 DEBUG 11152 --- [.84.10.4:30672]] 
o.a.qpid.proton.engine.impl.SaslImpl : 

[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323769#comment-17323769
 ] 

ASF GitHub Bot commented on PROTON-2370:


jiridanek commented on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821132679


   @DreamPearl It's probably high time you fixed the CI failure: 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   ```
   /home/runner/work/qpid-proton/qpid-proton/cpp/src/delivery.cpp:26:10: fatal 
error: proton/tag.hpp: No such file or directory
  26 | #include "proton/tag.hpp"
 |  ^~~~
   compilation terminated.
   make[2]: *** [cpp/CMakeFiles/qpid-proton-cpp.dir/build.make:244: 
cpp/CMakeFiles/qpid-proton-cpp.dir/src/delivery.cpp.o] Error 1
   make[1]: *** [CMakeFiles/Makefile2:2274: 
cpp/CMakeFiles/qpid-proton-cpp.dir/all] Error 2
   make: *** [Makefile:146: all] Error 2
   Error: Process completed with exit code 2.
   ```
   
   My guess: you created a new file tag.hpp and forgot to commit it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]
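
The steps quoted above call for a tag accessor on the C++ delivery class. As rough orientation only, a minimal sketch of how such an accessor could wrap the existing C call pn_delivery_tag() (linked in the resources) follows; the member name tag(), the proton::binary return type, and the placement in cpp/src/delivery.cpp are assumptions for illustration, not the API merged for this issue.

```cpp
// Hypothetical sketch, not the merged change: expose the raw delivery-tag
// bytes through the C++ binding by delegating to the C API.
//
// Assumed matching declaration in include/proton/delivery.hpp:
//     binary tag() const;

#include "proton/binary.hpp"
#include "proton/delivery.hpp"

#include <proton/delivery.h>   // pn_delivery_tag(), pn_delivery_tag_t

namespace proton {

binary delivery::tag() const {
    // pn_delivery_tag() returns a {size, start} view of the tag bytes
    // owned by the underlying pn_delivery_t.
    pn_delivery_tag_t dt = pn_delivery_tag(pn_object());
    return binary(dt.start, dt.start + dt.size);
}

} // namespace proton
```

A matching test would send a message with a known tag and assert that the receiver-side delivery reports the same bytes.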



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-proton] jiridanek commented on pull request #309: [WIP] PROTON-2370: [cpp] An accessor for the delivery tag

2021-04-16 Thread GitBox


jiridanek commented on pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#issuecomment-821132679


   @DreamPearl It's probably high time you fixed the CI failure: 
https://github.com/apache/qpid-proton/pull/309/checks?check_run_id=2362084471#step:11:185
   
   ```
   /home/runner/work/qpid-proton/qpid-proton/cpp/src/delivery.cpp:26:10: fatal 
error: proton/tag.hpp: No such file or directory
  26 | #include "proton/tag.hpp"
 |  ^~~~
   compilation terminated.
   make[2]: *** [cpp/CMakeFiles/qpid-proton-cpp.dir/build.make:244: 
cpp/CMakeFiles/qpid-proton-cpp.dir/src/delivery.cpp.o] Error 1
   make[1]: *** [CMakeFiles/Makefile2:2274: 
cpp/CMakeFiles/qpid-proton-cpp.dir/all] Error 2
   make: *** [Makefile:146: all] Error 2
   Error: Process completed with exit code 2.
   ```
   
   My guess: you created a new file tag.hpp and forgot to commit it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2370) [cpp] An accessor for the delivery tag

2021-04-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323766#comment-17323766
 ] 

ASF GitHub Bot commented on PROTON-2370:


DreamPearl commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r614789081



##
File path: cpp/src/sender.cpp
##
@@ -64,10 +64,17 @@ namespace {
 uint64_t tag_counter = 0;
 }
 
-tracker sender::send(const message &m) {
+tracker sender::send(const message &m,const binary &tag) {
+pn_delivery_t *dlv;
+if(!tag.empty())
+{
+dlv = pn_delivery(pn_object(), pn_dtag((std::string(tag)).c_str(), tag.size()));

Review comment:
   Done. Thanks!
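
For context on the change under review, here is one possible shape of the tag-selection logic as a standalone sketch, assuming the file-scope tag_counter shown in the diff context; make_delivery() and its parameter names are illustrative helpers, not code from the PR. The caller's tag is used when non-empty, otherwise a counter-based tag is generated as before.

```cpp
// Illustrative sketch only (not the PR's code): choose between an explicit,
// caller-supplied delivery tag and an auto-generated counter tag, then
// create the delivery with a single pn_delivery() call.
#include <cstdint>
#include <string>

#include <proton/delivery.h>   // pn_delivery(), pn_dtag()
#include <proton/link.h>       // pn_link_t

#include "proton/binary.hpp"   // proton::binary

namespace {
uint64_t tag_counter = 0;      // same role as the counter in sender.cpp
}

static pn_delivery_t* make_delivery(pn_link_t* lnk, const proton::binary& tag) {
    if (!tag.empty()) {
        // Explicit tag: hand its bytes to pn_dtag(); pn_delivery() copies them.
        const std::string bytes(tag.begin(), tag.end());
        return pn_delivery(lnk, pn_dtag(bytes.data(), bytes.size()));
    }
    // No tag supplied: fall back to the generated counter tag.
    uint64_t id = ++tag_counter;
    return pn_delivery(lnk, pn_dtag(reinterpret_cast<const char*>(&id), sizeof(id)));
}
```

One design note: funnelling both branches through a single pn_delivery() call means the delivery pointer is never declared uninitialized, whereas the diff as shown declares dlv first and assigns it conditionally.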




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] An accessor for the delivery tag
> --
>
> Key: PROTON-2370
> URL: https://issues.apache.org/jira/browse/PROTON-2370
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Justin Ross
>Assignee: Rakhi Kumari
>Priority: Major
>
> *Assignee: Rakhi*
> Steps:
>  * Get familiar with the AMQP idea of a delivery tag
>  * Add a delivery tag accessor to the C++ delivery class
>  * Document the accessor in the C++ header
>  * Add a test for the accessor to the C++ test suite
> Resources:
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-idp438000]
>  * 
> [http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#type-delivery-tag]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/c/include/proton/delivery.h#L106]
>  * 
> [https://github.com/apache/qpid-proton/blob/main/cpp/include/proton/delivery.hpp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-proton] DreamPearl commented on a change in pull request #309: [WIP] PROTON-2370: [cpp] An accessor for the delivery tag

2021-04-16 Thread GitBox


DreamPearl commented on a change in pull request #309:
URL: https://github.com/apache/qpid-proton/pull/309#discussion_r614789081



##
File path: cpp/src/sender.cpp
##
@@ -64,10 +64,17 @@ namespace {
 uint64_t tag_counter = 0;
 }
 
-tracker sender::send(const message &m) {
+tracker sender::send(const message &m,const binary &tag) {
+pn_delivery_t *dlv;
+if(!tag.empty())
+{
+dlv = pn_delivery(pn_object(), pn_dtag((std::string(tag)).c_str(), tag.size()));

Review comment:
   Done. Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org


