Hi Ying,

No problem, I'm using Outlook too... Please see my answers inline:

1. When you mention "this proposal", do you mean the single patch or the whole series? These features are actually separate and not dependent on each other... Anyway, there have been some issues with the lab here, so I have only tested this new feature on KVM/QEMU nodes using the virtio_net driver, with 4 vCPUs and a single TX/RX queue enabled. Also, the kernel is not patched for real-time yet... If you have a better environment, may I ask you to help verify this?
Anyhow, if I understood your concern from the last meeting correctly, it was mainly about the amount of packet retransmissions, which could panic the NIC or kernel, i.e. not really scalable? If so, in theory it should not be a problem, since we already have the following mechanisms to control it:

- the link window (e.g. max 50 outstanding/retransmitted packets);
- a retransmission restricting timer on individual packets (e.g. if a new retransmission request for the same packet arrives within 10ms, it is ignored);
- a dedicated priority queue for packet retransmissions (which is unlikely to get congested).

Or do you have any other concerns? If so, please clarify.

2. Yes, the commit message mentions the "bandwidth limit on broadcast", but it can be invisible to the user. One observable sign is probably in the broadcast statistics (hence the need for the other patch with the broadcast receive link stats): the user can see the sender making a lot of (re-)transmissions that don't really work, while the receiver gets only a few... OK, I will make this clear by repeating some performance tests.

3. Hmm, this was entirely my mistake... I dropped it when merging/separating the patches for this series ☹. In an earlier version of the patch, it was:

@@ -2425,7 +2426,7 @@ int tipc_link_bc_ack_rcv(struct tipc_link *r, u16 acked, u16 gap,
 		return 0;

 	trace_tipc_link_bc_ack(r, r->acked, acked, &l->transmq);
-	tipc_link_advance_transmq(l, r, acked, gap, ga, xmitq, &unused, &rc);
+	tipc_link_advance_transmq(l, r, acked, gap, ga, retrq, &unused, &rc);

 	tipc_link_advance_backlog(l, xmitq);
 	if (unlikely(!skb_queue_empty(&l->wakeupq)))

Thanks a lot for spotting this, I will update the series!
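As a side note on the "retransmission restricting timer" mentioned in (1) above, the idea is simply to rate-limit per-packet retransmissions. Here is a minimal userspace sketch of that check; the function and field names are hypothetical stand-ins, not the actual TIPC code (the real implementation works on skb control blocks in the kernel):

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h> /* only for the usage check below */

/* Interval within which repeated retransmit requests for the same
 * packet are ignored (the "retransmission restricting timer"). */
#define RETR_RESTRICT_MS 10

/* Hypothetical helper: returns true if a retransmission of this packet
 * may be sent now (and records the time), or false if the request
 * arrives inside the restriction window and must be ignored. */
static bool retrans_allowed(uint64_t now_ms, uint64_t *last_retr_ms)
{
	if (*last_retr_ms && now_ms - *last_retr_ms < RETR_RESTRICT_MS)
		return false;
	*last_retr_ms = now_ms;
	return true;
}
```

So even if a peer keeps NACK-ing the same sequence number, at most one retransmission per packet per 10ms window actually reaches the wire, which bounds the retransmit load independently of how many peers report a gap.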
BR/Tuong

-----Original Message-----
From: Xue, Ying <ying....@windriver.com>
Sent: Monday, April 6, 2020 1:20 PM
To: Tuong Tong Lien <tuong.t.l...@dektech.com.au>; jma...@redhat.com; ma...@donjonn.com; tipc-discussion@lists.sourceforge.net
Cc: tipc-dek <tipc-...@dektech.com.au>
Subject: RE: [PATCH RFC 3/4] tipc: enable broadcast retrans via unicast

Hi Tuong,

Sorry, I have to use the Outlook client to reply to your email, which will mess up the formatting a bit. Please see my comments below:

==
[Ying] 1. Did you ever conduct comprehensive verification of this proposal? What kind of test environment did you use? For example, how many physical TIPC nodes were involved in your testing? Did the NICs used in your testing support the multiqueue feature? How many cores were there on each physical TIPC machine? In addition, if possible, I suggest you try to enable an RT_PREEMPT kernel to measure what throughput results we would get.
==

In some environments, broadcast traffic is suppressed at high rates (i.e. a kind of bandwidth limit setting). When this is applied, TIPC broadcast can still run successfully. However, under high load, some packets will be dropped first and TIPC tries to retransmit them, but the packet retransmission is intentionally broadcast too, making things worse rather than helping. This commit enables broadcast retransmission via unicast, which retransmits packets only to the specific peer that has actually reported a gap, i.e. not broadcasting to all nodes in the cluster. This prevents the retransmissions from being suppressed, and also reduces overhead on the other peers due to duplicates, finally improving the overall TIPC broadcast performance.

Note: the functionality can be turned on/off via the sysctl file:

echo 1 > /proc/sys/net/tipc/bc_retruni
echo 0 > /proc/sys/net/tipc/bc_retruni

Default is '0', i.e. broadcast retransmission still works as usual.

==
[Ying] 2.
Actually I had a similar idea before, so I also think the broadcast performance might be significantly improved by this proposal. But as TIPC developers, we should explicitly tell users under what conditions they should enable this option and under what conditions they should disable it; otherwise, users have no idea at all when to enable or disable it. So, please give more performance data obtained under different test conditions. If this patch gives broadcast performance a clear benefit under every test condition, ideally we remove the option completely. Otherwise, we can at least tell users when to enable it.
==

Signed-off-by: Tuong Lien <tuong.t.l...@dektech.com.au>

 int tipc_link_bc_ack_rcv(struct tipc_link *r, u16 acked, u16 gap,
 			 struct tipc_gap_ack_blks *ga,
-			 struct sk_buff_head *xmitq)
+			 struct sk_buff_head *xmitq,
+			 struct sk_buff_head *retrq)
 {
 	struct tipc_link *l = r->bc_sndlink;
 	bool unused = false;

==
3. [Ying] Sorry, I feel a bit confused. A new "retrq" parameter was introduced, but I didn't find where it is used in this function. Can you please explain how the new parameter works?
==

Thanks,
Ying

@@ -2460,7 +2461,8 @@ int tipc_link_bc_nack_rcv(struct tipc_link *l, struct sk_buff *skb,
 		return 0;

 	if (dnode == tipc_own_addr(l->net)) {
-		rc = tipc_link_bc_ack_rcv(l, acked, to - acked, NULL, xmitq);
+		rc = tipc_link_bc_ack_rcv(l, acked, to - acked, NULL, xmitq,
+					  xmitq);
 		l->stats.recv_nacks++;
 		return rc;
 	}
diff --git a/net/tipc/link.h b/net/tipc/link.h
index 0a0fa7350722..4d0768cf91d5 100644
--- a/net/tipc/link.h
+++ b/net/tipc/link.h
@@ -147,7 +147,8 @@ u16 tipc_get_gap_ack_blks(struct tipc_gap_ack_blks **ga, struct tipc_link *l,
 			  struct tipc_msg *hdr, bool uc);
 int tipc_link_bc_ack_rcv(struct tipc_link *l, u16 acked, u16 gap,
 			 struct tipc_gap_ack_blks *ga,
-			 struct sk_buff_head *xmitq);
+			 struct sk_buff_head *xmitq,
+			 struct sk_buff_head *retrq);
 void tipc_link_build_bc_sync_msg(struct tipc_link *l,
 				 struct sk_buff_head *xmitq);
 void tipc_link_bc_init_rcv(struct tipc_link *l, struct tipc_msg *hdr);
diff --git a/net/tipc/node.c b/net/tipc/node.c
index eb6b62de81a7..917ad3920fac 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -1771,7 +1771,7 @@ static void tipc_node_bc_sync_rcv(struct tipc_node *n, struct tipc_msg *hdr,
 	struct tipc_link *ucl;
 	int rc;

-	rc = tipc_bcast_sync_rcv(n->net, n->bc_entry.link, hdr);
+	rc = tipc_bcast_sync_rcv(n->net, n->bc_entry.link, hdr, xmitq);

 	if (rc & TIPC_LINK_DOWN_EVT) {
 		tipc_node_reset_links(n);
diff --git a/net/tipc/sysctl.c b/net/tipc/sysctl.c
index 58ab3d6dcdce..97a6264a2993 100644
--- a/net/tipc/sysctl.c
+++ b/net/tipc/sysctl.c
@@ -36,7 +36,7 @@
 #include "core.h"
 #include "trace.h"
 #include "crypto.h"
-
+#include "bcast.h"
 #include <linux/sysctl.h>

 static struct ctl_table_header *tipc_ctl_hdr;
@@ -75,6 +75,13 @@ static struct ctl_table tipc_table[] = {
 		.extra1 = SYSCTL_ONE,
 	},
 #endif
+	{
+		.procname	= "bc_retruni",
+		.data		= &sysctl_tipc_bc_retruni,
+		.maxlen		= sizeof(sysctl_tipc_bc_retruni),
+		.mode		= 0644,
+		.proc_handler	= proc_doulongvec_minmax,
+	},
 	{}
 };
--
2.13.7
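For readers following the diff above: the behavioral change amounts to steering a broadcast retransmission into a per-peer unicast queue (retrq) when the sysctl toggle is set, instead of the shared broadcast transmit queue (xmitq). A trivial userspace sketch of that selection, using hypothetical stand-in types rather than the kernel's struct sk_buff_head:

```c
#include <stdbool.h>
#include <assert.h> /* only for the usage check below */

/* Hypothetical stand-in for a kernel packet queue. */
struct pktq { int id; };

/* With bc_retruni enabled, retransmissions go to the per-peer unicast
 * queue (retrq); otherwise they go to the broadcast queue (xmitq),
 * preserving the existing behavior when the toggle is '0'. */
static struct pktq *select_retr_queue(bool bc_retruni,
				      struct pktq *xmitq,
				      struct pktq *retrq)
{
	return bc_retruni ? retrq : xmitq;
}
```

Note that in the tipc_link_bc_nack_rcv() hunk above, the same xmitq is passed for both parameters, i.e. the NACK path keeps the original broadcast behavior regardless of the toggle.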
_______________________________________________ tipc-discussion mailing list tipc-discussion@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/tipc-discussion