> +/* link_bc_retr_eval() - check if the indicated range can be retransmitted now
> + * - Adjust permitted range if there is overlap with previous retransmission
> + */
> +static bool link_bc_retr_eval(struct tipc_link *l, u16 *from, u16 *to)
> +{
> +     unsigned long elapsed = jiffies_to_msecs(jiffies - l->prev_retr);
> +
> +     if (less(*to, *from))
> +             return false;
> +
> +     /* New retransmission request */
> +     if ((elapsed > TIPC_BC_RETR_LIMIT) ||
> +         less(*to, l->prev_from) || more(*from, l->prev_to)) {
> +             l->prev_from = *from;
> +             l->prev_to = *to;
> +             l->prev_retr = jiffies;
> +             return true;
> +     }
> +
>
> [Ying] In my understanding, the statement above should be changed as
> below:
>
> if ((elapsed > TIPC_BC_RETR_LIMIT) &&
>     (less(*to, l->prev_from) || more(*from, l->prev_to)))
>
> Otherwise, I think the retransmission rate may still be very high,
> especially when packet disorder happens frequently. As it is quite
> normal that the bcast retransmission ranges requested by different
> nodes are not the same, I suspect the rate is not well suppressed by
> the condition above.
>
> Maybe we can use the method that already works well in tipc_sk_enqueue()
> to limit the rate, like:

The condition indicates that the two retransmission requests are 
completely disjoint, e.g., 10-12 and 13-15.

[Ying]Understood, and it makes sense.

I don't see any reason why 
we should wait with retransmitting 13-15 in this case, as it is obvious 
that somebody is missing it, and we haven't retransmitted it before.

[Ying] In most of the cases I have observed on bcast, after a NACK is
submitted to ask the sender to retransmit the missed packets, the
requested packets arrive soon afterwards. But tragically the sender
still retransmits the requested packets through bcast later, wasting
network bandwidth. Especially when packets are received out of order,
this further worsens the situation, which may be one of the reasons why
we see so many duplicated bcast messages on the wire. To avoid this, it
is better to postpone the retransmission of messages on the sender a
bit; conversely, we can also restrain the retransmission requests on
the receiver side. In other words, when the receiver detects a gap for
the first time, it is better to delay a bit before submitting the NACK
message.

Moreover, I have a more aggressive idea: now that NACKs are delivered
through unicast, why not use unicast to retransmit bcast messages as
well? Missing packets rarely happen in practice, so when bcast messages
are retransmitted through bcast, the messages are redundant for most
nodes, which probably means it is unnecessary to send them through
bcast.

Regards,
Ying

But I do understand it can be problematic if we receive repeated
disjoint nacks, e.g., 10-12, 13-15, 10-12, 13-15, etc. I will run a
test and see if there is anything to gain from keeping a "history" of
the retransmissions, so that we never repeat the same packets inside
the same interval. I'll take a look.

///jon

>
> tipc_sk_enqueue()
> {
> ..
>          while (skb_queue_len(inputq)) {
>                  if (unlikely(time_after_eq(jiffies, time_limit)))
>                          return;
> ..
> }
>
> And we don’t need to consider the retransmission range.
>
> Regards,
> Ying
>
> +     /* Inside range of previous retransmit */
> +     if (!less(*from, l->prev_from) && !more(*to, l->prev_to))
> +             return false;
> +
> +     /* Fully or partially outside previous range => exclude overlap */
> +     if (less(*from, l->prev_from)) {
> +             *to = l->prev_from - 1;
> +             l->prev_from = *from;
> +     }
> +     if (more(*to, l->prev_to)) {
> +             *from = l->prev_to + 1;
> +             l->prev_to = *to;
> +     }
> +     l->prev_retr = jiffies;
> +     return true;
> +}
> +
>   /* tipc_link_bc_sync_rcv - update rcv link according to peer's send state
>    */
>   int tipc_link_bc_sync_rcv(struct tipc_link *l, struct tipc_msg *hdr,
>                         struct sk_buff_head *xmitq)
>   {
> +     struct tipc_link *snd_l = l->bc_sndlink;
>       u16 peers_snd_nxt = msg_bc_snd_nxt(hdr);
>       u16 from = msg_bcast_ack(hdr) + 1;
>       u16 to = from + msg_bc_gap(hdr) - 1;
> @@ -1613,14 +1655,14 @@ int tipc_link_bc_sync_rcv(struct tipc_link *l, struct tipc_msg *hdr,
>       if (!l->bc_peer_is_up)
>               return rc;
>   
> +     l->stats.recv_nacks++;
> +
>       /* Ignore if peers_snd_nxt goes beyond receive window */
>       if (more(peers_snd_nxt, l->rcv_nxt + l->window))
>               return rc;
>   
> -     if (!less(to, from)) {
> -             rc = tipc_link_retrans(l->bc_sndlink, from, to, xmitq);
> -             l->stats.recv_nacks++;
> -     }
> +     if (link_bc_retr_eval(snd_l, &from, &to))
> +             rc = tipc_link_retrans(snd_l, from, to, xmitq);
>   
>       l->snd_nxt = peers_snd_nxt;
>       if (link_bc_rcv_gap(l))
> --
> 2.7.4
>

------------------------------------------------------------------------------
_______________________________________________
tipc-discussion mailing list
tipc-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tipc-discussion
