On 01/01/2017 01:31, David Miller wrote:
From: Jason Wang <jasow...@redhat.com>
Date: Fri, 30 Dec 2016 13:20:51 +0800

@@ -1283,10 +1314,15 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
        skb_probe_transport_header(skb, 0);
        rxhash = skb_get_hash(skb);
+
 #ifndef CONFIG_4KSTACKS
-       local_bh_disable();
-       netif_receive_skb(skb);
-       local_bh_enable();
+       if (!rx_batched) {
+               local_bh_disable();
+               netif_receive_skb(skb);
+               local_bh_enable();
+       } else {
+               tun_rx_batched(tfile, skb, more);
+       }
 #else
        netif_rx_ni(skb);
 #endif
If rx_batched has been set, and we are talking to clients not using
this new MSG_MORE facility (or such clients don't have multiple TX
packets to send to you, thus MSG_MORE is often clear), you are doing a
lot more work per-packet than the existing code.

You take the queue lock, you test state, you splice into a local queue
on the stack, then you walk that local stack queue to submit just one
SKB to netif_receive_skb().

I think you want to streamline this sequence in such cases so that the
cost before and after is similar if not equivalent.
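
For illustration, the per-packet sequence described above looks roughly like
the following. This is a sketch of the pattern only, not the patch itself;
the tfile->sk.sk_write_queue field and the rx_batched threshold test are
assumptions:

static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
                           bool more)
{
        struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
        struct sk_buff_head process_queue;
        bool flush;

        __skb_queue_head_init(&process_queue);

        /* take the queue lock and test state for every packet */
        spin_lock(&queue->lock);
        __skb_queue_tail(queue, skb);
        flush = !more || skb_queue_len(queue) >= rx_batched;
        /* splice everything queued so far onto a queue local to this call */
        if (flush)
                skb_queue_splice_tail_init(queue, &process_queue);
        spin_unlock(&queue->lock);

        if (flush) {
                local_bh_disable();
                /* walk the local queue, often to submit a single skb */
                while ((skb = __skb_dequeue(&process_queue)) != NULL)
                        netif_receive_skb(skb);
                local_bh_enable();
        }
}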

Yes, so I will add a skb_queue_empty() check: when MSG_MORE is not set and the queue is empty, call netif_receive_skb() immediately. This avoids the wasted effort in that case.
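
Roughly the following shape; this is a sketch of the idea only, not the final
patch, and tun_rx_batched_slow() is a hypothetical placeholder for the
existing queue-and-splice path:

static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
                           bool more)
{
        struct sk_buff_head *queue = &tfile->sk.sk_write_queue;

        /* Fast path: MSG_MORE is clear and nothing is pending, so hand the
         * skb to the stack directly, with the same cost as the old code.
         */
        if (!more && skb_queue_empty(queue)) {
                local_bh_disable();
                netif_receive_skb(skb);
                local_bh_enable();
                return;
        }

        /* Otherwise queue and flush under the queue lock as before;
         * tun_rx_batched_slow() stands in for that path here.
         */
        tun_rx_batched_slow(tfile, skb, more);
}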

Thanks
