On Tue, 2016-11-29 at 16:25 +0100, Andrey Konovalov wrote:
> This patch changes tun.c to call netif_receive_skb instead of netif_rx
> when a packet is received. The difference between the two is that netif_rx
> queues the packet into the backlog, while netif_receive_skb processes the
> packet in the current context.
> 
> This patch is required for syzkaller [1] to collect coverage from packet
> receive paths when a packet is received through tun (syzkaller collects
> coverage per process in the process context).
> 
> A similar patch was introduced back in 2010 [2, 3], but the author found
> out that the patch doesn't help with the task he had in mind (for cgroups
> to shape network traffic based on the original process) and decided not to
> go further with it. The main concern back then was about possible stack
> exhaustion with 4K stacks, but CONFIG_4KSTACKS was removed and stacks are
> 8K now.
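
For context, the call-site change being discussed sits at the tail of
tun_get_user(). Roughly (this is a sketch, not the exact diff from the
patch; the surrounding error handling and stats updates are omitted):

        /* Old path: hand the skb to the per-CPU backlog queue and let a
         * softirq process it later, outside the writer's context.
         */
        netif_rx_ni(skb);

        /* New path: run the normal receive path synchronously in the
         * writer's process context.  BHs are disabled so the RX code sees
         * the same execution environment it would get from softirq.
         */
        local_bh_disable();
        netif_receive_skb(skb);
        local_bh_enable();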

Acked-by: Eric Dumazet <eduma...@google.com>

We're using a similar patch written by Peter; let me copy part of his
changelog here, since the main motivation at that time was a speed
improvement:

commit 29aa09f47d43e93327a706cd835a37012ccc5b9e
Author: Peter Klausler <p...@google.com>
Date:   Fri Mar 29 17:08:02 2013 -0400

    net-tun: Add netif_rx_ni_immediate() variant to speed up tun/tap.
    
    Speed up packet reception from (i.e., writes to) a tun/tap
    device by adding an alternative netif_rx_ni_immediate()
    interface that invokes netif_receive_skb() immediately rather
    than enqueueing the packet in the backlog queue and then driving
    its processing with do_softirq().  Forced queueing as a consequence
    of an RPS CPU mask will still work as expected.
    
    This change speeds up my closed-loop single-stream tap/OVS benchmark
    by about 23%, from 700k packets/second to 867k packets/second.
    


