Background: cpumap moves the SKB allocation out of the driver code; instead it allocates the SKB on the remote CPU and invokes the regular kernel network stack with the newly allocated SKB.
The idea behind the XDP CPU redirect feature is to use XDP as a load-balancer step in front of the regular kernel network stack. But the current sample code does not provide a good example of this. Part of the reason is that I have implemented this as part of the Suricata XDP load-balancer, and this is the most frequent feature request I get. This patchset implements the same XDP load-balancing that Suricata does: a symmetric hash based on the IP-pairs + L4-protocol.

The expected setup for this use-case is to reduce the number of NIC RX queues via ethtool (as XDP can handle more packets per core), and via smp_affinity assign these RX queues to a set of CPUs, which will be handling RX packets. The CPUs that run the regular network stack are supplied to the sample xdp_redirect_cpu tool by specifying the --cpu option multiple times on the cmdline.

I do note that cpumap SKB creation is not feature complete yet, and more work is coming. E.g. given GRO is not implemented yet, do expect TCP workloads to be slower. My measurements do indicate UDP workloads are faster.

---

Jesper Dangaard Brouer (2):
      samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash
      samples/bpf: xdp_redirect_cpu load balance like Suricata

 samples/bpf/hash_func01.h           |   55 ++++++++++++++++++++
 samples/bpf/xdp_redirect_cpu_kern.c |  103 +++++++++++++++++++++++++++++++++++
 samples/bpf/xdp_redirect_cpu_user.c |    4 +
 3 files changed, 160 insertions(+), 2 deletions(-)
 create mode 100644 samples/bpf/hash_func01.h

--
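The expected setup described in the cover letter might look like the sketch below. The device name, IRQ numbers, and CPU ids are made up for illustration, and only the --cpu option is confirmed by the cover letter; other xdp_redirect_cpu options may differ by kernel version.

```shell
# Reduce the NIC to 2 RX queues; with XDP each core can handle more packets.
ethtool -L eth0 combined 2

# Pin the two RX-queue IRQs to CPUs 0 and 1 (IRQ numbers are hypothetical;
# look up the real ones for your NIC in /proc/interrupts).
echo 0 > /proc/irq/120/smp_affinity_list
echo 1 > /proc/irq/121/smp_affinity_list

# Let CPUs 2 and 3 run the regular network stack on the redirected packets,
# by passing --cpu multiple times to the sample tool.
./xdp_redirect_cpu --dev eth0 --cpu 2 --cpu 3
```

With this split, CPUs 0-1 only run the XDP program and the redirect, while SKB allocation and the full stack run on CPUs 2-3.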
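The key property of the load-balancing step is that the hash is symmetric: both directions of a flow must land on the same CPU. The following minimal userspace sketch illustrates the idea; it uses a simple stand-in mixer rather than the SuperFastHash from hash_func01.h, and the field layout is illustrative, not the patch's actual BPF code.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in mixer; the patchset itself uses Paul Hsieh's
 * SuperFastHash (samples/bpf/hash_func01.h). */
static uint32_t mix32(uint32_t h)
{
	h ^= h >> 16;
	h *= 0x7feb352d;
	h ^= h >> 15;
	h *= 0x846ca68b;
	h ^= h >> 16;
	return h;
}

/* Symmetric flow hash: XOR-ing the IP pair and the port pair makes the
 * input identical for (src,dst) and (dst,src), so both directions of a
 * flow pick the same CPU. The L4 protocol is folded in as well. */
static uint32_t pick_cpu(uint32_t saddr, uint32_t daddr,
			 uint16_t sport, uint16_t dport,
			 uint8_t l4proto, uint32_t ncpus)
{
	uint32_t h = (saddr ^ daddr)
		   ^ ((uint32_t)(sport ^ dport) << 8)
		   ^ l4proto;
	return mix32(h) % ncpus;
}

int main(void)
{
	uint32_t a = 0x0a000001, b = 0x0a000002; /* 10.0.0.1 / 10.0.0.2 */
	uint32_t fwd = pick_cpu(a, b, 40000, 80, 6, 4); /* client -> server */
	uint32_t rev = pick_cpu(b, a, 80, 40000, 6, 4); /* server -> client */

	assert(fwd == rev); /* both directions map to the same CPU */
	printf("cpu=%u\n", fwd);
	return 0;
}
```

A plain XOR fold like this is only a sketch: it collapses some distinct flows onto the same input (e.g. mirrored port pairs), which is part of why the real patch runs the folded fields through a proper hash function before taking the modulo.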