From: Davide Caratti <dcara...@redhat.com>
Date: Wed, 11 Jul 2018 16:04:48 +0200

> The data path of act_skbedit can be faster if we avoid using spinlocks:
>  - patch 1 converts act_skbedit statistics to use per-cpu counters
>  - patch 2 lets act_skbedit use RCU to read/update its configuration
>    (a rough sketch of both patterns is shown below)
> 
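> For reference, a minimal sketch of the two patterns (per-cpu counters for
> the statistics, RCU for the configuration). Struct layout and helper names
> follow the usual act_* conventions, but this is illustrative code under
> those assumptions, not the literal patch:
> 
>   #include <linux/skbuff.h>
>   #include <linux/rcupdate.h>
>   #include <net/act_api.h>
>   #include <net/sch_generic.h>
>   #include <linux/tc_act/tc_skbedit.h>
> 
>   /* the configuration lives in a separately allocated, RCU-managed struct */
>   struct skbedit_params_sketch {
>           u32             flags;
>           u32             priority;
>           u32             mark;
>           u32             mask;
>           u16             queue_mapping;
>           u16             ptype;
>           struct rcu_head rcu;
>   };
> 
>   struct skbedit_sketch {
>           struct tc_action                    common;
>           struct skbedit_params_sketch __rcu *params;
>   };
> 
>   /* fast path: no per-action spinlock. Stats go to per-cpu counters (the
>    * action must be created with cpustats enabled in tcf_idr_create()), and
>    * the configuration is read under the RCU-bh read side that softirq
>    * context already provides.
>    */
>   static int skbedit_act_sketch(struct sk_buff *skb, const struct tc_action *a,
>                                 struct tcf_result *res)
>   {
>           struct skbedit_sketch *d = (struct skbedit_sketch *)a;
>           struct skbedit_params_sketch *params;
> 
>           tcf_lastuse_update(&d->tcf_tm);
>           bstats_cpu_update(this_cpu_ptr(d->common.cpu_bstats), skb);
> 
>           params = rcu_dereference_bh(d->params);
>           if (params->flags & SKBEDIT_F_PRIORITY)
>                   skb->priority = params->priority;
>           /* ... mark, ptype, queue_mapping handled the same way ... */
> 
>           return READ_ONCE(d->tcf_action);
>   }
> 
>   /* control path: writers still serialize on the action lock, publish the
>    * new parameters with rcu_assign_pointer() and free the old copy after a
>    * grace period.
>    */
>   static void skbedit_replace_params_sketch(struct skbedit_sketch *d,
>                                             struct skbedit_params_sketch *new)
>   {
>           struct skbedit_params_sketch *old;
> 
>           spin_lock_bh(&d->tcf_lock);
>           old = rcu_dereference_protected(d->params,
>                                           lockdep_is_held(&d->tcf_lock));
>           rcu_assign_pointer(d->params, new);
>           spin_unlock_bh(&d->tcf_lock);
>           if (old)
>                   kfree_rcu(old, rcu);
>   }
> 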
> test procedure (using pktgen from https://github.com/netoptimizer):
> 
>  # ip link add name eth1 type dummy
>  # ip link set dev eth1 up
>  # tc qdisc add dev eth1 clsact
>  # tc filter add dev eth1 egress matchall action skbedit priority c1a0:c1a0
>  # for c in 1 2 4 ; do
>  > ./pktgen_bench_xmit_mode_queue_xmit.sh -v -s 64 -t $c -n 5000000 -i eth1
>  > done
> 
> test results (avg. pps/thread)
> 
>   $c | before patch |  after patch | improvement
>  ----+--------------+--------------+------------
>    1 | 3917464 ± 3% | 4000458 ± 3% | within noise
>    2 | 3455367 ± 4% | 3953076 ± 1% |        +14%
>    4 | 2496594 ± 2% | 3801123 ± 3% |        +52%
> 
> v2: rebased on latest net-next

Series applied, thank you.
