On Thu, Sep 15, 2016 at 04:09:26PM -0700, Eric Dumazet wrote:
> On Thu, 2016-09-15 at 19:11 -0300, Thadeu Lima de Souza Cascardo wrote:
> > Instead of using flow stats per NUMA node, use them per CPU. When using
> > megaflows, the stats lock can be a bottleneck in scalability.
> > 
> > On an E5-2690 12-core system, usual throughput went from ~4Mpps to
> > ~15Mpps when forwarding between two 40GbE ports with a single flow
> > configured on the datapath.
> > 
> > This has been tested on a system with possible CPUs 0-7,16-23. After
> > module removal, there was no corruption of the slab cache.
> > 
> > Signed-off-by: Thadeu Lima de Souza Cascardo <casca...@redhat.com>
> > Cc: pravin shelar <pshe...@ovn.org>
> > ---
> > 
> > +	/* We open code this to make sure cpu 0 is always considered */
> > +	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpu_possible_mask))
> > +		if (flow->stats[cpu])
> >  			kmem_cache_free(flow_stats_cache,
> > -					(struct flow_stats __force *)flow->stats[node]);
> > +					(struct flow_stats __force *)flow->stats[cpu]);
> >  	kmem_cache_free(flow_cache, flow);
> >  }
> > 
> > @@ -757,7 +749,7 @@ int ovs_flow_init(void)
> >  	BUILD_BUG_ON(sizeof(struct sw_flow_key) % sizeof(long));
> > 
> >  	flow_cache = kmem_cache_create("sw_flow", sizeof(struct sw_flow)
> > -				       + (nr_node_ids
> > +				       + (nr_cpu_ids
> >  					  * sizeof(struct flow_stats *)),
> >  				       0, 0, NULL);
> >  	if (flow_cache == NULL)
> 
> Well, if you switch to percpu stats, better use normal
> alloc_percpu(struct flow_stats)
> 
> The code was dealing with per node allocation so could not use existing
> helper.
> 
> No need to keep this forever.
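Just to make the comparison concrete, here is a rough sketch of the two
allocation schemes being weighed: the alloc_percpu() route versus the
kmem_cache-backed per-CPU pointer array the patch keeps. This is schematic
only, not code from the patch: the helper names are made up, error handling
is trimmed, and the lazy-allocation detail is my reading of the update path;
struct flow_stats, flow_stats_cache and the stats[] array do come from the
patch above.

#include <linux/slab.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>
#include <linux/topology.h>
#include <linux/jiffies.h>

struct flow_stats {
	u64 packet_count;
	u64 byte_count;
	unsigned long used;
	spinlock_t lock;
};

extern struct kmem_cache *flow_stats_cache;

/* Scheme A (the suggestion): one percpu region per flow.  Every flow
 * creation then goes through the percpu allocator and its global
 * locking, which is where the contention shows up when many flows are
 * set up concurrently.
 */
static struct flow_stats __percpu *stats_alloc_percpu_sketch(void)
{
	return alloc_percpu(struct flow_stats);
}

/* Scheme B (what the patch keeps): the flow carries nr_cpu_ids stats
 * pointers; slot 0 is filled when the flow is created, from a
 * kmem_cache whose fast path is per CPU.  Another CPU fills its own
 * slot lazily the first time it updates the flow, and falls back to
 * slot 0 if that allocation fails.  Caller holds rcu_read_lock().
 */
static void stats_update_sketch(struct flow_stats __rcu **stats,
				unsigned int cpu, unsigned int len)
{
	struct flow_stats *s = rcu_dereference(stats[cpu]);

	if (unlikely(!s)) {
		s = kmem_cache_alloc_node(flow_stats_cache,
					  GFP_NOWAIT | __GFP_NOWARN,
					  numa_node_id());
		if (s) {
			s->packet_count = 0;
			s->byte_count = 0;
			s->used = 0;
			spin_lock_init(&s->lock);
			rcu_assign_pointer(stats[cpu], s);
		} else {
			s = rcu_dereference(stats[0]); /* always there */
		}
	}

	spin_lock(&s->lock);
	s->used = jiffies;
	s->packet_count++;
	s->byte_count += len;
	spin_unlock(&s->lock);
}

As far as I can tell, slot 0 being allocated together with the flow is what
lets a CPU whose lazy allocation fails still have somewhere to account the
packet.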
The problem is that alloc_percpu uses a global spinlock, and that hurts some
OVS workloads that create lots of flows, as described in commit
9ac56358dec1a5aa7f4275a42971f55fad1f7f35 ("datapath: Per NUMA node flow
stats."). That problem does not come back with this version, because flow
allocation here does not suffer from the scalability problem that
alloc_percpu would bring.

Cascardo.
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev