On Mon, Aug 10, 2020 at 3:18 PM Xunlei Pang <xlp...@linux.alibaba.com> wrote:
>
> v1->v2:
> - Improved changelog and variable naming for PATCH 1~2.
> - PATCH 3 adds a per-cpu counter to avoid a performance regression
>   in concurrent __slab_free().
>
> [Testing]
> On my 32-cpu 2-socket physical machine:
> Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
> perf stat --null --repeat 10 -- hackbench 20 thread 20000
>
> == original, not patched
>  Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       19.211637055 seconds time elapsed  ( +-  0.57% )
>
> == patched with patch1~2
>  Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       21.731833146 seconds time elapsed  ( +-  0.17% )
>
> == patched with patch1~3
>  Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       19.112106847 seconds time elapsed  ( +-  0.64% )
>
>
> Xunlei Pang (3):
>   mm/slub: Introduce two counters for partial objects
>   mm/slub: Get rid of count_partial()
>   mm/slub: Use percpu partial free counter
>
>  mm/slab.h |   2 +
>  mm/slub.c | 124 +++++++++++++++++++++++++++++++++++++++++++-------------------
>  2 files changed, 89 insertions(+), 37 deletions(-)

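(For readers without the patches at hand: the per-cpu counter in
patch 3 presumably amounts to something like the sketch below. The
names are made up and the real patch will differ in detail; only the
shape of the idea matters here.)

/* Sketch: per-cpu free-object counter, names hypothetical.
 * Writers on the __slab_free() fast path touch only their own
 * CPU's slot; the rare readers pay for summing across all CPUs. */
struct kmem_cache_node {
	/* ... existing fields ... */
	unsigned long __percpu *partial_free_objs; /* from alloc_percpu() */
};

static inline void partial_free_add(struct kmem_cache_node *n, long delta)
{
	this_cpu_add(*n->partial_free_objs, delta);
}

static unsigned long partial_free_approx(struct kmem_cache_node *n)
{
	unsigned long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += *per_cpu_ptr(n->partial_free_objs, cpu);
	return sum; /* approximate: races with concurrent updates */
}
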
We probably need to wrap the counters under CONFIG_SLUB_DEBUG because
AFAICT all the code that uses them is also wrapped under it.
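
For concreteness, that could look roughly like the following in
mm/slab.h, hanging the new counters off the existing
CONFIG_SLUB_DEBUG block so they compile away together with the
slabinfo/sysfs code that reads them (field names are illustrative,
not necessarily the ones in the series):

struct kmem_cache_node {
	spinlock_t list_lock;
	/* ... SLAB fields elided ... */
#ifdef CONFIG_SLUB
	unsigned long nr_partial;
	struct list_head partial;
#ifdef CONFIG_SLUB_DEBUG
	atomic_long_t nr_slabs;
	atomic_long_t total_objects;
	struct list_head full;
	atomic_long_t partial_free_objs;	/* new */
	atomic_long_t partial_total_objs;	/* new */
#endif
#endif
};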

An alternative approach for this patch would be to somehow make the
lock in count_partial() more granular, but I don't know how feasible
that actually is.
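
For reference, count_partial() at the time of this series looks
roughly like this; the whole list walk happens under the per-node
list_lock, which is why it gets expensive on long partial lists:

static unsigned long count_partial(struct kmem_cache_node *n,
					int (*get_count)(struct page *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct page *page;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}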

Anyway, I am OK with this approach:

Reviewed-by: Pekka Enberg <penb...@kernel.org>

You still need to convince Christoph, though, because he had
objections to this approach.

- Pekka
