From: Abel Wu
The two conditions are mutually exclusive, and GCC will optimise this
into an if-else-like pattern. Given that the majority of free_slowpath
calls are free_frozen, let's provide a hint to the compiler.
Tested with `perf bench sched messaging -g 20 -l 40`, executed 10 times
after reboot.
From: Abel Wu
Hide the cpu-partial-related sysfs entries when !CONFIG_SLUB_CPU_PARTIAL
to avoid confusion.
Signed-off-by: Abel Wu
---
 mm/slub.c | 56 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index
From: Abel Wu
The ALLOC_SLOWPATH statistic is currently missing for bulk allocations.
Fix it by accounting the statistic in the allocation slow path.
Signed-off-by: Abel Wu
---
mm/slub.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index df93a5a0e9a4..5d89e4064f83
From: Abel Wu
The commit below is incomplete, as it didn't handle the add_full() part.
commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
remove_full()")
This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(),
since that should be the only context in which we need the list_lock
for add_full().
Signed-off-by: Abel Wu
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c