From: Kent Overstreet <[email protected]>

It seems we need to be more forceful with the compiler on this one.
This is done for performance reasons only.
Signed-off-by: Kent Overstreet <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Pasha Tatashin <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
---
 mm/slub.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1bb2a93cf7b6..bc9f40889834 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2106,9 +2106,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 	return !kasan_slab_free(s, x, init);
 }
 
-static inline bool slab_free_freelist_hook(struct kmem_cache *s,
-					   void **head, void **tail,
-					   int *cnt)
+static __fastpath_inline
+bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
+			     int *cnt)
 {
 	void *object;
-- 
2.44.0.291.gc1ea87d7ee-goog
