On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <[email protected]> wrote:
>
> From: Kent Overstreet <[email protected]>
>
> It seems we need to be more forceful with the compiler on this one.
> This is done for performance reasons only.
>
> Signed-off-by: Kent Overstreet <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> Reviewed-by: Kees Cook <[email protected]>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..d31b03a8d9d5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>  	return !kasan_slab_free(s, x, init);
>  }
>
> -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
__fastpath_inline seems more appropriate to me here; it lets
CONFIG_SLUB_TINY prioritize memory footprint over performance.
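
For reference, mm/slub.c defines the helper roughly like this (a
sketch of the current definition, not the verbatim source):

/*
 * Inline fast-path helpers aggressively, unless CONFIG_SLUB_TINY asks
 * us to trade fast-path performance for a smaller kernel image.
 */
#ifndef CONFIG_SLUB_TINY
#define __fastpath_inline __always_inline
#else
#define __fastpath_inline
#endif

So with CONFIG_SLUB_TINY=y the compiler keeps the freedom to leave
slab_free_freelist_hook() out of line and save text size, while the
default build still gets the forced inlining this patch is after.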
>  					   void **head, void **tail,
>  					   int *cnt)
>  {
> --
> 2.44.0.rc0.258.g7320e95886-goog
>