On 3/6/24 19:24, Suren Baghdasaryan wrote:
> From: Kent Overstreet <[email protected]>
> 
> It seems we need to be more forceful with the compiler on this one.
> This is done for performance reasons only.
> 
> Signed-off-by: Kent Overstreet <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> Reviewed-by: Kees Cook <[email protected]>
> Reviewed-by: Pasha Tatashin <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
>  mm/slub.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..0f3369f6188b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2121,9 +2121,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>       return !kasan_slab_free(s, x, init);
>  }
>  
> -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> -                                        void **head, void **tail,
> -                                        int *cnt)
> +static __fastpath_inline
> +bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
> +                          int *cnt)
>  {
>  
>       void *object;
