On Mon, Nov 09, 2020 at 12:33:46PM +0100, Vlastimil Babka wrote:
> On 11/8/20 7:57 AM, Mike Rapoport wrote:
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
> >     return false;
> >   }
> > -#ifdef CONFIG_DEBUG_PAGEALLOC
> >   static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
> >   {
> >     if (!is_debug_pagealloc_cache(cachep))
> >             return;
> 
> Hmm, I didn't notice earlier, sorry.
> The is_debug_pagealloc_cache() above includes a
> debug_pagealloc_enabled_static() check, so it should be fine to use
> __kernel_map_pages() directly below. Otherwise we generate two static key
> checks for the same key needlessly.

Ok, I'll revert the slab changes.
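i.e. keep slab_kernel_map() calling __kernel_map_pages() directly, roughly like this (a sketch of the reverted version, relying on is_debug_pagealloc_cache() already containing the debug_pagealloc_enabled_static() check):

```c
/*
 * is_debug_pagealloc_cache() already tests the
 * debug_pagealloc_enabled_static() static key, so calling
 * __kernel_map_pages() directly avoids evaluating the same
 * static key a second time via the debug_pagealloc_*_pages()
 * wrappers.
 */
static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;

	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
}
```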

> > -   kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> > +   if (map)
> > +           debug_pagealloc_map_pages(virt_to_page(objp),
> > +                                     cachep->size / PAGE_SIZE);
> > +   else
> > +           debug_pagealloc_unmap_pages(virt_to_page(objp),
> > +                                       cachep->size / PAGE_SIZE);
> >   }
> > -#else
> > -static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
> > -                           int map) {}
> > -
> > -#endif
> > -
> >   static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
> >   {
> >     int size = cachep->object_size;
> > @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
> >   #if DEBUG
> >     /*
> > -    * If we're going to use the generic kernel_map_pages()
> > +    * If we're going to use the generic debug_pagealloc_map_pages()
> >      * poisoning, then it's going to smash the contents of
> >      * the redzone and userword anyhow, so switch them off.
> >      */
> > 
> 

-- 
Sincerely yours,
Mike.
