Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-11 Thread Paul E. McKenney
On Fri, Dec 11, 2020 at 03:58:51PM +0900, Joonsoo Kim wrote:
> On Thu, Dec 10, 2020 at 07:42:27PM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> > > On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > > > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > > > > From: "Paul E. McKenney" 
> > 
> > [ . . . ]
> > 
> > > > We can get some infos even if CONFIG_SLUB_DEBUG isn't defined.
> > > > Please move them out.
> > > 
> > > I guess since I worry about CONFIG_MMU=n it only makes sense to also
> > > worry about CONFIG_SLUB_DEBUG=n.  Fix update.
> > 
> > Like this?  (Patch on top of the series, to be folded into the first one.)
> 
> Yes!
> 
> Acked-by: Joonsoo Kim 

Applied, and thank you again for the review and feedback!

Suggestions on where to route these?  Left to my own devices, they
go via -rcu in the v5.12 merge window.

Thanx, Paul


Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread Joonsoo Kim
On Thu, Dec 10, 2020 at 07:42:27PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> > On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > > > From: "Paul E. McKenney" 
> 
> [ . . . ]
> 
> > > We can get some infos even if CONFIG_SLUB_DEBUG isn't defined.
> > > Please move them out.
> > 
> > I guess since I worry about CONFIG_MMU=n it only makes sense to also
> > worry about CONFIG_SLUB_DEBUG=n.  Fix update.
> 
> Like this?  (Patch on top of the series, to be folded into the first one.)

Yes!

Acked-by: Joonsoo Kim 

Thanks.


Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread Joonsoo Kim
On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney" 
> > > 
> > > [ . . . ]

Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread Paul E. McKenney
On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney" 

[ . . . ]

> > We can get some infos even if CONFIG_SLUB_DEBUG isn't defined.
> > Please move them out.
> 
> I guess since I worry about CONFIG_MMU=n it only makes sense to also
> worry about CONFIG_SLUB_DEBUG=n.  Fix update.

Like this?  (Patch on top of the series, to be folded into the first one.)

Thanx, Paul



diff --git a/mm/slub.c b/mm/slub.c
index 0459d2a..abf43f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3920,21 +3920,24 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
 {
-#ifdef CONFIG_SLUB_DEBUG
 	void *base;
-	int i;
+	int __maybe_unused i;
 	unsigned int objnr;
 	void *objp;
 	void *objp0;
 	struct kmem_cache *s = page->slab_cache;
-	struct track *trackp;
+	struct track __maybe_unused *trackp;
 
 	kpp->kp_ptr = object;
 	kpp->kp_page = page;
 	kpp->kp_slab_cache = s;
 	base = page_address(page);
 	objp0 = kasan_reset_tag(object);
+#ifdef CONFIG_SLUB_DEBUG
 	objp = restore_red_left(s, objp0);
+#else
+	objp = objp0;
+#endif
 	objnr = obj_to_index(s, page, objp);
 	kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
 	objp = base + s->size * objnr;
@@ -3942,6 +3945,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
 	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
 	    !(s->flags & SLAB_STORE_USER))
 		return;
+#ifdef CONFIG_SLUB_DEBUG
 	trackp = get_track(s, objp, TRACK_ALLOC);
 	kpp->kp_ret = (void *)trackp->addr;
 #ifdef CONFIG_STACKTRACE

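The shape of this fix is worth spelling out: rather than carrying two
copies of kmem_obj_info(), the function keeps a single body, confines the
debug-only statements behind #ifdef CONFIG_SLUB_DEBUG, and marks variables
referenced only in those sections as __maybe_unused so that a
CONFIG_SLUB_DEBUG=n build compiles without unused-variable warnings.
A minimal standalone sketch of the pattern, using toy names rather than
the kernel code (assumes GCC or Clang for the attribute):

#include <stdio.h>

#define CONFIG_DEBUG 1	/* comment out to mimic a =n build */
#define __maybe_unused __attribute__((unused))

struct info { void *ptr; const char *origin; };

static void fill_info(struct info *inf, void *obj)
{
	/* Referenced only inside the #ifdef below, hence __maybe_unused. */
	const char __maybe_unused *dbg_origin = "alloc-site";

	inf->ptr = obj;
	inf->origin = "unknown";
#ifdef CONFIG_DEBUG
	inf->origin = dbg_origin;	/* debug-only enrichment */
#endif
}

int main(void)
{
	struct info inf;
	int x = 0;

	fill_info(&inf, &x);
	printf("ptr=%p origin=%s\n", inf.ptr, inf.origin);
	return 0;
}

Toggling the CONFIG_DEBUG define mimics flipping the Kconfig option, while
the single function body keeps both configurations in sync.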

Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread Paul E. McKenney
On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > From: "Paul E. McKenney" 
> > 
> > [ . . . ]
> 
> We can get some infos even if CONFIG_SLUB_DEBUG isn't defined.
> Please move them out.

I guess since I worry about CONFIG_MMU=n it only makes sense to also
worry about CONFIG_SLUB_DEBUG=n.  Fix update.

Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread Joonsoo Kim
On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> From: "Paul E. McKenney" 
> 
> [ . . . ]

We can get some infos even if CONFIG_SLUB_DEBUG isn't defined.
Please move them out.

[PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block

2020-12-10 Thread paulmck
From: "Paul E. McKenney" 

There are kernel facilities, such as per-CPU reference counts, that emit
error messages from generic handlers or callbacks, and those messages are
unenlightening.  In the case of per-CPU reference-count underflow, this
is not a problem when creating a new use of this facility, because in that
case the bug is almost certainly in the code implementing that new use.
However, trouble arises when deploying across many systems, which might
exercise corner cases that were not seen during development and testing.
Here, it would be really nice to get some kind of hint as to which of
several uses caused the underflow.

This commit therefore exposes a mem_dump_obj() function that takes
a pointer to memory (which must still be allocated if it has been
dynamically allocated) and prints available information on where that
memory came from.  This pointer can reference the middle of the block as
well as the beginning of the block, as needed by things like RCU callback
functions and timer handlers that might not know where the beginning of
the memory block is.  These functions and handlers can use mem_dump_obj()
to print out better hints as to where the problem might lie.

The information printed can depend on kernel configuration.  For example,
the allocation return address can be printed only for slab and slub,
and even then only when the necessary debug has been enabled.  For slab,
build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
to the next power of two or pass the SLAB_STORE_USER flag when creating
the kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
to enable printing of the allocation-time stack trace.

Cc: Christoph Lameter 
Cc: Pekka Enberg 
Cc: David Rientjes 
Cc: Joonsoo Kim 
Cc: Andrew Morton 
Cc: 
Reported-by: Andrii Nakryiko 
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
[ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
[ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
Signed-off-by: Paul E. McKenney 
---
 include/linux/mm.h   |  2 ++
 include/linux/slab.h |  2 ++
 mm/slab.c            | 20 ++++++++++
 mm/slab.h            | 12 ++++++
 mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slob.c            |  6 ++++
 mm/slub.c            | 36 ++++++++++++++++++++
 mm/util.c            | 24 ++++++++++++++
 8 files changed, 176 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe..1eea266 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
 
 extern int sysctl_nr_trim_pages;
 
+void mem_dump_obj(void *object);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index dd6897f..169b511 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -186,6 +186,8 @@ void kfree(const void *);
 void kfree_sensitive(const void *);
 size_t __ksize(const void *);
 size_t ksize(const void *);
+bool kmem_valid_obj(void *object);
+void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
diff --git a/mm/slab.c b/mm/slab.c
index b111356..66f00ad 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3633,6 +3633,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 #endif /* CONFIG_NUMA */
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+   struct kmem_cache *cachep;
+   unsigned int objnr;
+   void *objp;
+
+   kpp->kp_ptr = object;
+   kpp->kp_page = page;
+   cachep = page->slab_cache;
+   kpp->kp_slab_cache = cachep;
+   objp = object - obj_offset(cachep);
+   kpp->kp_data_offset = obj_offset(cachep);
+   page = virt_to_head_page(objp);
+   objnr = obj_to_index(cachep, page, objp);
+   objp = index_to_obj(cachep, page, objnr);
+   kpp->kp_objp = objp;
+   if (DEBUG && cachep->flags & SLAB_STORE_USER)
+           kpp->kp_ret = *dbg_userword(cachep, objp);
+}
+
 /**
  * __do_kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab.h b/mm/slab.h
index 6d7c6a5..0dc705b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -630,4 +630,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
return false;
 }
 
+#define KS_ADDRS_COUNT 16
+struct kmem_obj_info {
+   void *kp_ptr;
+   struct page *kp_page;
+   void *kp_objp;
+   unsigned long kp_data_offset;
+   struct kmem_cache *kp_slab_cache;
+   void *kp_ret;
+   void *kp_stack[KS_ADDRS_COUNT];
+};
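To make the intended use of the new interface concrete: a handler that
holds only a pointer into the middle of an allocation (say, a timer_list
embedded in a larger structure) can hand that pointer to mem_dump_obj()
when it detects an anomaly.  A hypothetical call site, sketched on the
assumption of a kernel context; struct my_work, my_handler(), and the
refs field are illustrative names, not part of this patch:

#include <linux/bug.h>		/* WARN_ON_ONCE() */
#include <linux/mm.h>		/* mem_dump_obj() */
#include <linux/slab.h>
#include <linux/timer.h>

/* Hypothetical state embedded in a larger kmalloc()ed object. */
struct my_work {
	struct timer_list timer;
	int refs;
};

/* The handler sees only the embedded timer_list, that is, a pointer
 * into the middle of the enclosing block.  mem_dump_obj() accepts such
 * interior pointers and prints what it can about their provenance. */
static void my_handler(struct timer_list *t)
{
	struct my_work *w = from_timer(w, t, timer);

	if (WARN_ON_ONCE(w->refs < 0))
		mem_dump_obj(w);	/* slab cache name, offset, and, given
					 * SLAB_STORE_USER, the allocation site */
}

With CONFIG_SLUB_DEBUG=y and slub_debug=U (or a cache created with the
SLAB_STORE_USER flag), the output includes the allocation return address,
which is usually enough to tell which user of the shared handler is at
fault.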