Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-19 Thread Uladzislau Rezki
On Wed, Jun 19, 2024 at 11:56:44AM +0200, Vlastimil Babka wrote: > On 6/19/24 11:51 AM, Uladzislau Rezki wrote: > > On Tue, Jun 18, 2024 at 09:48:49AM -0700, Paul E. McKenney wrote: > >> On Tue, Jun 18, 2024 at 11:31:00AM +0200, Uladzislau Rezki wrote: > >> > > On

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-19 Thread Uladzislau Rezki
On Tue, Jun 18, 2024 at 09:48:49AM -0700, Paul E. McKenney wrote: > On Tue, Jun 18, 2024 at 11:31:00AM +0200, Uladzislau Rezki wrote: > > > On 6/17/24 8:42 PM, Uladzislau Rezki wrote: > > > >> + > > > >> + s = container_of(w

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-18 Thread Uladzislau Rezki
> On 6/17/24 8:42 PM, Uladzislau Rezki wrote: > >> + > >> + s = container_of(work, struct kmem_cache, async_destroy_work); > >> + > >> + // XXX use the real kmem_cache_free_barrier() or similar thing here > > It implies that we need to introduce kfree

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-17 Thread Uladzislau Rezki
e(struct kmem_cache *s) > kmem_cache_free(kmem_cache, s); > } > > +static void kmem_cache_kfree_rcu_destroy_workfn(struct work_struct *work) > +{ > + struct kmem_cache *s; > + int err = -EBUSY; > + bool rcu_set; > + > + s = container_of(work, struct kmem_cache, async_destroy_work); > + > + // XXX use the real kmem_cache_free_barrier() or similar thing here It implies that we need to introduce kfree_rcu_barrier(), a new API, which I wanted to avoid initially. Since you do it asynchronously, can we just repeat and wait until the cache is fully freed? I am asking because inventing a new kfree_rcu_barrier() might not be so straightforward. -- Uladzislau Rezki
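
To make the point concrete, here is a minimal sketch (not the posted patch) of the barrier-based work handler under discussion. kfree_rcu_barrier() stands in for the "real kmem_cache_free_barrier() or similar thing" from the XXX comment and is the proposed, not yet existing, API; shutdown_cache(), kmem_cache_release() and slab_mutex are the mm/slab_common.c internals referenced in the quoted hunks, and cpus_read_lock()/error-path details are trimmed for brevity.

/* Context: mm/slab_common.c (relies on its internal "slab.h" declarations). */
static void kmem_cache_kfree_rcu_destroy_workfn(struct work_struct *work)
{
	struct kmem_cache *s;
	int err;

	s = container_of(work, struct kmem_cache, async_destroy_work);

	/* Wait for every object already handed to kfree_rcu() to be freed. */
	kfree_rcu_barrier();	/* proposed API, does not exist at this point */

	mutex_lock(&slab_mutex);
	err = shutdown_cache(s);
	WARN(err, "kmem_cache_destroy %s: Slab cache still has objects",
	     s->name);
	mutex_unlock(&slab_mutex);

	if (!err)
		kmem_cache_release(s);
}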

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-17 Thread Uladzislau Rezki
On Mon, Jun 17, 2024 at 06:57:45PM +0200, Jason A. Donenfeld wrote: > On Mon, Jun 17, 2024 at 06:42:23PM +0200, Uladzislau Rezki wrote: > > On Mon, Jun 17, 2024 at 06:33:23PM +0200, Jason A. Donenfeld wrote: > > > On Mon, Jun 17, 2024 at 6:30 PM Uladzislau Rezki wrote: >

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-17 Thread Uladzislau Rezki
On Mon, Jun 17, 2024 at 06:33:23PM +0200, Jason A. Donenfeld wrote: > On Mon, Jun 17, 2024 at 6:30 PM Uladzislau Rezki wrote: > > Here if an "err" is less than "0" it means there are still objects > > whereas "is_destroyed" is set to

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-17 Thread Uladzislau Rezki
On Mon, Jun 17, 2024 at 04:56:17PM +0200, Jason A. Donenfeld wrote: > On Mon, Jun 17, 2024 at 03:50:56PM +0200, Uladzislau Rezki wrote: > > On Fri, Jun 14, 2024 at 09:33:45PM +0200, Jason A. Donenfeld wrote: > > > On Fri, Jun 14, 2024 at 02:35:33PM +0200, Ulad

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-17 Thread Uladzislau Rezki
On Fri, Jun 14, 2024 at 09:33:45PM +0200, Jason A. Donenfeld wrote: > On Fri, Jun 14, 2024 at 02:35:33PM +0200, Uladzislau Rezki wrote: > > + /* Should a destroy process be deferred? */ > > + if (s->flags & SLAB_DEFER_DESTROY) { > > + list_move_tail(
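
The hunk above is cut short by the archive. Purely as an illustration of the deferral it describes, a sketch of kmem_cache_destroy() with the proposed SLAB_DEFER_DESTROY flag might look as follows; the list name is made up for the example and the existing synchronous shutdown path is omitted.

/*
 * Sketch only (not the submitted patch): SLAB_DEFER_DESTROY is the proposed
 * flag and "slab_caches_defer_destroy" is an illustrative name. A separate
 * worker (not shown) would periodically walk this list and call
 * shutdown_cache()/kmem_cache_release() once the parked caches are empty.
 */
static LIST_HEAD(slab_caches_defer_destroy);

void kmem_cache_destroy(struct kmem_cache *s)
{
	if (unlikely(!s))
		return;

	cpus_read_lock();
	mutex_lock(&slab_mutex);

	s->refcount--;
	if (s->refcount)
		goto out_unlock;

	/* Should a destroy process be deferred? */
	if (s->flags & SLAB_DEFER_DESTROY) {
		list_move_tail(&s->list, &slab_caches_defer_destroy);
		goto out_unlock;
	}

	/* Otherwise the existing synchronous shutdown path runs (omitted). */

out_unlock:
	mutex_unlock(&slab_mutex);
	cpus_read_unlock();
}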

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-14 Thread Uladzislau Rezki
On Fri, Jun 14, 2024 at 07:17:29AM -0700, Paul E. McKenney wrote: > On Fri, Jun 14, 2024 at 02:35:33PM +0200, Uladzislau Rezki wrote: > > On Thu, Jun 13, 2024 at 11:13:52AM -0700, Paul E. McKenney wrote: > > > On Thu, Jun 13, 2024 at 07:58:17PM +0200, Uladzislau Rezki wrote: >

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-14 Thread Uladzislau Rezki
On Thu, Jun 13, 2024 at 11:13:52AM -0700, Paul E. McKenney wrote: > On Thu, Jun 13, 2024 at 07:58:17PM +0200, Uladzislau Rezki wrote: > > On Thu, Jun 13, 2024 at 10:45:59AM -0700, Paul E. McKenney wrote: > > > On Thu, Jun 13, 2024 at 07:38:59PM +0200, Uladzislau Rezki wrote: >

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-13 Thread Uladzislau Rezki
On Thu, Jun 13, 2024 at 10:45:59AM -0700, Paul E. McKenney wrote: > On Thu, Jun 13, 2024 at 07:38:59PM +0200, Uladzislau Rezki wrote: > > On Thu, Jun 13, 2024 at 08:06:30AM -0700, Paul E. McKenney wrote: > > > On Thu, Jun 13, 2024 at 03:06:54PM +0200, Uladzislau Rezki wrote: >

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-13 Thread Uladzislau Rezki
On Thu, Jun 13, 2024 at 08:06:30AM -0700, Paul E. McKenney wrote: > On Thu, Jun 13, 2024 at 03:06:54PM +0200, Uladzislau Rezki wrote: > > On Thu, Jun 13, 2024 at 05:47:08AM -0700, Paul E. McKenney wrote: > > > On Thu, Jun 13, 2024 at 01:58:59PM +0200, Jason A. Donenfeld wrote:

Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback

2024-06-13 Thread Uladzislau Rezki
err = shutdown_cache(s); WARN(err, "%s %s: Slab cache still has objects when called from %pS", __func__, s->name, (void *)_RET_IP_); ... cpus_read_unlock(); if (!err && !rcu_set) kmem_cache_release(s); } So we have the SLAB_TYPESAFE_BY_RCU flag that defers freeing of slab pages and the cache by a grace period. A similar flag could be added, e.g. SLAB_DESTROY_ONCE_FULLY_FREED; in this case the worker rearms itself if there are still objects which should be freed. Any thoughts here? -- Uladzislau Rezki
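
A rough sketch of that rearming idea, assuming async_destroy_work were made a delayed_work and with illustrative function names (this is not a submitted patch):

/*
 * Self-rearming destroy worker: if shutdown_cache() still sees live objects
 * (e.g. frees queued via kfree_rcu() have not run yet), requeue the work
 * instead of introducing a new barrier API. Assumes s->async_destroy_work
 * is a struct delayed_work.
 */
static void kmem_cache_async_destroy_workfn(struct work_struct *work)
{
	struct kmem_cache *s = container_of(to_delayed_work(work),
					    struct kmem_cache,
					    async_destroy_work);
	int err;

	mutex_lock(&slab_mutex);
	err = shutdown_cache(s);
	mutex_unlock(&slab_mutex);

	if (err) {
		/* Objects still pending; retry after a grace-period-ish delay. */
		queue_delayed_work(system_wq, &s->async_destroy_work, HZ);
		return;
	}

	kmem_cache_release(s);
}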

Re: powerpc 5.10-rcN boot failures with RCU_SCALE_TEST=m

2020-12-04 Thread Uladzislau Rezki
On Thu, Dec 03, 2020 at 03:34:45PM +0100, Uladzislau Rezki wrote: > On Thu, Dec 03, 2020 at 05:22:20PM +1100, Michael Ellerman wrote: > > Uladzislau Rezki writes: > > > On Thu, Dec 03, 2020 at 01:03:32AM +1100, Michael Ellerman wrote: > > ... > > >> > >

Re: powerpc 5.10-rcN boot failures with RCU_SCALE_TEST=m

2020-12-03 Thread Uladzislau Rezki
On Thu, Dec 03, 2020 at 05:22:20PM +1100, Michael Ellerman wrote: > Uladzislau Rezki writes: > > On Thu, Dec 03, 2020 at 01:03:32AM +1100, Michael Ellerman wrote: > ... > >> > >> The SMP bringup stalls because _cpu_up() is blocked trying to take >

Re: powerpc 5.10-rcN boot failures with RCU_SCALE_TEST=m

2020-12-02 Thread Uladzislau Rezki
On Thu, Dec 03, 2020 at 01:03:32AM +1100, Michael Ellerman wrote: > Daniel Axtens writes: > > Hi all, > > > > I'm having some difficulty tracking down a bug. > > > > Some configurations of the powerpc kernel since somewhere in the 5.10 > > merge window fail to boot on some ppc64 systems. They

Re: powerpc 5.10-rcN boot failures with RCU_SCALE_TEST=m

2020-11-27 Thread Uladzislau Rezki
> Hi all, > > I'm having some difficulty tracking down a bug. > > Some configurations of the powerpc kernel since somewhere in the 5.10 > merge window fail to boot on some ppc64 systems. They hang while trying > to bring up SMP. It seems to depend on the RCU_SCALE/PERF_TEST option. > (It was

Re: [PATCH v10 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-30 Thread Uladzislau Rezki
Hello, Daniel > > @@ -1294,14 +1299,19 @@ static bool __purge_vmap_area_lazy(unsigned long > start, unsigned long end) > spin_lock(_vmap_area_lock); > llist_for_each_entry_safe(va, n_va, valist, purge_list) { > unsigned long nr = (va->va_end - va->va_start) >>

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-07 Thread Uladzislau Rezki
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c > index a3c70e275f4e..9fb7a16f42ae 100644 > --- a/mm/vmalloc.c > +++ b/mm/vmalloc.c > @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va, > struct list_head *next; > struct rb_node **link; > struct rb_node *parent; > +

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-02 Thread Uladzislau Rezki
On Wed, Oct 02, 2019 at 11:23:06AM +1000, Daniel Axtens wrote: > Hi, > > >>/* > >> * Find a place in the tree where VA potentially will be > >> * inserted, unless it is merged with its sibling/siblings. > >> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va, > >>

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-01 Thread Uladzislau Rezki
Hello, Daniel. > diff --git a/mm/vmalloc.c b/mm/vmalloc.c > index a3c70e275f4e..9fb7a16f42ae 100644 > --- a/mm/vmalloc.c > +++ b/mm/vmalloc.c > @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va, > struct list_head *next; > struct rb_node **link; > struct rb_node