Re: uao references & uao_swap_off() cleanup

2021-06-28 Thread Martin Pieuchot
On 23/06/21(Wed) 23:03, Jonathan Matthew wrote:
> On Wed, Jun 23, 2021 at 09:37:10AM +0200, Martin Pieuchot wrote:
> > On 16/06/21(Wed) 11:26, Martin Pieuchot wrote:
> > > Diff below does two things:
> > > 
> > > - Use atomic operations for incrementing/decrementing references of
> > >   anonymous objects.  This allows us to manipulate them without holding
> > >   the KERNEL_LOCK().
> > > 
> > > - Rewrite the loop from uao_swap_off() to only keep a reference to the
> > >   next item in the list.  This is imported from NetBSD and is necessary
> > >   to introduce locking around uao_pagein().
> > > 
> > > ok?
> > 
> > Anyone?
> 
> uao_reference_locked() and uao_detach_locked() are prototyped in
> uvm_extern.h, so they should be removed here too.

Thanks, I'll do that.
 
> It doesn't look like uao_detach() is safe to call without the
> kernel lock; it calls uao_dropswap() for each page, which calls
> uao_set_swslot(), which includes a KERNEL_ASSERT_LOCKED().
> Should we keep the KERNEL_ASSERT_LOCKED() in uao_detach()?

I prefer to keep the KERNEL_ASSERT_LOCKED() where it is needed and not
spread it to all the callers.  My current plan is to trade those asserts
for assertions on the vmobjlock, so I don't want to add new ones.
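
Not part of this diff, but to illustrate the direction: assuming struct
uvm_object ends up with a per-object `vmobjlock' rwlock, the assert in
uao_set_swslot() would become something along these lines (sketch only,
untested):

	-	KERNEL_ASSERT_LOCKED();
	+	rw_assert_wrlock(uobj->vmobjlock);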



Re: uao references & uao_swap_off() cleanup

2021-06-23 Thread Jonathan Matthew
On Wed, Jun 23, 2021 at 09:37:10AM +0200, Martin Pieuchot wrote:
> On 16/06/21(Wed) 11:26, Martin Pieuchot wrote:
> > Diff below does two things:
> > 
> > - Use atomic operations for incrementing/decrementing references of
> >   anonymous objects.  This allows us to manipulate them without holding
> >   the KERNEL_LOCK().
> > 
> > - Rewrite the loop from uao_swap_off() to only keep a reference to the
> >   next item in the list.  This is imported from NetBSD and is necessary
> >   to introduce locking around uao_pagein().
> > 
> > ok?
> 
> Anyone?

uao_reference_locked() and uao_detach_locked() are prototyped in
uvm_extern.h, so they should be removed here too.
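
For reference, the prototypes in question look like this (signatures
matching the definitions the diff removes; double-check the exact lines
in uvm_extern.h):

	void	uao_reference_locked(struct uvm_object *);
	void	uao_detach_locked(struct uvm_object *);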

It doesn't look like uao_detach() is safe to call without the
kernel lock; it calls uao_dropswap() for each page, which calls
uao_set_swslot(), which includes a KERNEL_ASSERT_LOCKED().
Should we keep the KERNEL_ASSERT_LOCKED() in uao_detach()?

ok jmatthew@ otherwise

> 
> > 
> > Index: uvm/uvm_aobj.c
> > ===
> > RCS file: /cvs/src/sys/uvm/uvm_aobj.c,v
> > retrieving revision 1.98
> > diff -u -p -r1.98 uvm_aobj.c
> > --- uvm/uvm_aobj.c  16 Jun 2021 09:02:21 -  1.98
> > +++ uvm/uvm_aobj.c  16 Jun 2021 09:20:26 -
> > @@ -779,19 +779,11 @@ uao_init(void)
> >  void
> >  uao_reference(struct uvm_object *uobj)
> >  {
> > -   KERNEL_ASSERT_LOCKED();
> > -   uao_reference_locked(uobj);
> > -}
> > -
> > -void
> > -uao_reference_locked(struct uvm_object *uobj)
> > -{
> > -
> > /* Kernel object is persistent. */
> > if (UVM_OBJ_IS_KERN_OBJECT(uobj))
> > return;
> >  
> > -   uobj->uo_refs++;
> > +   atomic_inc_int(&uobj->uo_refs);
> >  }
> >  
> >  
> > @@ -801,34 +793,19 @@ uao_reference_locked(struct uvm_object *
> >  void
> >  uao_detach(struct uvm_object *uobj)
> >  {
> > -   KERNEL_ASSERT_LOCKED();
> > -   uao_detach_locked(uobj);
> > -}
> > -
> > -
> > -/*
> > - * uao_detach_locked: drop a reference to an aobj
> > - *
> > - * => aobj may freed upon return.
> > - */
> > -void
> > -uao_detach_locked(struct uvm_object *uobj)
> > -{
> > struct uvm_aobj *aobj = (struct uvm_aobj *)uobj;
> > struct vm_page *pg;
> >  
> > /*
> >  * Detaching from kernel_object is a NOP.
> >  */
> > -   if (UVM_OBJ_IS_KERN_OBJECT(uobj)) {
> > +   if (UVM_OBJ_IS_KERN_OBJECT(uobj))
> > return;
> > -   }
> >  
> > /*
> >  * Drop the reference.  If it was the last one, destroy the object.
> >  */
> > -   uobj->uo_refs--;
> > -   if (uobj->uo_refs) {
> > +   if (atomic_dec_int_nv(&uobj->uo_refs) > 0) {
> > return;
> > }
> >  
> > @@ -1265,68 +1242,54 @@ uao_dropswap(struct uvm_object *uobj, in
> >  boolean_t
> >  uao_swap_off(int startslot, int endslot)
> >  {
> > -   struct uvm_aobj *aobj, *nextaobj, *prevaobj = NULL;
> > +   struct uvm_aobj *aobj;
> >  
> > /*
> > -* Walk the list of all anonymous UVM objects.
> > +* Walk the list of all anonymous UVM objects.  Grab the first.
> >  */
> > mtx_enter(&uao_list_lock);
> > +   if ((aobj = LIST_FIRST(&uao_list)) == NULL) {
> > +   mtx_leave(&uao_list_lock);
> > +   return FALSE;
> > +   }
> > +   uao_reference(&aobj->u_obj);
> >  
> > -   for (aobj = LIST_FIRST(&uao_list);
> > -aobj != NULL;
> > -aobj = nextaobj) {
> > +   do {
> > +   struct uvm_aobj *nextaobj;
> > boolean_t rv;
> >  
> > /*
> > -* add a ref to the aobj so it doesn't disappear
> > -* while we're working.
> > -*/
> > -   uao_reference_locked(&aobj->u_obj);
> > -
> > -   /*
> > -* now it's safe to unlock the uao list.
> > -* note that lock interleaving is alright with IPL_NONE mutexes.
> > +* Prefetch the next object and immediately hold a reference
> > +* on it, so neither the current nor the next entry could
> > +* disappear while we are iterating.
> >  */
> > -   mtx_leave(&uao_list_lock);
> > -
> > -   if (prevaobj) {
> > -   uao_detach_locked(&prevaobj->u_obj);
> > -   prevaobj = NULL;
> > +   if ((nextaobj = LIST_NEXT(aobj, u_list)) != NULL) {
> > +   uao_reference(&nextaobj->u_obj);
> > }
> > +   mtx_leave(&uao_list_lock);
> >  
> > /*
> > -* page in any pages in the swslot range.
> > -* if there's an error, abort and return the error.
> > +* Page in all pages in the swap slot range.
> >  */
> > rv = uao_pagein(aobj, startslot, endslot);
> > +
> > +   /* Drop the reference of the current object. */
> > +   uao_detach(&aobj->u_obj);
> > if (rv) {
> > -   uao_detach_locked(&aobj->u_obj);
> > +   if (nextaobj) {
> > +   uao_detach(&nextaobj->u_obj);
> > +   }
> > return rv;
> > }
> >  
> > -   /*
> 

Re: uao references & uao_swap_off() cleanup

2021-06-23 Thread Martin Pieuchot
On 16/06/21(Wed) 11:26, Martin Pieuchot wrote:
> Diff below does two things:
> 
> - Use atomic operations for incrementing/decrementing references of
>   anonymous objects.  This allows us to manipulate them without holding
>   the KERNEL_LOCK().
> 
> - Rewrite the loop from uao_swap_off() to only keep a reference to the
>   next item in the list.  This is imported from NetBSD and is necessary
>   to introduce locking around uao_pagein().
> 
> ok?

Anyone?

> 
> Index: uvm/uvm_aobj.c
> ===
> RCS file: /cvs/src/sys/uvm/uvm_aobj.c,v
> retrieving revision 1.98
> diff -u -p -r1.98 uvm_aobj.c
> --- uvm/uvm_aobj.c16 Jun 2021 09:02:21 -  1.98
> +++ uvm/uvm_aobj.c16 Jun 2021 09:20:26 -
> @@ -779,19 +779,11 @@ uao_init(void)
>  void
>  uao_reference(struct uvm_object *uobj)
>  {
> - KERNEL_ASSERT_LOCKED();
> - uao_reference_locked(uobj);
> -}
> -
> -void
> -uao_reference_locked(struct uvm_object *uobj)
> -{
> -
>   /* Kernel object is persistent. */
>   if (UVM_OBJ_IS_KERN_OBJECT(uobj))
>   return;
>  
> - uobj->uo_refs++;
> + atomic_inc_int(&uobj->uo_refs);
>  }
>  
>  
> @@ -801,34 +793,19 @@ uao_reference_locked(struct uvm_object *
>  void
>  uao_detach(struct uvm_object *uobj)
>  {
> - KERNEL_ASSERT_LOCKED();
> - uao_detach_locked(uobj);
> -}
> -
> -
> -/*
> - * uao_detach_locked: drop a reference to an aobj
> - *
> - * => aobj may freed upon return.
> - */
> -void
> -uao_detach_locked(struct uvm_object *uobj)
> -{
>   struct uvm_aobj *aobj = (struct uvm_aobj *)uobj;
>   struct vm_page *pg;
>  
>   /*
>* Detaching from kernel_object is a NOP.
>*/
> - if (UVM_OBJ_IS_KERN_OBJECT(uobj)) {
> + if (UVM_OBJ_IS_KERN_OBJECT(uobj))
>   return;
> - }
>  
>   /*
>* Drop the reference.  If it was the last one, destroy the object.
>*/
> - uobj->uo_refs--;
> - if (uobj->uo_refs) {
> + if (atomic_dec_int_nv(&uobj->uo_refs) > 0) {
>   return;
>   }
>  
> @@ -1265,68 +1242,54 @@ uao_dropswap(struct uvm_object *uobj, in
>  boolean_t
>  uao_swap_off(int startslot, int endslot)
>  {
> - struct uvm_aobj *aobj, *nextaobj, *prevaobj = NULL;
> + struct uvm_aobj *aobj;
>  
>   /*
> -  * Walk the list of all anonymous UVM objects.
> +  * Walk the list of all anonymous UVM objects.  Grab the first.
>*/
>   mtx_enter(&uao_list_lock);
> + if ((aobj = LIST_FIRST(&uao_list)) == NULL) {
> + mtx_leave(&uao_list_lock);
> + return FALSE;
> + }
> + uao_reference(&aobj->u_obj);
>  
> - for (aobj = LIST_FIRST(&uao_list);
> -  aobj != NULL;
> -  aobj = nextaobj) {
> + do {
> + struct uvm_aobj *nextaobj;
>   boolean_t rv;
>  
>   /*
> -  * add a ref to the aobj so it doesn't disappear
> -  * while we're working.
> -  */
> - uao_reference_locked(&aobj->u_obj);
> -
> - /*
> -  * now it's safe to unlock the uao list.
> -  * note that lock interleaving is alright with IPL_NONE mutexes.
> +  * Prefetch the next object and immediately hold a reference
> +  * on it, so neither the current nor the next entry could
> +  * disappear while we are iterating.
>*/
> - mtx_leave(&uao_list_lock);
> -
> - if (prevaobj) {
> - uao_detach_locked(&prevaobj->u_obj);
> - prevaobj = NULL;
> + if ((nextaobj = LIST_NEXT(aobj, u_list)) != NULL) {
> + uao_reference(&nextaobj->u_obj);
>   }
> + mtx_leave(&uao_list_lock);
>  
>   /*
> -  * page in any pages in the swslot range.
> -  * if there's an error, abort and return the error.
> +  * Page in all pages in the swap slot range.
>*/
>   rv = uao_pagein(aobj, startslot, endslot);
> +
> + /* Drop the reference of the current object. */
> + uao_detach(&aobj->u_obj);
>   if (rv) {
> - uao_detach_locked(&aobj->u_obj);
> + if (nextaobj) {
> + uao_detach(&nextaobj->u_obj);
> + }
>   return rv;
>   }
>  
> - /*
> -  * we're done with this aobj.
> -  * relock the list and drop our ref on the aobj.
> -  */
> + aobj = nextaobj;
>   mtx_enter(&uao_list_lock);
> - nextaobj = LIST_NEXT(aobj, u_list);
> - /*
> -  * prevaobj means that we have an object that we need
> -  * to drop a reference for. We can't just drop it now with
> -  * the list locked since that could cause lock recursion in
> -  * the case where we reduce the refcount to 0. It will be
> -

uao references & uao_swap_off() cleanup

2021-06-16 Thread Martin Pieuchot
Diff below does two things:

- Use atomic operations for incrementing/decrementing references of
  anonymous objects.  This allows us to manipulate them without holding
  the KERNEL_LOCK().

- Rewrite the loop from uao_swap_off() to only keep a reference to the
  next item in the list.  This is imported from NetBSD and is necessary
  to introduce locking around uao_pagein().

ok?
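
For easier review, the reworked uao_swap_off() walk from the diff below
in condensed form (error handling kept, declarations and the code around
the loop trimmed):

	mtx_enter(&uao_list_lock);
	if ((aobj = LIST_FIRST(&uao_list)) == NULL) {
		mtx_leave(&uao_list_lock);
		return FALSE;
	}
	uao_reference(&aobj->u_obj);

	do {
		/*
		 * Reference the next entry before dropping the list lock
		 * so neither the current nor the next aobj can disappear.
		 */
		if ((nextaobj = LIST_NEXT(aobj, u_list)) != NULL) {
			uao_reference(&nextaobj->u_obj);
		}
		mtx_leave(&uao_list_lock);

		rv = uao_pagein(aobj, startslot, endslot);

		/* Drop the reference of the current object. */
		uao_detach(&aobj->u_obj);
		if (rv) {
			if (nextaobj) {
				uao_detach(&nextaobj->u_obj);
			}
			return rv;
		}

		aobj = nextaobj;
		mtx_enter(&uao_list_lock);
	} while (aobj);

	mtx_leave(&uao_list_lock);
	return FALSE;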

Index: uvm/uvm_aobj.c
===
RCS file: /cvs/src/sys/uvm/uvm_aobj.c,v
retrieving revision 1.98
diff -u -p -r1.98 uvm_aobj.c
--- uvm/uvm_aobj.c  16 Jun 2021 09:02:21 -  1.98
+++ uvm/uvm_aobj.c  16 Jun 2021 09:20:26 -
@@ -779,19 +779,11 @@ uao_init(void)
 void
 uao_reference(struct uvm_object *uobj)
 {
-   KERNEL_ASSERT_LOCKED();
-   uao_reference_locked(uobj);
-}
-
-void
-uao_reference_locked(struct uvm_object *uobj)
-{
-
/* Kernel object is persistent. */
if (UVM_OBJ_IS_KERN_OBJECT(uobj))
return;
 
-   uobj->uo_refs++;
+   atomic_inc_int(&uobj->uo_refs);
 }
 
 
@@ -801,34 +793,19 @@ uao_reference_locked(struct uvm_object *
 void
 uao_detach(struct uvm_object *uobj)
 {
-   KERNEL_ASSERT_LOCKED();
-   uao_detach_locked(uobj);
-}
-
-
-/*
- * uao_detach_locked: drop a reference to an aobj
- *
- * => aobj may freed upon return.
- */
-void
-uao_detach_locked(struct uvm_object *uobj)
-{
struct uvm_aobj *aobj = (struct uvm_aobj *)uobj;
struct vm_page *pg;
 
/*
 * Detaching from kernel_object is a NOP.
 */
-   if (UVM_OBJ_IS_KERN_OBJECT(uobj)) {
+   if (UVM_OBJ_IS_KERN_OBJECT(uobj))
return;
-   }
 
/*
 * Drop the reference.  If it was the last one, destroy the object.
 */
-   uobj->uo_refs--;
-   if (uobj->uo_refs) {
+   if (atomic_dec_int_nv(&uobj->uo_refs) > 0) {
return;
}
 
@@ -1265,68 +1242,54 @@ uao_dropswap(struct uvm_object *uobj, in
 boolean_t
 uao_swap_off(int startslot, int endslot)
 {
-   struct uvm_aobj *aobj, *nextaobj, *prevaobj = NULL;
+   struct uvm_aobj *aobj;
 
/*
-* Walk the list of all anonymous UVM objects.
+* Walk the list of all anonymous UVM objects.  Grab the first.
 */
mtx_enter(&uao_list_lock);
+   if ((aobj = LIST_FIRST(&uao_list)) == NULL) {
+   mtx_leave(&uao_list_lock);
+   return FALSE;
+   }
+   uao_reference(&aobj->u_obj);
 
-   for (aobj = LIST_FIRST(&uao_list);
-aobj != NULL;
-aobj = nextaobj) {
+   do {
+   struct uvm_aobj *nextaobj;
boolean_t rv;
 
/*
-* add a ref to the aobj so it doesn't disappear
-* while we're working.
-*/
-   uao_reference_locked(&aobj->u_obj);
-
-   /*
-* now it's safe to unlock the uao list.
-* note that lock interleaving is alright with IPL_NONE mutexes.
+* Prefetch the next object and immediately hold a reference
+* on it, so neither the current nor the next entry could
+* disappear while we are iterating.
 */
-   mtx_leave(&uao_list_lock);
-
-   if (prevaobj) {
-   uao_detach_locked(&prevaobj->u_obj);
-   prevaobj = NULL;
+   if ((nextaobj = LIST_NEXT(aobj, u_list)) != NULL) {
+   uao_reference(&nextaobj->u_obj);
}
+   mtx_leave(&uao_list_lock);
 
/*
-* page in any pages in the swslot range.
-* if there's an error, abort and return the error.
+* Page in all pages in the swap slot range.
 */
rv = uao_pagein(aobj, startslot, endslot);
+
+   /* Drop the reference of the current object. */
+   uao_detach(&aobj->u_obj);
if (rv) {
-   uao_detach_locked(&aobj->u_obj);
+   if (nextaobj) {
+   uao_detach(&nextaobj->u_obj);
+   }
return rv;
}
 
-   /*
-* we're done with this aobj.
-* relock the list and drop our ref on the aobj.
-*/
+   aobj = nextaobj;
mtx_enter(&uao_list_lock);
-   nextaobj = LIST_NEXT(aobj, u_list);
-   /*
-* prevaobj means that we have an object that we need
-* to drop a reference for. We can't just drop it now with
-* the list locked since that could cause lock recursion in
-* the case where we reduce the refcount to 0. It will be
-* released the next time we drop the list lock.
-*/
-   prevaobj = aobj;
-   }
+   } while (aobj);
 
/*
 * done with