Re: Unlock uvm a tiny bit more

2019-05-15 Thread Mike Larkin
On Tue, May 14, 2019 at 12:13:52AM +0200, Mark Kettenis wrote:
> This changes uvm_unmap_detach() to get rid of the "easy" entries first
> before grabbing the kernel lock.  Probably doesn't help much with the
> lock contention, but it avoids a locking problem that happens with
> pools that use kernel_map to allocate the kva for their pages.
> 
> ok?
> 

Reads ok to me, ok mlarkin

> 
> Index: uvm/uvm_map.c
> ===================================================================
> RCS file: /cvs/src/sys/uvm/uvm_map.c,v
> retrieving revision 1.243
> diff -u -p -r1.243 uvm_map.c
> --- uvm/uvm_map.c	23 Apr 2019 13:35:12 -0000	1.243
> +++ uvm/uvm_map.c	13 May 2019 22:09:26 -0000
> @@ -1538,8 +1538,18 @@ uvm_mapent_tryjoin(struct vm_map *map, s
>  void
>  uvm_unmap_detach(struct uvm_map_deadq *deadq, int flags)
>  {
> -	struct vm_map_entry *entry;
> +	struct vm_map_entry *entry, *tmp;
>  	int waitok = flags & UVM_PLA_WAITOK;
> +
> +	TAILQ_FOREACH_SAFE(entry, deadq, dfree.deadq, tmp) {
> +		/* Skip entries for which we have to grab the kernel lock. */
> +		if (entry->aref.ar_amap || UVM_ET_ISSUBMAP(entry) ||
> +		    UVM_ET_ISOBJ(entry))
> +			continue;
> +
> +		TAILQ_REMOVE(deadq, entry, dfree.deadq);
> +		uvm_mapent_free(entry);
> +	}
>  
>  	if (TAILQ_EMPTY(deadq))
>  		return;
> 

Unlock uvm a tiny bit more

2019-05-13 Thread Mark Kettenis
This changes uvm_unmap_detach() to get rid of the "easy" entries first
before grabbing the kernel lock.  Probably doesn't help much with the
lock contention, but it avoids a locking problem that happens with
pools that use kernel_map to allocate the kva for their pages.

ok?


Index: uvm/uvm_map.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_map.c,v
retrieving revision 1.243
diff -u -p -r1.243 uvm_map.c
--- uvm/uvm_map.c	23 Apr 2019 13:35:12 -0000	1.243
+++ uvm/uvm_map.c	13 May 2019 22:09:26 -0000
@@ -1538,8 +1538,18 @@ uvm_mapent_tryjoin(struct vm_map *map, s
 void
 uvm_unmap_detach(struct uvm_map_deadq *deadq, int flags)
 {
-	struct vm_map_entry *entry;
+	struct vm_map_entry *entry, *tmp;
 	int waitok = flags & UVM_PLA_WAITOK;
+
+	TAILQ_FOREACH_SAFE(entry, deadq, dfree.deadq, tmp) {
+		/* Skip entries for which we have to grab the kernel lock. */
+		if (entry->aref.ar_amap || UVM_ET_ISSUBMAP(entry) ||
+		    UVM_ET_ISOBJ(entry))
+			continue;
+
+		TAILQ_REMOVE(deadq, entry, dfree.deadq);
+		uvm_mapent_free(entry);
+	}
 
 	if (TAILQ_EMPTY(deadq))
 		return;
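
For reference, the pattern hinges on TAILQ_FOREACH_SAFE from <sys/queue.h>:
it loads the successor into a spare pointer before each iteration, which is
what makes the TAILQ_REMOVE of the current entry legal mid-walk.  A minimal
standalone sketch of the same "free the easy ones first" idea (struct item,
prune_easy, and the needs_lock flag are made-up names for illustration, not
anything from the tree):

#include <sys/queue.h>
#include <stdlib.h>

struct item {
	int needs_lock;			/* set if only the locked pass may free it */
	TAILQ_ENTRY(item) iq;		/* queue linkage */
};
TAILQ_HEAD(itemq, item);

/*
 * First pass: free whatever needs no lock; anything skipped stays
 * queued for the locked pass that follows.
 */
void
prune_easy(struct itemq *q)
{
	struct item *it, *tmp;

	/*
	 * The _SAFE variant caches the next pointer in tmp before the
	 * body runs, so unlinking and freeing 'it' cannot derail the walk.
	 */
	TAILQ_FOREACH_SAFE(it, q, iq, tmp) {
		if (it->needs_lock)
			continue;
		TAILQ_REMOVE(q, it, iq);
		free(it);
	}
}

Everything still queued after this first pass is exactly the set of entries
that needs the kernel lock, so the TAILQ_EMPTY() check in the patch can
presumably return before the locked cleanup in the unchanged remainder of
the function ever takes KERNEL_LOCK().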