On 2/18/23 03:23, Martin Pieuchot wrote:
On 24/01/23(Tue) 04:40, Renato Aguiar wrote:
Hi Martin,

"David Hill" <[email protected]> writes:


Yes, same result as before.  This patch does not seem to help.


I could also reproduce it with patched 'current' :(

Here's another possible fix I came up with.  The idea is to deliberately
allow readers to starve writers for the VM map lock.

My previous fix didn't work because a writer could already be waiting on
the lock before vm_map_busy() is called, and there's currently no way to
tell it to back out and stop preventing readers from making progress.

I don't see any simpler solution without rewriting the vm_map_lock()
primitive.

Could you confirm this works for you?  If so, I'll document the new
RW_FORCEREAD flag and add a comment explaining why we need it in this
case.

Thanks,
Martin


I cannot reproduce the hang with this diff.

Index: kern/kern_rwlock.c
===================================================================
RCS file: /cvs/src/sys/kern/kern_rwlock.c,v
retrieving revision 1.48
diff -u -p -r1.48 kern_rwlock.c
--- kern/kern_rwlock.c  10 May 2022 16:56:16 -0000      1.48
+++ kern/kern_rwlock.c  18 Feb 2023 07:48:23 -0000
@@ -225,7 +225,7 @@ rw_enter(struct rwlock *rwl, int flags)
 {
 	const struct rwlock_op *op;
 	struct sleep_state sls;
-	unsigned long inc, o;
+	unsigned long inc, o, check;
 #ifdef MULTIPROCESSOR
 	/*
 	 * If process holds the kernel lock, then we want to give up on CPU
@@ -248,10 +248,14 @@ rw_enter(struct rwlock *rwl, int flags)
 #endif
 
 	op = &rw_ops[(flags & RW_OPMASK) - 1];
+	check = op->check;
+
+	if (flags & RW_FORCEREAD)
+		check &= ~RWLOCK_WRWANT;
 
 	inc = op->inc + RW_PROC(curproc) * op->proc_mult;
 retry:
-	while (__predict_false(((o = rwl->rwl_owner) & op->check) != 0)) {
+	while (__predict_false(((o = rwl->rwl_owner) & check) != 0)) {
 		unsigned long set = o | op->wait_set;
 		int do_sleep;
Index: sys/rwlock.h
===================================================================
RCS file: /cvs/src/sys/sys/rwlock.h,v
retrieving revision 1.28
diff -u -p -r1.28 rwlock.h
--- sys/rwlock.h        11 Jan 2021 18:49:38 -0000      1.28
+++ sys/rwlock.h        18 Feb 2023 07:44:13 -0000
@@ -117,6 +117,7 @@ struct rwlock {
 #define RW_NOSLEEP		0x0040UL /* don't wait for the lock */
 #define RW_RECURSEFAIL		0x0080UL /* Fail on recursion for RRW locks. */
 #define RW_DUPOK		0x0100UL /* Permit duplicate lock */
+#define RW_FORCEREAD		0x0200UL /* Let a reader preempt writers. */
 
 /*
  * for rw_status() and rrw_status() only: exclusive lock held by
Index: uvm/uvm_map.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_map.c,v
retrieving revision 1.312
diff -u -p -r1.312 uvm_map.c
--- uvm/uvm_map.c       13 Feb 2023 14:52:55 -0000      1.312
+++ uvm/uvm_map.c       18 Feb 2023 07:46:23 -0000
@@ -5406,7 +5406,7 @@ void
 vm_map_lock_read_ln(struct vm_map *map, char *file, int line)
 {
 	if ((map->flags & VM_MAP_INTRSAFE) == 0)
-		rw_enter_read(&map->lock);
+		rw_enter(&map->lock, RW_READ|RW_FORCEREAD);
 	else
 		mtx_enter(&map->mtx);
 	LPRINTF(("map   lock: %p (at %s %d)\n", map, file, line));
