The correct solution would be to make single_thread_set() and friends
not rely on the KERNEL_LOCK, but only on the SCHED_LOCK.

The not-so-correct solution is to fix the assert and allow the recursion;
this is the diff I have had in my tree for a while:

diff --git sys/kern/kern_lock.c sys/kern/kern_lock.c
index c87cb9a..cabdb9a 100644
--- sys/kern/kern_lock.c
+++ sys/kern/kern_lock.c
@@ -123,7 +123,8 @@ _kernel_lock_init(void)
 void
 _kernel_lock(void)
 {
-       SCHED_ASSERT_UNLOCKED();
+       if (!__mp_lock_held(&kernel_lock))
+               SCHED_ASSERT_UNLOCKED();
        __mp_lock(&kernel_lock);
 }
