Hello,

Michael Kelly, on Thu, 19 Mar 2026 17:22:57 +0000, wrote:
> If this analysis is agreed, then it seems to me that it will be necessary to
> rearrange the code to call vm_fault(), for those virtual addresses that
> cannot be wired fast, without the map lock held.

It looks so indeed. The question becomes whether we can avoid keeping
the read lock. AIUI, it is kept because vm_fault_wire walks the VMA, so
it wants to prevent another thread from modifying the VMA entry
concurrently. For instance, userland could be munmap()ing it
concurrently, or playing some other memory trick. Possibly we could make
vm_fault_wire cope with concurrent changes to the entry's start/end, but
the entry itself could be dropped and replaced by something else at the
same virtual address, and so on, so it sounds like a very slippery
slope. See the comment:

        /*
         * HACK HACK HACK HACK
         *
         * If we are wiring in the kernel map or a submap of it,
         * unlock the map to avoid deadlocks.  We trust that the
         * kernel threads are well-behaved, and therefore will
         * not do anything destructive to this region of the map
         * while we have it unlocked.  We cannot trust user threads
         * to do the same.
         *
         * We set the in_transition bit in the entries to prevent
         * them from getting coalesced with their neighbors at the
         * same time as we're accessing them.
         *
         * HACK HACK HACK HACK
         */

It'd be useful to determine why vm_fault_page is blocking.  You
mentioned the page being busy. I guess what could be happening is that
defpager is given as RPC input some data which is currently being
paged in, and some other part of defpager is trying to provide that
data, but cannot because it cannot take the map lock?

Maybe, when thread->vm_privilege == TRUE, we can do the same unlock as
for kernel maps/submaps, so that vm-privileged threads (which we already
trust for other reasons) can avoid the lock loop.

Samuel