On Sat, Dec 12, 2020 at 10:54:57PM +1000, Jonathan Matthew wrote:
> On Thu, Dec 10, 2020 at 10:46:58AM -0300, Martin Pieuchot wrote:
> > On 08/12/20(Tue) 22:55, Jonathan Matthew wrote:
> > > On Mon, Dec 07, 2020 at 03:15:50PM -0300, Martin Pieuchot wrote:
> > > > Getting a page from the fault handler might require poking at some
> > > > swap-related state.
> > > >
> > > > These are not in the hot-path of the fault handler so for the moment
> > > > just assert that the KERNEL_LOCK() is held or grab it if the function
> > > > might be called from a future unlocked path.
> > > >
> > > > ok?
> > >
> > > Could you add 'K' to the list of locks in the comment above struct uvmexp
> > > too?
> >
> > Updated diff below.
> >
> > > I went looking for other uses of swpgonly and saw that it's used under
> > > uvm_map_teardown -> uvm_unmap_kill_entry -> uvm_km_pgremove,
> > > and uvm_map_teardown ensures that the kernel lock is not held.
> > > Not related to this diff exactly, but is this something we need to fix?
> >
> > I suppose that the problem can only occur if a kernel thread is exiting
> > since this code is only executed for the kernel pmap. Anyway I added an
> > assertion.
>
> Right, and as I understand it, kernel threads all share the proc0 vm space,
> so its reference count won't ever reach 0, so the kernel map portions of
> uvm_unmap_kill_entry() can't be reached from the reaper. Looks like this is
> all safe, it just requires a bit more reading than I did the first time.
> I'll see if I can find a way to make it more clear.
And now that I've tested this out and checked that it doesn't blow up when
you drive the machine into swap, ok jmatthew@
>
> >
> > Index: uvm/uvm_km.c
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvm_km.c,v
> > retrieving revision 1.137
> > diff -u -p -r1.137 uvm_km.c
> > --- uvm/uvm_km.c 23 May 2020 06:15:09 -0000 1.137
> > +++ uvm/uvm_km.c 10 Dec 2020 13:33:49 -0000
> > @@ -243,6 +243,7 @@ uvm_km_pgremove(struct uvm_object *uobj,
> > voff_t curoff;
> > int slot;
> >
> > + KERNEL_ASSERT_LOCKED();
> > KASSERT(uobj->pgops == &aobj_pager);
> >
> > for (curoff = start ; curoff < end ; curoff += PAGE_SIZE) {
> > Index: uvm/uvm_swap.c
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvm_swap.c,v
> > retrieving revision 1.147
> > diff -u -p -r1.147 uvm_swap.c
> > --- uvm/uvm_swap.c 29 Sep 2020 11:47:41 -0000 1.147
> > +++ uvm/uvm_swap.c 10 Dec 2020 13:30:30 -0000
> > @@ -1403,7 +1403,7 @@ uvm_swap_alloc(int *nslots, boolean_t le
> > /*
> > * lock data lock, convert slots into blocks, and enter loop
> > */
> > -
> > + KERNEL_ASSERT_LOCKED();
> > ReTry: /* XXXMRG */
> > LIST_FOREACH(spp, &swap_priority, spi_swappri) {
> > TAILQ_FOREACH(sdp, &spp->spi_swapdev, swd_next) {
> > @@ -1449,8 +1449,10 @@ uvm_swapisfull(void)
> > {
> > int result;
> >
> > + KERNEL_LOCK();
> > KASSERT(uvmexp.swpgonly <= uvmexp.swpages);
> > result = (uvmexp.swpgonly == uvmexp.swpages);
> > + KERNEL_UNLOCK();
> >
> > return result;
> > }
> > @@ -1465,6 +1467,7 @@ uvm_swap_markbad(int startslot, int nslo
> > {
> > struct swapdev *sdp;
> >
> > + KERNEL_LOCK();
> > sdp = swapdrum_getsdp(startslot);
> > if (sdp != NULL) {
> > /*
> > @@ -1475,6 +1478,7 @@ uvm_swap_markbad(int startslot, int nslo
> > */
> > sdp->swd_npgbad += nslots;
> > }
> > + KERNEL_UNLOCK();
> > }
> >
> > /*
> > @@ -1501,7 +1505,7 @@ uvm_swap_free(int startslot, int nslots)
> > * in the extent, and return. must hold pri lock to do
> > * lookup and access the extent.
> > */
> > -
> > + KERNEL_LOCK();
> > sdp = swapdrum_getsdp(startslot);
> > KASSERT(uvmexp.nswapdev >= 1);
> > KASSERT(sdp != NULL);
> > @@ -1533,6 +1537,7 @@ uvm_swap_free(int startslot, int nslots)
> > }
> > }
> > #endif /* UVM_SWAP_ENCRYPT */
> > + KERNEL_UNLOCK();
> > }
> >
> > /*
> > @@ -1567,6 +1572,7 @@ uvm_swap_get(struct vm_page *page, int s
> > return VM_PAGER_ERROR;
> > }
> >
> > + KERNEL_LOCK();
> > /* this page is (about to be) no longer only in swap. */
> > uvmexp.swpgonly--;
> >
> > @@ -1577,7 +1583,7 @@ uvm_swap_get(struct vm_page *page, int s
> > /* oops, the read failed so it really is still only in swap. */
> > uvmexp.swpgonly++;
> > }
> > -
> > + KERNEL_UNLOCK();
> > return (result);
> > }
> >
> > @@ -1599,6 +1605,8 @@ uvm_swap_io(struct vm_page **pps, int st
> > struct swapdev *sdp;
> > int encrypt = 0;
> > #endif
> > +
> > + KERNEL_ASSERT_LOCKED();
> >
> > write = (flags & B_READ) == 0;
> > async = (flags & B_ASYNC) != 0;
> > Index: uvm/uvmexp.h
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvmexp.h,v
> > retrieving revision 1.6
> > diff -u -p -r1.6 uvmexp.h
> > --- uvm/uvmexp.h 1 Dec 2020 13:56:22 -0000 1.6
> > +++ uvm/uvmexp.h 10 Dec 2020 13:31:01 -0000
> > @@ -42,6 +42,7 @@
> > *
> > * Locks used to protect struct members in this file:
> > * I immutable after creation
> > + * K kernel lock
> > * F uvm_lock_fpageq
> > */
> > struct uvmexp {
> > @@ -79,9 +80,9 @@ struct uvmexp {
> >
> > /* swap */
> > int nswapdev; /* number of configured swap devices in system */
> > - int swpages; /* number of PAGE_SIZE'ed swap pages */
> > + int swpages; /* [K] number of PAGE_SIZE'ed swap pages */
> > int swpginuse; /* number of swap pages in use */
> > - int swpgonly; /* number of swap pages in use, not also in RAM */
> > + int swpgonly; /* [K] number of swap pages in use, not also in RAM */
> > int nswget; /* number of swap pages moved from disk to RAM */
> > int nanon; /* XXX number total of anon's in system */
> > int unused05; /* formerly nanonneeded */