On 21/10/23(Sat) 14:28, Miod Vallat wrote:
> > Stuart, Miod, I wonder if this also help for the off-by-one issue you
> > are seeing.  It might not.
> 
> It makes the aforementioned issue disappear on the affected machine.

Thanks a lot for testing!

> > Comments, ok?
> 
> > diff --git sys/uvm/uvm_pdaemon.c sys/uvm/uvm_pdaemon.c
> > index 284211d226c..a26a776df67 100644
> > --- sys/uvm/uvm_pdaemon.c
> > +++ sys/uvm/uvm_pdaemon.c
> 
> > @@ -917,9 +914,7 @@ uvmpd_scan(struct uvm_pmalloc *pma, struct uvm_constraint_range *constraint)
> >      */
> >     free = uvmexp.free - BUFPAGES_DEFICIT;
> >     swap_shortage = 0;
> > -   if (free < uvmexp.freetarg &&
> > -       uvmexp.swpginuse == uvmexp.swpages &&
> > -       !uvm_swapisfull() &&
> > +   if (free < uvmexp.freetarg && uvm_swapisfilled() && !uvm_swapisfull() &&
> >         pages_freed == 0) {
> >             swap_shortage = uvmexp.freetarg - free;
> >     }
> 
> It's unfortunate that you now invoke two uvm_swapisxxx() routines, which
> will both acquire a mutex. Maybe a third uvm_swapisxxx routine could be
> introduced to compute the swapisfilled && !swapisfull condition at once?

I'm not interested in such micro-optimization yet.  Avoiding a second mutex
acquisition is IMHO not worth adding a half-shiny special-purpose helper.
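For what it's worth, the combined helper Miod describes could look roughly
like the sketch below.  This assumes the "filled" and "full" checks boil down
to comparing `uvmexp.swpginuse' and `uvmexp.swpgonly' against `uvmexp.swpages'
under the swap data mutex; the name and details are illustrative only, not
code from the tree:

/*
 * Sketch: return non-zero when all configured swap pages are in use
 * ("filled") but some of them could still be reclaimed (!"full"),
 * taking the swap data mutex only once instead of twice.
 */
int
uvm_swapisfilled_but_not_full(void)
{
	int result;

	mtx_enter(&uvm_swap_data_lock);
	result = (uvmexp.swpginuse == uvmexp.swpages) &&
	    (uvmexp.swpgonly < uvmexp.swpages);
	mtx_leave(&uvm_swap_data_lock);

	return result;
}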

However, if someone is interested in going in this direction, I'd suggest
trying to place `uvmexp.freetarg' under the same lock and dealing with all
its occurrences.  This is a possible next step to reduce the scope of
uvm_lock_pageq(), which is currently responsible for most of the MP
contention in UVM.
