On Fri, Jun 03, 2016 at 11:29:13AM -0600, Alan Somers wrote:
> On Fri, Jun 3, 2016 at 11:26 AM, Konstantin Belousov
> <kostik...@gmail.com> wrote:
> > On Fri, Jun 03, 2016 at 09:29:16AM -0600, Alan Somers wrote:
> >> I notice that, with the exception of the VM_PHYSSEG_MAX change, these
> >> patches never made it into head or ports. Are they unsuitable for low
> >> core-count machines, or is there some other reason not to commit them?
> >> If not, what would it take to get these into 11.0 or 11.1 ?
> > The fast page fault handler was redesigned and committed in r269728
> > and r270011 (with several follow-ups).
> > Instead of lock-less buffer queue iterators, Jeff changed the buffer allocator
> > to use UMA; see r289279. Another improvement to the buffer cache was
> > committed as r267255.
> > What was not committed is the aggressive pre-population of the phys objects
> > mem queue, and a knob to further split NUMA domains into smaller domains.
> > The latter change has bit-rotted.
> > In fact, I think that under that load, what you would see right now on
> > HEAD is contention on vm_page_queue_free_mtx. There are plans to
> > handle it.
> Thanks for the update. Is it still recommended to enable the
> multithreaded pagedaemon?
A single-threaded pagedaemon cannot maintain good system state even
on non-NUMA systems if the machine has a large amount of memory. That was
the motivation for the NUMA domain split patch. So yes, to get better
performance you should enable the VM_NUMA_ALLOC option.
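For reference, enabling that option means adding it to a custom kernel
configuration and rebuilding. A minimal sketch (the MYNUMA name is just a
placeholder; only VM_NUMA_ALLOC itself comes from this thread):

```
# Hypothetical /usr/src/sys/amd64/conf/MYNUMA kernel config fragment.
# VM_NUMA_ALLOC is the option discussed above; the rest is boilerplate.
include GENERIC
ident   MYNUMA
options VM_NUMA_ALLOC   # NUMA-aware page allocation
```

Then, from /usr/src, something like `make buildkernel KERNCONF=MYNUMA`,
`make installkernel KERNCONF=MYNUMA`, and a reboot would pick it up.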
Unfortunately, there were some code changes of quite low quality which
caused NUMA-enabled systems to randomly fail with a NULL pointer
dereference in the vm page allocation path. Supposedly that was fixed, but
you should verify it yourself. One consequence of those changes was that
nobody used or tested NUMA-enabled systems under any significant load for
quite a long time.