On 17 August 2016 at 08:43, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>> On 17 Aug 2016, at 17:38, Adrian Chadd <adrian.ch...@gmail.com> wrote:
>>
>> [snip]
>>
>> ok, so this is what I was seeing when I was working on this stuff last.
>>
>> The big abusers are:
>>
>> * the so_snd lock, for TX'ing producer/consumer socket data
>> * tcp stack pcb locking (which RSS tries to work around, but again it
>>   only helps with multiple sockets, not producer/consumer locking)
>> * for some of the workloads, the scheduler spinlocks are pretty
>>   heavily contended, and that's likely worth digging into.
>>
>> Thanks! I'll go try this on a couple of boxes I have with
>> Intel/Chelsio 40G hardware in them and see if I can reproduce it. (My
>> test boxes have the 40G NICs in NUMA domain 1...)
>
> You're welcome, happy to help and troubleshoot :)
>
> What about the performance differing from one reboot to another, as if
> the NUMA domains had switched (0 to 1 and 1 to 0)?
> Have you already seen this?
I've seen some varying behaviours, yeah. There are a lot of missing
pieces in kernel-side NUMA, so a lot of the kernel memory allocation
behaviours are undefined. Well, they're defined; it's just that there's
no way right now for the kernel (e.g. for mbufs) to allocate
domain-local memory. So it's "by accident": sometimes it's fine,
sometimes it's not.

-adrian
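For context on what "allocate domain-local memory" would look like:
later FreeBSD releases (12.0 and up) grew a malloc_domainset(9) API for
exactly this; it did not exist when this thread was written. Below is a
minimal sketch assuming that API. The malloc type M_RXBUF, the function
alloc_rx_ring(), and the nic_domain parameter are illustrative names,
not anything from a real driver.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/domainset.h>

MALLOC_DEFINE(M_RXBUF, "rxbuf", "example NIC receive buffers");

static void *
alloc_rx_ring(size_t size, int nic_domain)
{
	/*
	 * DOMAINSET_PREF() asks the allocator for pages from
	 * nic_domain first, falling back to other domains under
	 * memory pressure.  Without an API like this, the kernel
	 * hands back memory from whichever domain it happens to
	 * pick, which is the "by accident" behaviour described
	 * above.
	 */
	return (malloc_domainset(size, M_RXBUF,
	    DOMAINSET_PREF(nic_domain), M_WAITOK | M_ZERO));
}

A driver would pass the domain its device is attached to, so that RX
buffers end up on the same socket as the NIC instead of bouncing
cache lines across the interconnect.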