Hello all,

I saw 4.6 was released, great! From the release notes:

> SMP performance was already very good. As part of the NVMe driver work we
> revamped the buffer cache subsystem and a number of other I/O related
> paths, further reducing lock contention and IPI signalling overheads.

Was the IPI signalling cost reduced as a side effect of the buffer cache
improvements, or were there also changes specific to the IPI subsystem that
improved performance?

Is there any available data illustrating or describing the IPI latencies,
overheads, sustained throughput, or other characteristics on DF across
cores and sockets on SMP systems?

I admit I have not poked around enough in the IPI code[1] to measure these
myself, but if no such data is available, I may try to do so.

[1] sys/kern/lwkt_ipiq.c (?)

Thanks!
Alex
