On 16.04.20 08:34, Pavel Timofeev wrote:

Tue, 14 Apr 2020, 12:51 Kristof Provost <k...@freebsd.org>:

Hi,

Thanks to support from The FreeBSD Foundation I’ve been able to work
on improving the throughput of if_bridge.
It changes the (data path) locking to use the NET_EPOCH infrastructure.
Benchmarking shows substantial improvements (x5 in test setups).

This work is ready for wider testing now.

It’s under review here: https://reviews.freebsd.org/D24250

Patch for CURRENT: https://reviews.freebsd.org/D24250?download=true
Patches for stable/12:
https://people.freebsd.org/~kp/if_bridge/stable_12/

I’m not currently aware of any panics or issues resulting from these
patches.

Do note that if you run a Bhyve + tap on bridges setup the tap code
suffers from a similar bottleneck and you will likely not see major
improvements in single VM to host throughput. I would expect, but have
not tested, improvements in overall throughput (i.e. when multiple VMs
send traffic at the same time).

Best regards,
Kristof

Hi!
Thank you for your work!
Do you know if epair suffers from the same issue as tap?

As Kristof Provost said, if_epair has per-CPU locks, but a problem exists a layer above the epair driver. At least on FreeBSD 12.0 and 12.1, all packet processing happens in a single netisr thread, which becomes CPU bound and limits how fast useful traffic can move through epair interfaces. AFAIK TCP doesn't benefit from multiple netisr threads, but unordered protocols (e.g. UDP) could profit from multiple threads.
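For anyone who wants to experiment, the netisr dispatch behaviour can be inspected at runtime with sysctl(8) and tuned via loader tunables. A minimal sketch, assuming stock FreeBSD 12.x; the values shown are illustrative examples, not recommendations:

```
# Inspect current netisr configuration at runtime:
#   sysctl net.isr          (shows dispatch policy, numthreads, etc.)
#
# /boot/loader.conf -- boot-time tunables (take effect after reboot)
net.isr.maxthreads=4      # allow multiple netisr worker threads (-1 = one per CPU)
net.isr.bindthreads=1     # pin each netisr thread to a CPU
```

Note that, as mentioned above, spreading work across multiple netisr threads mainly helps unordered traffic; a single TCP flow will still be processed by one thread.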


I have only tested with iperf (using multiple connections) between a FreeBSD 12.x host and a vnet-enabled jail connected via an epair interface, and maxed out at about 1-2 Gb/s depending on the CPU's single-threaded performance.

_______________________________________________
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"