Hey folks, I've been trying to leverage VXLAN hardware offload (checksum) to improve tunnel performance.
If I run VXLAN tunnels over a single 10Gbps interface, I can achieve roughly 9Gbps of throughput between VMs with a 1500-byte-MTU vNIC; without hardware offload, performance is much worse. With bonding (2x10G), however, throughput doesn't go above 8Gbps. I've tried bonding via ifenslave (6.5Gbps) as well as OVS bonding (8Gbps); a sketch of the OVS bonding setup I mean is below. Any ideas why bonding seems to negate the hardware offload capability? Any recommendations on a configuration that fully leverages this hardware?
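For reference, here's a minimal sketch of the OVS bonding + VXLAN topology I'm describing (bridge and bond names, eth0/eth1, and the IPs are placeholders rather than my exact config):

  # Bridge carrying the two bonded 10G NICs; the tunnel endpoint IP lives
  # on br-phy's internal interface so outer VXLAN traffic routes out the bond.
  ovs-vsctl add-br br-phy
  # balance-tcp hashes per-flow but requires LACP on the switch side.
  ovs-vsctl add-bond br-phy bond0 eth0 eth1 bond_mode=balance-tcp lacp=active
  ip addr add 192.0.2.1/24 dev br-phy
  ip link set br-phy up
  # Integration bridge with the VXLAN tunnel port toward the peer hypervisor.
  ovs-vsctl add-br br-int
  ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan \
      options:remote_ip=192.0.2.2

Thanks. -Simon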
