I have two VMs on the same host (an OpenStack compute node). Both VMs have these vNICs attached:
* pci-passthrough 10Gbps (vNIC1)
* virtio (OVS) (vNIC2)

The traffic flows as follows:

Traffic generator -> pNIC -> [vm1 vNIC1 (passthrough) -> fwd -> vNIC2] -> OVS -> [vm2 vNIC2]

Because vNIC1 is a passthrough device, I can reach 10Gbps when redirecting the traffic back out through vNIC1, but redirecting it through OVS drops the throughput to 150Mbps. I'm aware this is not OVS-DPDK, but according to http://www.openvswitch.org/support/ovscon2014/18/1600-ovs_perf.pptx it should still reach around 1.1Gbps for 64B packets.

This is my host:

* Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
* 24 CPUs (HT)
* VT-x enabled
* 32GB RAM
* isolated CPUs

Both VMs have:

* 8 GB RAM
* 3 pinned CPUs (only one of them actually does the traffic forwarding)

All sibling CPUs are idle, so I effectively treat the host as a non-HT machine. I'm running OpenStack Rocky.

# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.10.0

Is there anything I can do to boost native OVS performance? Doesn't 150Mbps look a bit slow? (Some of the checks I plan to run myself are sketched in the P.S. below.)

Thanks,
Yogev
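P.S. To narrow down where the bottleneck is, this is roughly what I intend to check on the host first; the tap interface name tapXXXX below is just a placeholder for the port that backs vNIC2 in my setup:

To see whether packets are hitting the kernel flow cache or being punted up to ovs-vswitchd in userspace (a high "missed"/"lost" count relative to "hit" would point at the slow path):

# ovs-appctl dpctl/show -s
# ovs-appctl upcall/show

To confirm that checksum/segmentation offloads have not been disabled on the tap device attached to OVS:

# ethtool -k tapXXXX | grep -E 'checksum|segmentation'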
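P.P.S. One thing I am considering, in case a single vCPU/queue is the limit on the virtio path, is enabling virtio multiqueue for vNIC2. My understanding (please correct me if this is wrong for Rocky) is that it is driven by an image property plus ethtool inside the guest; the image name and interface name below are placeholders:

# openstack image set --property hw_vif_multiqueue_enabled=true my-guest-image

(then reboot/rebuild the VMs from that image with a flavor that has enough vCPUs)

Inside the guest, raise the number of combined queues on the virtio NIC:

# ethtool -L eth1 combined 3

Would that be expected to help with plain kernel OVS, or is it only worthwhile together with OVS-DPDK?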
