Hi all,

I am able to get the expected performance with OVS-DPDK on a single-socket
system, but on a system with 2 NUMA nodes the throughput is lower than
expected.

The system has 8 physical cores per socket with hyper-threading enabled, so
32 logical cores in total.

Only one physical 10G interface is in use; after binding it to DPDK, it is
associated with socket 1.
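
The association can be confirmed from sysfs (the PCI address below is just a
placeholder for the actual device):

    cat /sys/bus/pci/devices/0000:05:00.0/numa_node   # prints the NIC's NUMA node
    numactl --hardware                                # lists CPUs per node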

OVS passes the traffic from this interface to the dpdkvhostuser interfaces
of 2 VMs; the vCPUs of each VM are pinned to physical cores on different
sockets.

So the traffic flow is as follows:
PHY <-> VM1 <-> PHY
PHY <-> VM2 <-> PHY
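
The PMD threads are placed one per socket via pmd-cpu-mask. A minimal
sketch, assuming CPUs 0-7 sit on socket 1 and CPUs 8-15 on socket 2 (the
mask must be adjusted to the real topology):

    # 0x202 selects CPU 1 (socket 1) and CPU 9 (socket 2) as PMD cores
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202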

Since the only physical DPDK interface is associated with socket 1, I see
that the PMD core on socket 1 is 100% utilized, while no work is done by the
core on socket 2 where the other PMD thread is pinned. I know this is
expected, since there are no DPDK interfaces associated with socket 2. But
since I have VMs pinned to cores on socket 2, there is cross-NUMA packet
transfer, which I think is affecting the performance.
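
The per-PMD load can be inspected with the standard appctl commands:

    ovs-appctl dpif-netdev/pmd-rxq-show    # which rx queues each PMD polls
    ovs-appctl dpif-netdev/pmd-stats-show  # per-PMD cycle and packet counters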

I wanted to know if there is any configuration or parameter that can help
optimize this inter-NUMA data path.
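
For example, would settings along these lines be the right direction? This
is only a sketch: the port name, queue id, core id, and memory sizes are
placeholders, and I am not sure which OVS versions support each knob.

    # Reserve hugepage memory on both NUMA nodes at EAL init
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
    # Pin rx queue 0 of a vhost port to the socket-2 PMD core (CPU 9 here)
    ovs-vsctl set Interface vhost-user-vm2 other_config:pmd-rxq-affinity="0:9"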

Thanks,
Onkar