Hi Onkar,

Thanks for your email. Your setup isn't entirely clear to me, so I have a
few queries in-line.

On 04/10/2018 06:06, Onkar Pednekar wrote:
> Hi,
> 
> I have been experimenting with OVS DPDK on 1G interfaces. The system has
> 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable ports,
> but the data traffic runs only on dpdk ports.
> 
> DPDK ports are backed by vhost-user netdevs and I have configured the
> system so that hugepages are enabled, CPU cores are isolated with PMD
> threads allocated to them, and the vCPUs are pinned.
> 
> When I run UDP traffic, I see ~1G throughput on the dpdk interfaces with
> < 1% packet loss. However, with TCP traffic, I see around 300Mbps
> throughput. I see that setting generic receive offload to off helps, but
> the TCP throughput is still very low compared to the NIC's capabilities.
> I know that there will be some performance degradation for TCP compared
> to UDP, but this is way below expected.
> 

When transmitting traffic between the DPDK ports, what are the flows you
have set up? Do they follow a p2p or pvp setup? In other words, does the
traffic flow between the VM and the physical ports, or only between
physical ports?

> I don't see any packets dropped for tcp on the internal VM (virtual)
> interfaces.
> 
> I would like to know if there are any settings (offloads) for the
> interfaces or any other config I might be missing.

What is the MTU set on the DPDK ports? Both physical and vhost-user?

$ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
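A few other diagnostics that are often useful for this kind of issue. The
commands are standard OVS and ethtool tooling, but the interface names
(dpdk0, eth0) are placeholders for your actual ports:

```shell
# Requested vs. effective MTU on a DPDK port
# (mtu_request is what OVS asks for; mtu is what is currently in effect)
ovs-vsctl get Interface dpdk0 mtu
ovs-vsctl get Interface dpdk0 mtu_request

# Per-port rx/tx and drop counters as seen by OVS
ovs-vsctl get Interface dpdk0 statistics

# Inside the VM: check segmentation/receive offloads on the virtio NIC
ethtool -k eth0 | grep -E 'segmentation|receive-offload'
```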

This will help clarify some doubts about your setup first.

Tiago.

> 
> Thanks,
> Onkar
> 
> 
_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
