Hi Michael,

Thanks for your reply. My answers to your questions are inline below.

On Thu, Oct 4, 2018 at 8:01 AM Michael Richardson <[email protected]> wrote:

>
> Onkar Pednekar <[email protected]> wrote:
>     > I have been experimenting with OVS DPDK on 1G interfaces. The system
>     > has 8 cores (hyperthreading enabled) and a mix of dpdk and non-dpdk
>     > capable ports, but the data traffic runs only on the dpdk ports.
>
>     > DPDK ports are backed by vhost-user netdevs, and I have configured the
>     > system with hugepages enabled, CPU cores isolated with PMD threads
>     > allocated to them, and the VCPUs pinned.
>
>     > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces with
>     > < 1% packet loss. However, with TCP traffic, I see around 300Mbps
>
> What size packet?
>
UDP packet size = 1300 bytes.
TCP packet size = iperf's default. The physical dpdk port MTU is 2000 and the
dpdk vhost-user port MTU is 1500; the client running iperf also has a
1500-byte MTU interface, so I guess the packet size would be 1500 for TCP.
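(For reference, with a 1500-byte MTU the TCP MSS should work out to
1500 - 20 (IP) - 20 (TCP) = 1460 bytes of payload, slightly less if TCP
timestamps are enabled, so each TCP segment should be close to a full
1500-byte frame on the wire.)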

> What's your real pps?
>
UDP: the dpdk and dpdkvhostuser interfaces show ~803 Kpps
TCP: the dpdk and dpdkvhostuser interfaces show ~225 Kpps

> What do you do for test traffic?
>
Client and server machines run iperf to generate the TCP and UDP traffic,
with 6 client threads running in parallel (-P 6 option with iperf).
Client Commands:
TCP: iperf -c <server ip> -P 6 -t 90
UDP: iperf -c <server ip> -u -l1300 -b 180M -P 6 -t 90
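(So the UDP offered load is 6 x 180 Mbps ~= 1.08 Gbps in total, roughly line
rate on the 1G port; the TCP run has no -b, so iperf just sends as fast as
the TCP stacks allow.)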

> What is your latency?  Are any queues full?
>
How can I check this?
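If it helps, I can collect the per-PMD and per-port counters while the test
runs, plus a plain ping from client to server for latency. Something along
these lines (dpdk0 is just a placeholder for my physical dpdk port name):

ovs-appctl dpif-netdev/pmd-stats-show      # per-PMD packet/cycle stats
ovs-appctl dpif-netdev/pmd-rxq-show        # rx queue to PMD assignment and usage
ovs-vsctl get Interface dpdk0 statistics   # per-port stats, incl. rx/tx drops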


> Are you layer-2 switching or layer-3 routing, or something exotic?
>
OVS contains a mix of L2 and L3 flows, but the (TCP/UDP) test traffic path
uses L2 switching.
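If useful, I can also dump the flows that the iperf traffic actually hits,
e.g. (br0 being a placeholder for my bridge name):

ovs-appctl dpctl/dump-flows   # datapath (megaflow) entries
ovs-ofctl dump-flows br0      # OpenFlow table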

>
>     > throughput. I see that setting generic receive offload to off helps,
>     > but the TCP throughput is still very low compared to the NIC's
>     > capabilities. I know that there will be some performance degradation
>     > for TCP as against UDP, but this is way below expected.
>
> Receive offload should only help if you are terminating the TCP flows.
> I could well see that it would affect a switching situation significantly.
> What are you using for TCP flow generation?  Are you running real TCP
> stacks with window calculations and back-off, etc?  Is your offered load
> actually going up?
>
I am using iperf to generate the traffic between a client and a server (so
real TCP stacks on both ends), with a stable offered load.
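If it would help, I can also record TCP retransmissions on the client while
the test runs, e.g.:

netstat -s | grep -i retrans
ss -ti dst <server ip>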

>
>     > I don't see any packets dropped for tcp on the internal VM (virtual)
>     > interfaces.
>
> "virtual"?
> I don't understand: do you have senders/receivers on the machine under
> test?
>
By "virtual" I meant the dpdk vhost-user interfaces. iperf runs on the client
and server machines, which are external to the machine (with OVS) under test.

Topology:
[topology diagram attached: image.png]


>
> --
> ]               Never tell me the odds!                 | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works        | network architect  [
> ]     [email protected]  http://www.sandelman.ca/        |   ruby on rails    [
