On 10/25/2018 12:05 PM, Mehmet Yaren wrote:
Hi Ian,

We are using the XL710, as shown below:

argela@dsfc-ovs:~$ lspci

...

08:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

08:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02) ...


OK, I assume it's the XL710-QDA2, as that has 2 x 40Gb ports.

Have you tried the ports with another DPDK application such as Testpmd? Do you see 40Gb traffic being processed on the ports or is it still ~10Gb in that DPDK application also?

From reading the tech docs on the card, it seems the QSFP+ ports can be configured to run at either 10Gb or 40Gb per port, so I'd like to clarify this first.
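
For example, you could bring both ports up in testpmd and check the negotiated link speed there; the binary path, core list and forwarding mode below are only an illustration and will need adjusting to your build and setup:

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-4 -n 4 -w 0000:08:00.0 -w 0000:08:00.1 -- -i --forward-mode=mac
testpmd> show port info all
testpmd> start

If "show port info all" reports a link speed of 40000 Mbps on both ports and testpmd still tops out around 10Gb, then the limit is probably not in OVS itself.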

Our test setup is a simple phy-to-phy test. We are using Trex to send traffic into our DPDK-based OvS with the XL710; it is a basic loopback setup. We see ~10Gbps at most on OvS even though our CPU usage is below 25%, and OvS drops the remaining traffic above ~10Gb. In addition, even if we use just one CPU in OvS, we see the same ~10Gb of traffic. Why doesn't using more CPUs increase our traffic handling? Our configuration is as follows:

Out of interest, what is the profile of the test traffic?
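
In particular the packet size matters, e.g. 64B packets versus an IMIX mix. Something along the lines of the Trex invocation below would be useful to know (the profile file, rate multiplier and duration here are purely hypothetical placeholders):

./t-rex-64 -f cap2/imix_fast_1g.yaml -m 40 -d 60 -c 4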

Thanks
Ian

argela@dsfc-ovs:~$ sudo ovs-ofctl dump-flows br0
 cookie=0x0, duration=135.441s, table=0, n_packets=38181256, n_bytes=6251887265, priority=50,ip,in_port=dpdk0,nw_dst=10.0.0.0/16 actions=mod_dl_dst:3c:fd:fe:a8:23:e0,output:dpdk1

 cookie=0x0, duration=128.913s, table=0, n_packets=14099677, n_bytes=8482650942, priority=50,ip,in_port=dpdk1,nw_dst=10.0.0.0/16 actions=mod_dl_dst:3c:fd:fe:a8:23:e1,output:dpdk0
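
These flows were added with commands along the lines of:

argela@dsfc-ovs:~$ sudo ovs-ofctl add-flow br0 "priority=50,ip,in_port=dpdk0,nw_dst=10.0.0.0/16,actions=mod_dl_dst:3c:fd:fe:a8:23:e0,output:dpdk1"
argela@dsfc-ovs:~$ sudo ovs-ofctl add-flow br0 "priority=50,ip,in_port=dpdk1,nw_dst=10.0.0.0/16,actions=mod_dl_dst:3c:fd:fe:a8:23:e1,output:dpdk0"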

argela@dsfc-ovs:~$ sudo ovs-vsctl show

0ce5f427-33c6-4d79-a67c-b2a1c588c3e9

     Bridge "br0"

         Port "dpdk0"

Interface "dpdk0"

type: dpdk

options: {dpdk-devargs="0000:08:00.0", n_rxq="4"}

         Port "br0"

Interface "br0"

type: internal

         Port "dpdk1"

Interface "dpdk1"

type: dpdk

options: {dpdk-devargs="0000:08:00.1", n_rxq="4"}
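
The DPDK ports were added roughly as follows (reconstructed from the configuration shown above; the pmd-cpu-mask value is an assumption, it just needs to cover cores 0-7 used in the affinity settings):

argela@dsfc-ovs:~$ sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xff
argela@dsfc-ovs:~$ sudo ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:08:00.0 options:n_rxq=4
argela@dsfc-ovs:~$ sudo ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:08:00.1 options:n_rxq=4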

argela@dsfc-ovs:~$ sudo ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 0:

   isolated : true

   port: dpdk0             queue-id:  0  pmd usage: 25 %

pmd thread numa_id 1 core_id 1:

   isolated : true

   port: dpdk0             queue-id:  1  pmd usage: 25 %

pmd thread numa_id 0 core_id 2:

   isolated : true

   port: dpdk0             queue-id:  2  pmd usage: 23 %

pmd thread numa_id 1 core_id 3:

   isolated : true

   port: dpdk0             queue-id:  3  pmd usage: 26 %

pmd thread numa_id 0 core_id 4:

   isolated : true

   port: dpdk1             queue-id:  0  pmd usage:  9 %

pmd thread numa_id 1 core_id 5:

   isolated : true

   port: dpdk1             queue-id:  1  pmd usage: 10 %

pmd thread numa_id 0 core_id 6:

   isolated : true

   port: dpdk1             queue-id:  2  pmd usage:  9 %

pmd thread numa_id 1 core_id 7:

   isolated : true

   port: dpdk1             queue-id:  3  pmd usage: 10 %
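
We can also share the per-PMD cycle breakdown if that helps, i.e. the output of:

argela@dsfc-ovs:~$ sudo ovs-appctl dpif-netdev/pmd-stats-show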

Regards,
Mehmet.

Ian Stokes <ian.sto...@intel.com> wrote on Wed, 24 Oct 2018 at 17:23:

    On 10/23/2018 1:02 PM, mehmetyaren wrote:
     > Hi,
     >
     > We want to send 40 Gbit of traffic with the t-rex traffic generator and
     > are using a DPDK NIC that supports 40 Gbit, but when we generate 40 Gbit
     > of traffic the NIC does not transmit all of it; only about 10 Gbit gets
     > through. We are using Open vSwitch 2.10.0 and DPDK 17.11.2.

    What type of NICs are being used in your setup?

     >
     > We made some configuration changes to increase the number of cores used
     > by the dpdk ports, using the commands below:
     >
     > ovs-vsctl set interface dpdk0 options:n_rxq=4
     > other_config:pmd-rxq-affinity="0:0,1:1,2:2,3:3"
     >
     > ovs-vsctl set interface dpdk1 options:n_rxq=4
     > other_config:pmd-rxq-affinity="0:4,1:5,2:6,3:7"
     >
     > And we have seen that the dpdk ports do not use the full pmd capacity,
     > as shown in the results below:
     >
     > sudo ovs-appctl dpif-netdev/pmd-rxq-show
     > pmd thread numa_id 0 core_id 0:
     >
     >    isolated : true
     >
     >    port: dpdk0             queue-id:  0  pmd usage: 19 %
     >
     > pmd thread numa_id 1 core_id 1:
     >
     >    isolated : true
     >
     >    port: dpdk0             queue-id:  1  pmd usage: 20 %
     >
     > pmd thread numa_id 0 core_id 2:
     >
     >    isolated : true
     >
     >    port: dpdk0             queue-id:  2  pmd usage: 20 %
     >
     > pmd thread numa_id 1 core_id 3:
     >
     >    isolated : true
     >
     >    port: dpdk0             queue-id:  3  pmd usage: 21 %
     >
     > pmd thread numa_id 0 core_id 4:
     >
     >    isolated : true
     >
     >    port: dpdk1             queue-id:  0  pmd usage: 13 %
     >
     > pmd thread numa_id 1 core_id 5:
     >
     >    isolated : true
     >
     >    port: dpdk1             queue-id:  1  pmd usage: 13 %
     >
     > pmd thread numa_id 0 core_id 6:
     >
     >    isolated : true
     >
     >    port: dpdk1             queue-id:  2  pmd usage: 13 %
     >
     > pmd thread numa_id 1 core_id 7:
     >
     >    isolated : true
     >
     >    port: dpdk1             queue-id:  3  pmd usage: 15 %
     >

    Can you provide more information regarding the test setup? I assume it's
    a simple hairpin test (phy to phy).

    What flow rules are you using?

    Regards
    Ian

     > Can anyone help me?
     >
     > Mehmet.
     >


_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
