[dpdk-users] Need help to run test_pmd via sr-iov setup

2017-10-11 Thread zhi ma
My steps:

1. Set "intel_iommu=on" in the kernel boot parameters.
2. Update the host grub config and reboot.
3. Create the VFs:
       echo 1 > /sys/class/net/eth17/device/sriov_numvfs
       echo 1 > /sys/class/net/eth18/device/sriov_numvfs
       cat /sys/class/net/eth17/device/sriov_numvfs
       1
       cat /sys/class/net/eth18/device/sriov_numvfs
       1
4. Set the MACs:
       ip link set eth17 vf 0 mac aa:bb:cc:dd:00:00
       ip link set eth18 vf 0 mac aa:bb:cc:dd:01:00
5. rmmod ixgbevf
6. Start an Ubuntu 14.04 VM with KVM virt-manager and add the two PCI host
   devices (the virtual function devices).
7. Run the DPDK app testpmd in the VM (the two ports are connected back to
   back):
       start tx_first
       stop
   -> no packets
8. If I run testpmd inside the host machine it works fine, so those two ports
   are connected fine.

NIC is an Intel X520-SR2; both host and guest VM run Ubuntu 14.04.

Anything wrong with my setup? Please help.
Part of my virsh XML (snippet stripped by the list archive):


Re: [dpdk-users] VLAN tags always stripped on i40evf [VMware SR-IOV]

2017-10-11 Thread Iain Barker
On Tuesday, October 10, 2017 9:49 AM (EST), Iain Barker wrote:
> I have a problem trying to get VLAN tagged frames to be received at the 
> i40evf PMD.  

With more debugging enabled, I can see that this seems to be a compatibility 
problem between DPDK and i40evf related to VLAN hardware stripping.

Specifically, when DPDK asks the VF to disable VLAN stripping but the PF policy
does not allow it to be disabled (as is the case for VMware SR-IOV), the API
returns an error:

testpmd> vlan set strip off 0
  i40evf_execute_vf_cmd(): No response for 28
  i40evf_disable_vlan_strip(): Failed to execute command of VIRTCHNL_OP_DISABLE_VLAN_STRIPPING

In that case, frames received with VLAN headers are still stripped at the PF,
and only the mbuf's TCI field records the missing VLAN details when the frame
is handed up to the DPDK driver.

With i40e debug enabled, it's clear to see the difference being reported in 
i40e_rxd_to_vlan_tci:

Example using VLAN on i40e PCI (vlan works):
  PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 0, vlan_tci_outer: 0
  Port 0 pkt-len=102 nb-segs=1
ETH:  src=00:10:E0:8D:A7:52 dst=00:10:E0:8A:86:8A [vlan id=8] type=0x0800
IPV4: src=8.8.8.102 dst=8.8.8.3 proto=1 (ICMP)
ICMP: echo request seq id=1

Example using VLAN on i40evf SR-IOV (vlan fails):
  PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 8, vlan_tci_outer: 0
  Port 0 pkt-len=60 nb-segs=1
ETH:  src=00:10:E0:8D:A7:52 dst=FF:FF:FF:FF:FF:FF type=0x0806
ARP:  hrd=1 proto=0x0800 hln=6 pln=4 op=1 (ARP Request)
  sha=00:10:E0:8D:A7:52 sip=8.8.8.102
  tha=00:00:00:00:00:00 tip=8.8.8.3

As the application requested that tagging not be stripped, and the hardware
driver was not able to disable stripping, in my opinion DPDK should emulate the
requested behavior by re-adding the missing VLAN header in the RX path before
it passes the mbuf to the application.  I'm guessing that the native Linux
driver is smart enough to do something like this automatically in software, but
DPDK does not...

Adding a call to rte_vlan_insert() to reinstate the VLAN header using the data 
from TCI is sufficient to avoid the problem in a quick test.

--- drivers/net/i40e/i40e_rxtx.c.orig  2016-11-30 04:28:48.0 -0500
+++ drivers/net/i40e/i40e_rxtx.c   2017-10-10 15:07:10.851398087 -0400
@@ -93,6 +93,8 @@
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
 			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+		/* VLAN got stripped by the PF; re-inject it from the TCI */
+		rte_vlan_insert(&mb);
 	} else {
 		mb->vlan_tci = 0;
 	}

For a proper solution, this would need to be made selective based on whether 
the port config originally asked for VLANs to be stripped or not.  But I'm not 
sure that rte_vlan_insert() has enough context to be able to access that data, 
as it's stored in the driver/hw struct not the rx buffer.

Obviously the same would be required in the vector rxtx and similar data paths 
for other drivers, if affected by the same shortcoming.  I don't have other 
combinations available that I could test with, and I guess VMware i40evf SR-IOV 
VLAN isn't part of the DPDK release test suite either.

cc: d...@dpdk.org for comment as this is getting beyond my level of knowledge 
as a DPDK user

thanks,
Iain


Re: [dpdk-users] 8K flow limit

2017-10-11 Thread Wiles, Keith


> On Oct 11, 2017, at 3:57 AM, Jacobus Gericke wrote:
> 
> Hi,
> 
> We are running Pktgen on two separate VMs.  We use a range of src MAC
> addresses to create a certain number of flows. We have noticed that there
> is a ~8K flow limit per VF bonded to the VM. It is basically a limit of 8K
> src MACs that create a flow for each port/VF. Is this a Pktgen limit? Is
> there a way to achieve more flows per VF bonded to the VM?

In Pktgen this is a limit on the amount of memory allocated per port for the
range command. You can increase the number of flows by increasing the define in
app/pktgen-constants.h, which raises the number of packets per port. Increasing
these numbers will consume memory very quickly, so be careful.

> 
> -- 
> Kind regards
> Jaco Gericke

Regards,
Keith



[dpdk-users] 8K flow limit

2017-10-11 Thread Jacobus Gericke
Hi,

We are running Pktgen on two separate VMs.  We use a range of src MAC
addresses to create a certain number of flows. We have noticed that there
is a ~8K flow limit per VF bonded to the VM. It is basically a limit of 8K
src MACs that create a flow for each port/VF. Is this a Pktgen limit? Is
there a way to achieve more flows per VF bonded to the VM?

-- 
Kind regards
Jaco Gericke