You will need to ask the Intel folks, but generally it makes sense that if the
NIC needs to parse the VLAN tag and distribute packets to different queues,
performance will go down.
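One way to see where those drops are being counted, as a minimal sketch (exact
counter names vary between i40e driver versions, and the PF name enp23s0f1 is
taken from your report below):

# dump the PF's hardware counters and keep anything that looks like a drop/miss/error counter
ethtool -S enp23s0f1 | grep -Ei 'drop|miss|err' | grep -v ': 0$'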


> On 4 Sep 2019, at 14:47, Miroslav Kováč <[email protected]> wrote:
> 
> Isn't SR-IOV supposed to be as fast as the physical function? And besides, why 
> would we receive a different number of processed packets with 7 VFs, and then 
> see a drop of 16 million packets when using 8 VFs? And the same result with 9 
> or 10... VFs as with 8 VFs.
> From: Damjan Marion via Lists.Fd.Io <[email protected]>
> Sent: Wednesday, 4 September 2019 12:46:55
> To: Miroslav Kováč
> Cc: [email protected]
> Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss
>  
> 
> Isn't that just a hardware limit of the card?
> 
> 
>> On 4 Sep 2019, at 12:45, Miroslav Kováč <[email protected]> wrote:
>> 
>> Yes, we have tried that as well; with AVF we received similar results.
>> From: Damjan Marion <[email protected]>
>> Sent: Wednesday, 4 September 2019 12:44:33
>> To: Miroslav Kováč
>> Cc: [email protected]
>> Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss
>>  
>> 
>> Have you tried to use native AVF driver instead?
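>> A minimal sketch of that, assuming the first VF sits at PCI address 
>> 0000:17:0a.0 (inferred from the VirtualFunctionEthernet17/a/0 name below), is 
>> already bound to vfio-pci, and is left out of the dpdk plugin's device list:
>> 
>> vppctl create interface avf 0000:17:0a.0
>> vppctl show interface    # the new avf interface should appear here; bring it up with 'set interface state ... up'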
>> 
>> 
>>> On 4 Sep 2019, at 12:42, Miroslav Kováč <[email protected]> wrote:
>>> 
>>> Hello,
>>> 
>>> We are trying a setup with an Intel 25 GbE card (XXV710) and SR-IOV. We need 
>>> SR-IOV to sort packets between the VFs based on VLAN. We are using TRex on 
>>> one machine to generate packets and multiple VPP instances (each in a Docker 
>>> container, each using one VF) on another one. The TRex machine contains the 
>>> exact same hardware. 
>>> 
>>> Each VF is assigned one VLAN, with spoof checking off, trust on, and a 
>>> specific MAC address. For example:
>>> 
>>> vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
>>> trust on
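>>> (For reference, a minimal sketch of the iproute2 commands that produce this 
>>> kind of VF configuration, assuming the PF is enp23s0f1 as in the stats below; 
>>> repeat per VF with its own MAC and VLAN:)
>>> 
>>> # give VF 0 a fixed MAC and VLAN, disable spoof checking and mark it trusted
>>> ip link set dev enp23s0f1 vf 0 mac ba:dc:0f:fe:ed:00 vlan 1537 spoofchk off trust on
>>> # verify the per-VF settings
>>> ip link show dev enp23s0f1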
>>> 
>>> 
>>> We are generating packets with the VF destination MACs and the corresponding 
>>> VLAN. When sending packets to 3 VFs, TRex shows 35 million tx-packets, and 
>>> the DPDK stats on the TRex machine show that 35 million were in fact sent out:
>>> 
>>> ##### DPDK Statistics port0 #####
>>> {
>>>     "tx_good_bytes": 2142835740,
>>>     "tx_good_packets": 35713929,
>>>     "tx_size_64_packets": 35713929,
>>>     "tx_unicast_packets": 35713929
>>> }
>>> 
>>> rate= '96%'; pktSize= 64; frameLoss%=51.31%; bytesReceived/s= 1112966528.00; 
>>> totalReceived= 17390102; totalSent= 35713929; frameLoss= 18323827; 
>>> bytesReceived= 1112966528; targetDuration=1.0
>>> 
>>> 
>>> However, VPP shows only 33 million packets (rx packets plus rx-miss):
>>> 
>>> VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0     
>>> rx packets               5718196
>>> rx bytes               343091760
>>> rx-miss                  5572089     
>>> 
>>> VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0     
>>> rx packets               5831396
>>> rx bytes               349883760
>>> rx-miss                  5459089
>>> 
>>> VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0     
>>> rx packets               5840512
>>> rx bytes               350430720
>>> rx-miss                  5449466
>>> 
>>> The sum of rx packets and rx-miss across the three interfaces is 33,870,748 
>>> (17,390,104 + 16,480,644), versus 35,713,929 sent, so about 2 million packets 
>>> are missing.
>>> 
>>> 
>>> Even when I check the VF stats, I see only 33 million packets arriving (out 
>>> of which 9.9 million are rx-missed):
>>> 
>>> root@protonet:/home/protonet# for f in $(ls /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat $f)"; done | grep -v ' 0$'
>>> 
>>> /sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
>>> /sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
>>> /sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
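>>> 
>>> (The same sysfs tree can be dumped in full to check whether the missing 
>>> ~2 million packets show up in any other per-VF counter; the exact set of 
>>> counter files depends on the i40e driver version, so this just prints 
>>> whatever is there:)
>>> 
>>> # print every non-zero per-VF counter the driver exposes
>>> for f in /sys/class/net/enp23s0f1/device/sriov/*/stats/*; do
>>>     echo "$f: $(cat $f)"
>>> done | grep -v ' 0$'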
>>> 
>>> 
>>> When increasing the number of VFs, the number of rx-packets in VPP actually 
>>> decreases. With up to 6 or 7 VFs I still receive somewhere around 28-33 
>>> million packets, but when I use 8 VFs it suddenly drops to 16 million packets 
>>> (no rx-miss any more). The same goes for trunk mode:
>>> 
>>> VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0     
>>> rx packets               1959110
>>> rx bytes               117546600
>>> 
>>> VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0     
>>> rx packets               1959181
>>> rx bytes               117550860
>>> 
>>> VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0     
>>> rx packets               1956242
>>> rx bytes               117374520
>>> .
>>> .
>>> .
>>> Approximately the same number of packets for each VPP instance, which is 
>>> about 2 million packets * 8 = 16 million packets out of the 35 million sent. 
>>> Almost 20 million are gone.
>>> 
>>> 
>>> We are using the vfio-pci driver.
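>>> (For reference, one common way to bind a VF to vfio-pci is DPDK's devbind 
>>> helper; the PCI address here is inferred from the VirtualFunctionEthernet17/a/0 
>>> name above and may differ on other systems:)
>>> 
>>> modprobe vfio-pci
>>> dpdk-devbind.py --bind=vfio-pci 0000:17:0a.0
>>> dpdk-devbind.py --status    # confirm which driver each device is bound to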
>>> 
>>> The strange thing is that when I use only the PF, with no SR-IOV VFs enabled, 
>>> and try the same VPP setup, I can see all 35 million packets come across. 
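>>> (For comparison runs between PF-only and SR-IOV, the VFs can be torn down and 
>>> recreated through the standard sriov_numvfs sysfs knob; a minimal sketch, 
>>> assuming the PF is enp23s0f1:)
>>> 
>>> # drop back to PF-only, then recreate 8 VFs (the count must pass through 0 before changing)
>>> echo 0 > /sys/class/net/enp23s0f1/device/sriov_numvfs
>>> echo 8 > /sys/class/net/enp23s0f1/device/sriov_numvfs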
>>> 
>>> We have also tested this with an Intel X710 10 GbE card and received similar 
>>> results.
>>> 
>>> Regards,
>>> Miroslav Kovac