Re: [vpp-dev] VPP - DPDK - No ARP learning on VPP and no ARP reply sent.

2020-05-19 Thread Balaji Venkatraman via lists.fd.io
Hi Laurent,

Trying to understand your setup:

Do you have :
  100.100.101.x/24
TOR[.1] <>[.2] VLAN

So, is the interface on the TOR end also on a VLAN (with the same ID)?


--
Regards,
Balaji.


From: Laurent Dumont 
Date: Tuesday, May 19, 2020 at 4:58 PM
To: "Balaji Venkatraman (balajiv)" 
Cc: Mrityunjay Kumar , vpp-dev 
Subject: Re: [vpp-dev] VPP - DPDK - No ARP learning on VPP and no ARP reply 
sent.

Hey everyone,

Thank you for all the comments. Just trying to work my way through it! :)

Just as a sanity check, here is what it looks like on a fresh VPP (without any 
config).

# Configure the VPP client with the proper vlan and IP address.
set interface state VirtualFunctionEthernet0/5/0 up
create sub-interfaces VirtualFunctionEthernet0/5/0 101
set interface state VirtualFunctionEthernet0/5/0.101 up
set interface ip address VirtualFunctionEthernet0/5/0.101 100.100.101.2/24

I then have the following :
vpp# show interface address
VirtualFunctionEthernet0/5/0 (up):
VirtualFunctionEthernet0/5/0.101 (up):
  L3 100.100.101.2/24
local0 (dn):

TOR IP : 100.100.101.1 - VLAN 101
vpp# ping 100.100.101.1

Statistics: 5 sent, 0 received, 100% packet loss

vpp# show hardware-interfaces
  Name                           Idx   Link  Hardware
VirtualFunctionEthernet0/5/0   1 up   VirtualFunctionEthernet0/5/0
  Link speed: 10 Gbps
  Ethernet address fa:16:3e:92:30:f1
  Intel X710/XL710 Family VF
carrier up full duplex mtu 9206  promisc
flags: admin-up promisc pmd maybe-multiseg subif tx-offload 
intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 16), desc 1024 (min 64 max 4096 align 32)
tx: queues 1 (max 16), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:154c subsystem 103c: address :00:05.00 numa 0
max rx packet len: 9728
promiscuous: unicast on all-multicast on
vlan offload: strip off filter on qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:none
tx burst function: i40e_xmit_pkts
rx burst function: i40e_recv_scattered_pkts_vec_avx2

tx frames ok   5
tx bytes ok  230
rx frames ok   5
rx bytes ok  320
extended stats:
  rx good packets  5
  tx good packets  5
  rx good bytes  320
  tx good bytes  230
  rx bytes   320
  rx unicast packets   5
  tx bytes   230
  tx broadcast packets 5
local0                             0    down  local0
  Link speed: unknown
  local
vpp# show interface address
VirtualFunctionEthernet0/5/0 (up):
VirtualFunctionEthernet0/5/0.101 (up):
  L3 100.100.101.2/24
local0 (dn):

I can see that I have 5 packets IN (5 ARP IN and 5 replies I assume)
No output from :
vpp# show ip neighbor
vpp# show ip neighbors
vpp#

Would that basic L3 configuration be enough to ping across the TOR to VPP?
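
(For reference, one quick way to confirm whether the TOR's ARP requests and
replies actually reach VPP is a packet trace on dpdk-input; a minimal sketch,
assuming the VF is polled by the dpdk-input node:)

vpp# clear trace
vpp# trace add dpdk-input 10
vpp# ping 100.100.101.1
vpp# show trace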

Thanks!



On Fri, May 15, 2020 at 10:47 AM Balaji Venkatraman (balajiv) 
<bala...@cisco.com> wrote:
As Neale replied earlier, adding L3 addr to the interface should implicitly 
enable arp on it.
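
(As a sanity check, the features enabled on the sub-interface can be listed;
this is a hedged sketch, assuming a VPP release that supports per-interface
feature listing:)

vpp# show interface features VirtualFunctionEthernet0/5/0.101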

Thanks!

--
Regards,
Balaji.


From: Mrityunjay Kumar <kumarn...@gmail.com>
Date: Friday, May 15, 2020 at 7:19 AM
To: "Balaji Venkatraman (balajiv)" <bala...@cisco.com>
Cc: Laurent Dumont <laurentfdum...@gmail.com>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] VPP - DPDK - No ARP learning on VPP and no ARP reply 
sent.

Hi Balaji,
It's working for me. I was just trying to help "Laurent Dumont".

--
@Laurent Dumont, can you try it? Even for this ARP
issue, does it work for you if you add the neighbor IP?

 set ip neighbor
 set ip neighbor [del] <intfc> <ip-address> <mac-address> [static] [no-fib-entry]
   [count <count>] [fib-id <fib-id>] [proxy <lo-addr> - <hi-addr>]
vpp#

Please try it after adding the route and the ARP (neighbor) entry.
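
(A minimal sketch of what that could look like for the setup above; the MAC
address below is only a placeholder for the TOR's real MAC:)

vpp# set ip neighbor VirtualFunctionEthernet0/5/0.101 100.100.101.1 aa:bb:cc:dd:ee:ff static
vpp# ip route add 0.0.0.0/0 via 100.100.101.1 VirtualFunctionEthernet0/5/0.101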

You can also use this to set the VLAN on the VF (then you are not required to
create a vlan interface in vpp):
# ip link set eth0 vf 18 vlan 101

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-19 Thread Florin Coras
Hi Raj, 

Try maybe:

make wipe-release
make build-release
…

Also, try removing all packages before reinstalling them to make sure that 
you’re including/linking against the right vppcom lib. 
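
(A rough sketch of that flow, assuming an RPM-based install; the exact package
names are whatever "rpm -qa | grep -i vpp" reports on your system, and the RPM
output directory may differ:)

systemctl stop vpp
rpm -qa | grep -i vpp | xargs rpm -e
make wipe-release
make build-release
make pkg-rpm
rpm -ivh build-root/*.rpm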

Regards,
Florin

> On May 19, 2020, at 4:59 PM, Raj Kumar  wrote:
> 
> Hi Florin,
> I am facing a weird problem. After making the VPP code changes, I 
> recompiled/re-installed VPP by using the following commands- 
> make rebuild-release 
> make pkg-rpm
> rpm -ivh /opt/vpp/build-root/*.rpm
> 
> But it looks like VPP is still using the old code. I also stopped the VPP
> service before compiling and installing the new code.
> I also recompiled the application against the new vppcom library.
> But the line number in the following trace indicates that VPP is using the old
> code:
> vppcom_session_create:1279: vcl<28267:1>: created session 1
> 
> Maybe because of this issue, VPP is still crashing with UDPC.
> 
> Please let me know if there is any other way to compile VPP with the local
> code changes.
> 
> thanks,
> -Raj
> .
> 
> On Tue, May 19, 2020 at 12:31 AM Florin Coras wrote:
> Hi Raj, 
> 
> By the looks of it, something’s not right because in the logs VCL still 
> reports it’s binding using UDPC. You probably cherry-picked [1] but it needs 
> [2] as well. More inline.
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/27111 
> 
> [2] https://gerrit.fd.io/r/c/vpp/+/27106 
> 
> 
>> On May 18, 2020, at 8:42 PM, Raj Kumar wrote:
>> 
>> 
>> Hi Florin,
>> I tried the patch [1], but VPP is still crashing when the application is using
>> listen with UDPC.
>> 
>> [1] https://gerrit.fd.io/r/c/vpp/+/27111 
>>  
>> 
>> 
>> 
>> On a different topic, I have some questions. Could you please provide your
>> inputs:
>> 
>> 1) I am using a Mellanox NIC. Any idea how I can enable Tx checksum offload
>> (for UDP)? Also, how do I change the Tx burst mode and Rx burst mode to the
>> vector variants?
>> 
>> HundredGigabitEthernet12/0/1   3 up   HundredGigabitEthernet12/0/1
>>   Link speed: 100 Gbps
>>   Ethernet address b8:83:03:9e:98:81
>>   Mellanox ConnectX-4 Family
>> carrier up full duplex mtu 9206
>> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
>> rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
>> tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
>> pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.01 numa 0
>> switch info: name :12:00.1 domain id 1 port id 65535
>> max rx packet len: 65536
>> promiscuous: unicast off all-multicast on
>> vlan offload: strip off filter off qinq off
>> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
>>jumbo-frame scatter timestamp keep-crc rss-hash
>> rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
>> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso 
>> geneve-tnl-tso
>>multi-segs udp-tnl-tso ip-tnl-tso
>> tx offload active: multi-segs
>> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 
>> ipv6-tcp-ex
>>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>>ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only 
>> l3-src-only
>> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 
>> ipv6-tcp-ex
>>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>>ipv6-ex ipv6
>> tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
>> rx burst mode: Scalar
> 
> FC: Not sure why (might not be supported) but the offloads are not enabled in 
> dpdk_lib_init for VNET_DPDK_PMD_MLX* nics. You could try replicating what’s 
> done for the Intel cards and see if that works. Alternatively, you might want 
> to try the rdma driver, although I don’t know if it supports csum offloading 
> (cc Ben and Damjan). 
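
(For the rdma route, the native driver is created from the CLI roughly as below;
this is a sketch, and the Linux interface name is only a placeholder:)

vpp# create interface rdma host-if enp18s0f1 name rdma-0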
> 
>>
>> 2) My application needs to send routing header (SRv6) and Destination option 
>> extension header. On RedHat 8.1 , I was using socket option to add routing 
>> and destination option extension header.
>> With VPP , I can use SRv6 policy to let VPP add the routing header. But, I 
>> am not sure if there is any option in VPP or HostStack to add the 
>> destination option header.
> 
> FC: We don’t currently support this. 
> 
> Regards,
> Florin
> 
>> 
>> 
>> Coming back to the original problem, here are the traces- 
>> 
>> VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
>> VCL<39673>: using default heapsize 268435456 (0x10000000)
>> VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456 (0x10000000)
>> VCL<39673>: using default configuration.
>> vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to 

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-19 Thread Raj Kumar
Hi Florin,
I am facing a weird problem. After making the VPP code changes, I
recompiled/re-installed VPP by using the following commands-
make rebuild-release
make pkg-rpm
rpm -ivh /opt/vpp/build-root/*.rpm

But it looks like VPP is still using the old code. I also stopped the VPP
service before compiling and installing the new code.
I also recompiled the application against the new vppcom library.
But the line number in the following trace indicates that VPP is using the old
code:
vppcom_session_create:1279: vcl<28267:1>: created session 1

Maybe because of this issue, VPP is still crashing with UDPC.

Please let me know if there is any other way to compile VPP with the local
code changes.

thanks,
-Raj
.

On Tue, May 19, 2020 at 12:31 AM Florin Coras 
wrote:

> Hi Raj,
>
> By the looks of it, something’s not right because in the logs VCL still
> reports it’s binding using UDPC. You probably cherry-picked [1] but it
> needs [2] as well. More inline.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>
> On May 18, 2020, at 8:42 PM, Raj Kumar  wrote:
>
>
> Hi Florin,
> I tried the patch [1], but VPP is still crashing when the application is
> using listen with UDPC.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>
>
>
> On a different topic, I have some questions. Could you please provide
> your inputs:
>
> 1) I am using a Mellanox NIC. Any idea how I can enable Tx checksum offload
> (for UDP)? Also, how do I change the Tx burst mode and Rx burst mode to the
> vector variants?
>
> HundredGigabitEthernet12/0/1   3 up   HundredGigabitEthernet12/0/1
>   Link speed: 100 Gbps
>   Ethernet address b8:83:03:9e:98:81
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
> tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.01 numa 0
> switch info: name :12:00.1 domain id 1 port id 65535
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> vlan-filter
>jumbo-frame scatter timestamp keep-crc rss-hash
> rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso
> geneve-tnl-tso
>multi-segs udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only
> l3-src-only
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
>
> tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
> rx burst mode: Scalar
>
>
> FC: Not sure why (might not be supported) but the offloads are not enabled
> in dpdk_lib_init for VNET_DPDK_PMD_MLX* nics. You could try replicating
> what’s done for the Intel cards and see if that works. Alternatively, you
> might want to try the rdma driver, although I don’t know if it supports
> csum offloading (cc Ben and Damjan).
>
>
> 2) My application needs to send routing header (SRv6) and Destination
> option extension header. On RedHat 8.1 , I was using socket option to add
> routing and destination option extension header.
> With VPP , I can use SRv6 policy to let VPP add the routing header. But, I
> am not sure if there is any option in VPP or HostStack to add the
> destination option header.
>
>
> FC: We don’t currently support this.
>
> Regards,
> Florin
>
>
>
> Coming back to the original problem, here are the traces-
>
> VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<39673>: using default heapsize 268435456 (0x10000000)
> VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456
> (0x10000000)
> VCL<39673>: using default configuration.
> vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to VPP
> api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<39673:0>: app (udp6_rx) is connected to VPP!
> vppcom_app_create:1200: vcl<39673:0>: sending session enable
> vppcom_app_create:1208: vcl<39673:0>: sending app attach
> vppcom_app_create:1217: vcl<39673:0>: app_name 'udp6_rx', my_client_index
> 0 (0x0)
> vppcom_connect_to_vpp:487: vcl<39673:1>: app (udp6_rx-wrk-1) connecting to
> VPP api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<39673:1>: app (udp6_rx-wrk-1) is connected
> to VPP!
> vcl_worker_register_with_vpp:262: vcl<39673:1>: added worker 1
> 

Re: [vpp-dev] VPP - DPDK - No ARP learning on VPP and no ARP reply sent.

2020-05-19 Thread Laurent Dumont
Hey everyone,

Thank you for all the comments. Just trying to work my way through it! :)

Just as a sanity check, here is what it looks like on a fresh VPP (without
any config).

# Configure the VPP client with the proper vlan and IP address.
set interface state VirtualFunctionEthernet0/5/0 up
create sub-interfaces VirtualFunctionEthernet0/5/0 101
set interface state VirtualFunctionEthernet0/5/0.101 up
set interface ip address VirtualFunctionEthernet0/5/0.101 100.100.101.2/24

I then have the following :
vpp# show interface address
VirtualFunctionEthernet0/5/0 (up):
VirtualFunctionEthernet0/5/0.101 (up):
  L3 100.100.101.2/24
local0 (dn):

TOR IP : 100.100.101.1 - VLAN 101
vpp# ping 100.100.101.1

Statistics: 5 sent, 0 received, 100% packet loss

vpp# show hardware-interfaces
  Name                           Idx   Link  Hardware
VirtualFunctionEthernet0/5/0   1 up   VirtualFunctionEthernet0/5/0
  Link speed: 10 Gbps
  Ethernet address fa:16:3e:92:30:f1
  Intel X710/XL710 Family VF
carrier up full duplex mtu 9206  promisc
flags: admin-up promisc pmd maybe-multiseg subif tx-offload
intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 16), desc 1024 (min 64 max 4096 align 32)
tx: queues 1 (max 16), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:154c subsystem 103c: address :00:05.00 numa 0
max rx packet len: 9728
promiscuous: unicast on all-multicast on
vlan offload: strip off filter on qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:none
tx burst function: i40e_xmit_pkts
rx burst function: i40e_recv_scattered_pkts_vec_avx2

tx frames ok   5
tx bytes ok  230
rx frames ok   5
rx bytes ok  320
extended stats:
  rx good packets  5
  tx good packets  5
  rx good bytes  320
  tx good bytes  230
  rx bytes   320
  rx unicast packets   5
  tx bytes   230
  tx broadcast packets 5
local0                             0    down  local0
  Link speed: unknown
  local
vpp# show interface address
VirtualFunctionEthernet0/5/0 (up):
VirtualFunctionEthernet0/5/0.101 (up):
  L3 100.100.101.2/24
local0 (dn):

I can see that I have 5 packets IN (5 ARP IN and 5 replies I assume)
No output from :
vpp# show ip neighbor
vpp# show ip neighbors
vpp#

Would that basic L3 configuration be enough to ping across the TOR to VPP?

Thanks!



On Fri, May 15, 2020 at 10:47 AM Balaji Venkatraman (balajiv) <
bala...@cisco.com> wrote:

> As Neale replied earlier, adding L3 addr to the interface should
> implicitly enable arp on it.
>
>
>
> Thanks!
>
>
>
> --
>
> Regards,
>
> Balaji.
>
>
>
>
>
> *From: *Mrityunjay Kumar 
> *Date: *Friday, May 15, 2020 at 7:19 AM
> *To: *"Balaji Venkatraman (balajiv)" 
> *Cc: *Laurent Dumont , vpp-dev <
> vpp-dev@lists.fd.io>
> *Subject: *Re: [vpp-dev] VPP - DPDK - No ARP learning on VPP and no ARP
> reply sent.
>
>
>
> Hi Balaji,
>
> It's working for me. I was just trying to help "Laurent Dumont".
>
>
>
> --
>
> @Laurent Dumont, can you try it? Even for this ARP
> issue, does it work for you if you add the neighbor IP?
>
>
>
>  set ip neighbor
>  set ip neighbor [del] <intfc> <ip-address> <mac-address> [static] [no-fib-entry]
>    [count <count>] [fib-id <fib-id>] [proxy <lo-addr> - <hi-addr>]
> vpp#
>
>
>
> Please try it after adding the route and the ARP (neighbor) entry.
>
>
>
> You can also use this to set the VLAN on the VF:
>
> # ip link set eth0 vf 18 vlan 101  --> you are not required to create a vlan
> interface in vpp.
>
>
>
>
>
>
>
> *Regards*,
> Mrityunjay Kumar.
> Mobile: +91 - 9731528504
>
>
>
>
>
> On Fri, May 15, 2020 at 6:59 PM Balaji Venkatraman (balajiv) <
> bala...@cisco.com> wrote:
>
> Hi Mrityunjay,
>
>
>
> Could you try adding an ip route and recheck.
>
>
>
> I think ARP is enabled once ip routing is enabled.
>
>
>
> Thanks
>
>
>
> --
>
> Regards,
>
> Balaji.
>
>
>
>
>
> *From: * on behalf of Laurent Dumont <
> laurentfdum...@gmail.com>
> *Date: *Friday, May 15, 2020 at 4:57 AM
> *To: *Mrityunjay Kumar 
> *Cc: 

Re: [vpp-dev] DPDK packets received by NIC but not delivered to engine

2020-05-19 Thread Mohammed Alshohayeb
Thanks Ben

Yes, indeed it was promiscuous mode; I thought it was the default for some
reason.

Though the small percentage of packets being passed on threw me off; I don't
think my traffic generator produces any broadcasts, so that's a mystery for
another day.

By the way, do you know how VPP determines the TX queue per worker thread? Is it
a 1-to-1 mapping? What if the NIC doesn't support as many TX queues as there are
worker threads?

Best,
Mohammed


[vpp-dev] IPFIX Export not sending Data Sets #ipfix-export #vpp

2020-05-19 Thread mauricio.solisjr via lists.fd.io
Hi,

I have successfully set up IPFIX export (to some extent), and I can see that the
template set and the records I have configured in VPP are in fact being sent and
received by tap0. I'm using scapy (with a NetflowSession session) and tcpdump to
verify this. Everything about the template record looks fine, but I'm not
receiving any data records with it, only the template records. I can see the
telemetry information in the decap VPP, along with data from other nodes, via
"show trace".

Here is what I did to get IPFIX export working. Please let me know if I've
missed anything.

Decap Node:
IOAM configs
set ioam-trace profile trace-type 0x1f trace-elts 3 trace-tsp 2 node-id 0xf 
app-data 0xD000
classify table miss-next ip6-node ip6-lookup mask l3 ip6 dst
classify session acl-hit-next ip6-node ip6-lookup table-index 0 match l3 ip6 
dst db04::2 opaque-index 100 ioam-decap test1
set int input acl intfc host- ci0 ip6-table 0
set int input acl intfc host- ci1 ip6-table 0

Tap interface for collector
create tap id 0
set interface ip address tap0 10.10.1.1/24
set interface state tap0 up

IPFIX configs
flowprobe params record l2 l3 l4 active 1 passive 1
flowprobe feature add-del tap0 ip4
set ipfix exporter collector 10.10.1.2 src 10.10.1.1 template-interval 1 
path-mtu 1450
trace add af-packet-input 255

Notes:
I've set the tap0 MTU to 1450 on the host side.
I have also noted that "set ioam export ipfix collector 10.10.1.2 src
10.10.1.1" does not seem to do anything. Is this correct?
The command that 'allows' IPFIX export to work is the "set ipfix exporter
collector " one.
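
(One quick way to confirm on the host side whether any data sets, and not just
template sets, ever leave VPP is to watch the tap with tcpdump; this assumes the
exporter is using the default IPFIX port 4739:)

# tcpdump -ni tap0 -vv udp port 4739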


Re: [vpp-dev] DPDK packets received by NIC but not delivered to engine

2020-05-19 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Mohammed,

Are you sure packets are sent to the correct destination mac address? Looking 
at DPDK stats, it reports a lot of 'unicast packets' but only ~1800 'good 
packets' which in turn are all delivered to VPP.
The fact that xconnect works also hints toward a MAC destination issue, as
xconnect puts the interfaces in promiscuous mode.
You can check by setting the interface manually in promiscuous mode and looking
at a packet trace:
vpp# set interface promiscuous on HundredGigabitEthernet86/0/0
vpp# clear trace
vpp# trace add dpdk-input 10
vpp# show trace

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Mohammed
> Alshohayeb
> Sent: mardi 19 mai 2020 03:32
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] DPDK packets received by NIC but not delivered to
> engine
> 
> I am having an issue where I see packets in the #show hardware-interfaces
> output but only a very small fraction is delivered to the vlib engine.
> 
> Here are the things I've tried
> 
> 
> * Using different packet generators (pktgen/trex/tcpreplay)
> * Using variety of physical servers
> * All versions running from 19.01 to 20.01
> * Tried multiple NICs (Mellanox ConnectX5) and Chelsio T6 (cxgb)
> * Made sure checksums are ok since some NICs drop bad frames in the
> pmd
> 
> 
> The vpp.conf is straight forward
> 
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   cli-listen /run/vpp/cli.sock
>   interactive
> }
> dpdk {
> dev :86:00.0
> dev :86:00.1
> }
> 
> Notes
> - When connecting the two interfaces via xconnect things work well
> - Tried using the macswap plugin and enabling it but exhibited the same
> very slow behaviour
> 
> Here is the show interface counters after pushing ~100 packets
> 
> 
> 
> vpp# sh hardware-interfaces
> 
>   Name                           Idx   Link  Hardware
> 
> HundredGigabitEthernet86/0/0   1 up   HundredGigabitEthernet86/0/0
> 
>   Link speed: 100 Gbps
> 
>   Ethernet address ec:0d:9a:cd:94:8a
> 
>   Mellanox ConnectX-4 Family
> 
> carrier up full duplex mtu 9206
> 
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> 
> rx: queues 1 (max 65535), desc 1024 (min 0 max 65535 align 1)
> 
> tx: queues 1 (max 65535), desc 1024 (min 0 max 65535 align 1)
> 
> pci: device 15b3:1017 subsystem 15b3:0007 address :86:00.00 numa 1
> 
> module: unknown
> 
> max rx packet len: 65536
> 
> promiscuous: unicast off all-multicast on
> 
> vlan offload: strip off filter off qinq off
> 
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-
> filter
> 
>jumbo-frame scatter timestamp keep-crc
> 
> rx offload active: ipv4-cksum jumbo-frame scatter
> 
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
> 
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-
> segs
> 
>udp-tnl-tso ip-tnl-tso
> 
> tx offload active: multi-segs
> 
> rss avail: ipv4 ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6
> ipv6-frag
> 
>ipv6-tcp ipv6-udp ipv6-other ipv6-tcp-ex ipv6-udp-
> ex
> 
>ipv6-ex ipv6-tcp-ex ipv6-udp-ex
> 
> rss active:none
> 
> tx burst function: mlx5_tx_burst_vec
> 
> rx burst function: mlx5_rx_burst
> 
> 
> 
> rx frames ok1847
> 
> rx bytes ok   465972
> 
> extended stats:
> 
>   rx good packets   1847
> 
>   rx good bytes   465972
> 
>   rx q0packets  1847
> 
>   rx q0bytes  465972
> 
>   rx port unicast packets 16
> 
>   rx port unicast bytes   1034369007
> 
>   rx port multicast packets 1838
> 
>   rx port multicast bytes 462894
> 
>   rx port broadcast packets9
> 
>   rx port broadcast bytes   3078
> 
>   rx packets phy 142
> 
>   rx bytes phy1038835147
> 
> 
> 
> 
> 
> 
> 
> vpp# sh int
> 
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS)
> Counter  Count
> 
> HundredGigabitEthernet86/0/0  1  up  9000/0/0/0 rx
> packets  1856
> 
> rx
> bytes  467272
> 
> drops
> 1856
> 
> ip4
> 1816
> 
> ip6
> 26
> 
> local00 down  0/0/0/0
> 
> vpp#
> 
> You can see the packets