Re: [vpp-dev] VRRP issue when using interface in a table

2021-07-01 Thread Mechthild Buescher via lists.fd.io
Hi Neale,

I did some deeper investigation of the VRRP issue. What I observed is as 
follows:

On node1, the VRRP config is:
set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.52/25
vrrp vr add Ext-0 vr_id 61 priority 200 no_preempt accept_mode 192.168.61.50

On node2, the VRRP config is:
set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.51/25
vrrp vr add Ext-0 vr_id 61 priority 100 no_preempt accept_mode 192.168.61.50

When I start vpp and vrrp (vppctl vrrp proto start Ext-0 vr_id 61) on both 
nodes, everything looks fine:
The node1 is master and has VIP:
vppctl show int addr
Ext-0 (up):
  L3 192.168.61.52/25
  L3 192.168.61.50/25

The node2 is backup:
vppctl show int addr
Ext-0 (up):
  L3 192.168.61.51/25

I can also swap the roles (master/backup) of the nodes by stopping and starting 
VRRP on node1:
vppctl vrrp proto stop Ext-0 vr_id 61
vppctl vrrp proto start Ext-0 vr_id 61

But if node1 (master) goes down because the interface is flapping, simulated 
with:
vppctl set int state Ext-0 down; vppctl set int state Ext-0 up

then node2 becomes master as expected, but node1 changes from state 
'Interface Down' to 'Backup' and then to 'Master'.
Now both nodes are master and both have the VIP.
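For reference, a quick way to confirm which node currently holds the Master role is the VRRP show command (a sketch; the exact output format varies by release):
```
vppctl show vrrp vr
vppctl show int addr
```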

Is this another bug in VRRP?

Your help is really appreciated.

Thanks, BR/Mechthild


From: Mechthild Buescher
Sent: Wednesday, 30 June 2021 17:40
To: vpp-dev@lists.fd.io
Subject: RE: VRRP issue when using interface in a table

Hi Neale,

Thanks for your reply. The bugfix partly solved the issue - VRRP goes into 
master/backup and stays stable for a while. Unfortunately, it changes back to 
master/master after some time (15 minutes to 1 hour). We are currently trying to 
get more details and will come back to you.

But thanks for your support so far,

BR/Mechthild

From: Neale Ranns <ne...@graphiant.com>
Sent: Thursday, 24 June 2021 12:33
To: Mechthild Buescher <mechthild.buesc...@ericsson.com>; vpp-dev@lists.fd.io
Subject: Re: VRRP issue when using interface in a table

Hi Mechthild,

You'll need to include:
  
https://gerrit.fd.io/r/c/vpp/+/32298

/neale

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Mechthild Buescher via lists.fd.io <mechthild.buescher=ericsson@lists.fd.io>
Date: Thursday, 24 June 2021 at 10:49
To: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>
Subject: [vpp-dev] VRRP issue when using interface in a table
Hi all,

we are using VPP on two nodes where we would like to run VRRP. This works fine 
if the VRRP VR interface is in FIB 0, but if we put the interface into FIB table 
1 instead, VRRP no longer works correctly. Can you please help?

Our setup:

* 2 nodes with VPP on each node and one DPDK interface (we reduced the 
config to isolate the issue) connected to each VPP

* a switch between the nodes which just forwards the traffic, so that 
it's like a peer-to-peer connection

The VPP version is (both nodes):

vpp# show version
vpp v21.01.0-6~gf70123b2c built by suse on SUSE at 2021-05-06T12:18:31
vpp# show version verbose
Version:  v21.01.0-6~gf70123b2c
Compiled by:  suse
Compile host: SUSE
Compile date: 2021-05-06T12:18:31
Compile location: /root/vpp-sp/vpp
Compiler: GCC 7.5.0
Current PID:  6677

The VPP config uses the DPDK interface (both nodes):

vpp# show hardware-interfaces
              Name                Idx   Link  Hardware
Ext-0  1 up   Ext-0
  Link speed: 10 Gbps
  Ethernet address e4:43:4b:ed:59:10
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 192), desc 1024 (min 64 max 4096 align 32)
tx: queues 3 (max 192), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:1572 subsystem 1028:1f9c address :17:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc rss-hash
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: 

Re: [vpp-dev] heap sizes

2021-07-01 Thread Matthew Smith via lists.fd.io
On Thu, Jul 1, 2021 at 10:07 AM Damjan Marion  wrote:

>
>
> > On 01.07.2021., at 16:12, Matthew Smith  wrote:
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion  wrote:
> >
> >
> > > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io
>  wrote:
> > >
> > >> Yes, allowing dynamic heap growth sounds like it could be better.
> > >> Alternatively... if memory allocations could fail and something more
> > >> graceful than VPP exiting could occur, that may also be better. E.g.
> if
> > >> I'm adding a route and try to allocate a counter for it and that
> fails, it
> > >> would be better to refuse to add the route than to exit and take the
> > >> network down.
> > >>
> > >> I realize that neither of those options is easy to do btw. I'm just
> trying
> > >> to figure out how to make it easier and more forgiving for users to
> set up
> > >> their configuration without making them learn about various memory
> > >> parameters.
> > >
> > > Understood, but setting a very high default will just make users of
> smaller config puzzled too  and I think changing all memory allocation
> callsites to check for NULL would be a big paradigm change in VPP.
> > > That's why I think a dynamically growing heap might be better but I do
> not really know what would be the complexity.
> > > That said, you can probably change the default in your own build and
> that should work.
> > >
> >
> > Fully agree with Benoit. We should not increase the heap size default value.
> >
> > Things are actually a bit more complicated. For performance reasons
> people should use
> > hugepages whenever they are available, but they are also not default.
> > When hugepages are used all pages are immediately backed with physical
> memory.
> >
> > So different use cases require different heap configurations and end
> user needs to tune that.
> > Same applies for other things like stats segment page size which again
> may impact forwarding
> > performance significantly.
> >
> > If messing with startup.conf is too complicated for end user, some nice
> configuration script may be helpful.
> > Or just throwing few startup.confs into extras/startup_configs.
> >
> > Dynamic heap is possible, but not straight forward, as at some places we
> use offsets
> > to the start of the heap, so additional allocation cannot be anywhere.
> > Also it will not help in some cases, i.e. when 1G hugepage is used for
> heap, growing up to 2G
> > will fail if 2nd 1G page is not pre-allocated.
> >
> >
> > Sorry for not being clear. I was not advocating any change to defaults
> in VPP code in gerrit. I was trying to figure out the impact of changing
> the default value written in startup.conf by the management plane I work
> on. And also have a conversation on whether there are ways that it could be
> made easier to tune memory parameters correctly.
>
> ok, so let me try to answer your original questions:
>
> > It's my understanding that when you set the size of the main heap or the
> stat segment in startup.conf, the size you specify is used to set up
> virtual address space and the system does not actually allocate that full
> amount of memory to VPP. I think when VPP tries to read/write addresses
> within the address space, then memory is requested from the system to back
> the chunk of address space containing the address being accessed. Is my
> understanding correct(ish)?
>
> heap-size parameter defines size of memory mapping created for the heap.
> With the normal 4K pages mapping is not backed by physical memory. Instead,
> first time you try to access specific page CPU will generate page fault,
> and kernel will handle it by allocating 4k chunk of physical memory to back
> that specific virtual address and setup MMU mapping for that page.
>
> In VPP we don’t have reverse process, even if all memory allocations which
> use specific 4k page are freed, that 4K page will not be returned to
> kernel, as kernel simply doesn’t know that specific page is not in use
> anymore.
> Solution would be to somehow track number of memory allocations sharing
> single 4K page and call madvise() system call when last one is freed...
>
> If you are using hugepages, all virtual memory is immediately backed by
> physical memory so VPP with 32G of hugepage heap will use 32G of physical
> memory as long as VPP is running.
>
> If you do `show memory main-heap` you will actually see how many physical
> pages are allocated:
>
> vpp# show memory main-heap
> Thread 0 vpp_main
>   base 0x7f6f95c9f000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 50702, not-mapped 211442
>   numa 1: 50702 pages, 198.05m bytes
> total: 1023.99M, used: 115.51M, free: 908.49M, trimmable: 905.75M
>
>
> Out of this you can see that heap is using 4K pages, 262144 total, and
> 50702 are mapped to physical memory.
> All 50702 pages are using memory on numa node 1.
>
> So effectively VPP is using around 198 MB of physical memory for heap
> while 

[vpp-dev] Flow API questions/Hierarchical Queuing feature support

2021-07-01 Thread satish amara
[Edited Message Follows]

Subject:
#vpp #dpdk
Flow API questions/Hierarchical Queuing feature support

Hi,

I have a question about Flow API.
test flow [add|del|enable|disable] [index ] "
   "[src-ip ] [dst-ip ] "
   "[ip6-src-ip ] [ip6-dst-ip ] "
   "[src-port ] [dst-port ] "
   "[proto ] "
   "[gtpc teid ] [gtpu teid ] [vxlan ] "
   "[session id ] [spi ]"
   "[next-node ] [mark ] [buffer-advance ] "
   "[redirect-to-queue ] [drop] "
   "[rss function ] [rss types ]" ,

It was mentioned that the Flow action can be mark. Is the mark-id an opaque 
field in the packet buffer's metadata? I am trying to see if the mark id can be 
preserved until the packet reaches the output interface.  
Does the Flow API expect support from hardware for flow classification?  Is 
there any trace function or log where I can see this field set?
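For what it's worth, per-packet buffer metadata can be inspected with the packet tracer; a minimal sketch, assuming the packets arrive via dpdk-input (whether the hardware flow mark itself is printed depends on the driver's trace format):
```
vppctl trace add dpdk-input 10
vppctl show trace
```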

Also, it looks like Hierarchical Queuing support is not enabled in the latest 
release. Any plans to support it again in the future?   I would like to know an 
example command on how to set the pktfield for hierarchical queuing.  It's not 
clear from the documentation how to set it based on packet fields like src-ip, 
dst-ip, src-port, dst-port and proto. Do I need to mention a mask value for it?

When is the pktfield set - when the packet arrives on the interface, or before 
the packet is queued?

The command below is used to set the packet fields required for classifying the 
incoming packet. As a result of the classification process, packet field 
information will be mapped to a 5-tuple (subport, pipe, traffic class, queue, 
color) and stored in the packet mbuf.

set dpdk interface hqos pktfield  id subport|pipe|tc offset 
   mask 

Regards,
Satish K Amara

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19642): https://lists.fd.io/g/vpp-dev/message/19642
Mute This Topic: https://lists.fd.io/mt/83853480/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: [vpp-dev] heap sizes

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 16:12, Matthew Smith  wrote:
> 
> 
> 
> On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion  wrote:
> 
> 
> > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io 
> >  wrote:
> > 
> >> Yes, allowing dynamic heap growth sounds like it could be better.
> >> Alternatively... if memory allocations could fail and something more
> >> graceful than VPP exiting could occur, that may also be better. E.g. if
> >> I'm adding a route and try to allocate a counter for it and that fails, it
> >> would be better to refuse to add the route than to exit and take the
> >> network down.
> >> 
> >> I realize that neither of those options is easy to do btw. I'm just trying
> >> to figure out how to make it easier and more forgiving for users to set up
> >> their configuration without making them learn about various memory
> >> parameters.
> > 
> > Understood, but setting a very high default will just make users of smaller 
> > config puzzled too  and I think changing all memory allocation callsites 
> > to check for NULL would be a big paradigm change in VPP.
> > That's why I think a dynamically growing heap might be better but I do not 
> > really know what would be the complexity.
> > That said, you can probably change the default in your own build and that 
> > should work.
> > 
> 
> Fully agree with Benoit. We should not increase the heap size default value.
> 
> Things are actually a bit more complicated. For performance reasons people 
> should use 
> hugepages whenever they are available, but they are also not default.
> When hugepages are used all pages are immediately backed with physical memory.
> 
> So different use cases require different heap configurations and end user 
> needs to tune that.
> Same applies for other things like stats segment page size which again may 
> impact forwarding
> performance significantly.
> 
> If messing with startup.conf is too complicated for end user, some nice 
> configuration script may be helpful.
> Or just throwing few startup.confs into extras/startup_configs.
> 
> Dynamic heap is possible, but not straight forward, as at some places we use 
> offsets
> to the start of the heap, so additional allocation cannot be anywhere.
> Also it will not help in some cases, i.e. when 1G hugepage is used for heap, 
> growing up to 2G
> will fail if 2nd 1G page is not pre-allocated.
> 
> 
> Sorry for not being clear. I was not advocating any change to defaults in VPP 
> code in gerrit. I was trying to figure out the impact of changing the default 
> value written in startup.conf by the management plane I work on. And also 
> have a conversation on whether there are ways that it could be made easier to 
> tune memory parameters correctly. 

ok, so let me try to answer your original questions:

> It's my understanding that when you set the size of the main heap or the stat 
> segment in startup.conf, the size you specify is used to set up virtual 
> address space and the system does not actually allocate that full amount of 
> memory to VPP. I think when VPP tries to read/write addresses within the 
> address space, then memory is requested from the system to back the chunk of 
> address space containing the address being accessed. Is my understanding 
> correct(ish)?

The heap-size parameter defines the size of the memory mapping created for the heap. With 
normal 4K pages the mapping is not backed by physical memory. Instead, the first 
time you try to access a specific page the CPU will generate a page fault, and the kernel 
will handle it by allocating a 4K chunk of physical memory to back that specific 
virtual address and setting up the MMU mapping for that page.
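For reference, both the size and the page size of the main heap are startup.conf knobs; a minimal sketch (the sizes here are illustrative, not recommendations):
```
memory {
  main-heap-size 2G
  main-heap-page-size default-hugepage
}
```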

In VPP we don’t have the reverse process: even if all memory allocations which use 
a specific 4K page are freed, that 4K page will not be returned to the kernel, as 
the kernel simply doesn’t know that the page is not in use anymore.
A solution would be to somehow track the number of memory allocations sharing a single 
4K page and call the madvise() system call when the last one is freed...

If you are using hugepages, all virtual memory is immediately backed by 
physical memory, so VPP with a 32G hugepage heap will use 32G of physical 
memory as long as VPP is running.

If you do `show memory main-heap` you will actually see how many physical pages 
are allocated:

vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7f6f95c9f000, size 1g, locked, unmap-on-destroy, name 'main heap'
page stats: page-size 4K, total 262144, mapped 50702, not-mapped 211442
  numa 1: 50702 pages, 198.05m bytes
total: 1023.99M, used: 115.51M, free: 908.49M, trimmable: 905.75M


Out of this you can see that the heap is using 4K pages, 262144 in total, of which 50702 
are mapped to physical memory.
All 50702 pages are using memory on NUMA node 1.

So effectively VPP is using around 198 MB of physical memory for the heap while 
real heap usage is only 115 MB.
Such a big difference is mainly caused by one place in our code which temporarily 
allocates ~200 MB of memory for a temporary vector. 

Re: [vpp-dev] issue with RA suppress

2021-07-01 Thread ashish . saxena
Hi,
Thanks for all the replies.
We were trying to run RA suppress on VPP version 21.01, but are still facing 
the same issue. Earlier in this thread there were discussions regarding 
rewriting the IPv6 RA code.
Do we have a fix for this issue now in the latest release (21.06) or in any 
developer's branch?

Regards,

Ashish

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19686): https://lists.fd.io/g/vpp-dev/message/19686
Mute This Topic: https://lists.fd.io/mt/80151031/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] heap sizes

2021-07-01 Thread Matthew Smith via lists.fd.io
On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion  wrote:

>
>
> > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io wrote:
> >
> >> Yes, allowing dynamic heap growth sounds like it could be better.
> >> Alternatively... if memory allocations could fail and something more
> >> graceful than VPP exiting could occur, that may also be better. E.g. if
> >> I'm adding a route and try to allocate a counter for it and that fails,
> it
> >> would be better to refuse to add the route than to exit and take the
> >> network down.
> >>
> >> I realize that neither of those options is easy to do btw. I'm just
> trying
> >> to figure out how to make it easier and more forgiving for users to set
> up
> >> their configuration without making them learn about various memory
> >> parameters.
> >
> > Understood, but setting a very high default will just make users of
> smaller config puzzled too  and I think changing all memory allocation
> callsites to check for NULL would be a big paradigm change in VPP.
> > That's why I think a dynamically growing heap might be better but I do
> not really know what would be the complexity.
> > That said, you can probably change the default in your own build and
> that should work.
> >
>
> Fully agree with Benoit. We should not increase the heap size default value.
>
> Things are actually a bit more complicated. For performance reasons people
> should use
> hugepages whenever they are available, but they are also not default.
> When hugepages are used all pages are immediately backed with physical
> memory.
>
> So different use cases require different heap configurations and end user
> needs to tune that.
> Same applies for other things like stats segment page size which again may
> impact forwarding
> performance significantly.
>
> If messing with startup.conf is too complicated for end user, some nice
> configuration script may be helpful.
> Or just throwing few startup.confs into extras/startup_configs.
>
> Dynamic heap is possible, but not straight forward, as at some places we
> use offsets
> to the start of the heap, so additional allocation cannot be anywhere.
> Also it will not help in some cases, i.e. when 1G hugepage is used for
> heap, growing up to 2G
> will fail if 2nd 1G page is not pre-allocated.
>
>
Sorry for not being clear. I was not advocating any change to defaults in
VPP code in gerrit. I was trying to figure out the impact of changing the
default value written in startup.conf by the management plane I work on.
And also have a conversation on whether there are ways that it could be
made easier to tune memory parameters correctly.

-Matt

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19685): https://lists.fd.io/g/vpp-dev/message/19685
Mute This Topic: https://lists.fd.io/mt/83856384/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Mechthild Buescher via lists.fd.io
Hi all,

I still don’t have success. This is the configuration I tried:
set interface state NCIC-1-v1 up

create host-interface name Vpp2Host
set interface state host-Vpp2Host up


ip table add 4093
create sub-interfaces host-Vpp2Host 4093
set interface state host-Vpp2Host.4093 up
set interface ip table host-Vpp2Host.4093 4093
set interface ip address host-Vpp2Host.4093 198.19.255.249/29

ip table add 1
create sub-interfaces NCIC-1-v1 1
set interface state NCIC-1-v1.1 up
set interface ip table NCIC-1-v1.1 1
ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093

ip table add 2
create sub-interfaces NCIC-1-v1 2
set interface state NCIC-1-v1.2 up
set interface ip table NCIC-1-v1.2 2
set interface ip address NCIC-1-v1.2 10.10.203.19/29
ip route add 198.19.255.248/29 table 2 via 198.19.255.249 next-hop-table 4093

ip table add 3
create sub-interfaces NCIC-1-v1 3
set interface state NCIC-1-v1.3 up
set interface ip table NCIC-1-v1.3 3
set interface ip address NCIC-1-v1.3 10.10.203.19/29
ip route add 198.19.255.248/29 table 3 via 198.19.255.249 ip4-look-in-table 4093

This is how I tried the config:
vppctl ping 198.19.255.253 table-id 4093
116 bytes from 198.19.255.253: icmp_seq=2 ttl=64 time=9.2805 ms
vppctl ping 198.19.255.253 table-id 1
Failed: no egress interface
vppctl ping 198.19.255.253 table-id 2
Statistics: 5 sent, 0 received, 100% packet loss
vppctl ping 198.19.255.253 table-id 3
Failed: no egress interface

This is what I see in vpp:
vppctl sh int addr
Ext-0 (dn):
NCIC-1-v1 (up):
NCIC-1-v1.1 (up):
NCIC-1-v1.2 (up):
  L3 10.10.203.19/29 ip4 table-id 2 fib-idx 3
NCIC-1-v1.3 (up):
  L3 10.10.203.19/29 ip4 table-id 3 fib-idx 4
NCIC-1-v2 (dn):
Radio-0 (dn):
host-Vpp2Host (up):
host-Vpp2Host.4093 (up):
  L3 198.19.255.249/29 ip4 table-id 4093 fib-idx 1
local0 (dn):

vppctl sh ip fib 198.19.255.253
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[default-route:1, ]
0.0.0.0/0 fib:0 index:0 locks:2
  default-route refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
  path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
[@0]: dpo-drop ip4

forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
ipv4-VRF:4093, fib_index:1, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:2, ]
198.19.255.253/32 fib:1 index:41 locks:2
  adjacency refs:1 entry-flags:attached, src-flags:added,contributing,active, 
cover:12
path-list:[54] locks:2 uPRF-list:46 len:1 itfs:[6, ]
  path:[60] pl-index:54 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
198.19.255.253 host-Vpp2Host.4093
  [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:3 
flags:[] a215f39524f302fe349f8c8a81000ffd0800
Extensions:
 path:60 adj-flags:[refines-cover]
forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:42 buckets:1 uRPF:46 to:[6:576]]
[0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:3 
flags:[] a215f39524f302fe349f8c8a81000ffd0800
ipv4-VRF:1, fib_index:2, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, ]
198.19.255.248/29 fib:2 index:21 locks:2
  CLI refs:1 src-flags:added,contributing,active,
path-list:[31] locks:2 flags:shared, uPRF-list:23 len:0 itfs:[]
  path:[33] pl-index:31 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
 fib-index:1

forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:23 to:[0:0]]
[0] [@11]: dst-address,unicast lookup in ipv4-VRF:4093
ipv4-VRF:2, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, ]
198.19.255.248/29 fib:3 index:31 locks:2
  CLI refs:1 src-flags:added,contributing,active,
path-list:[42] locks:2 flags:shared, uPRF-list:34 len:0 itfs:[]
  path:[46] pl-index:42 ip4 weight=1 pref=0 recursive:  oper-flags:resolved,
via 198.19.255.249 in fib:1 via-fib:15 via-dpo:[dpo-load-balance:16]

forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:32 buckets:1 uRPF:34 to:[5:480]]
[0] [@12]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:18 
to:[6:576] via:[5:480]]
  [0] [@2]: dpo-receive: 198.19.255.249 on host-Vpp2Host.4093
ipv4-VRF:3, fib_index:4, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, ]
0.0.0.0/0 fib:4 index:32 locks:2
  default-route refs:1 entry-flags:drop, 

Re: [vpp-dev] Issue in VPP v21.06 compilation

2021-07-01 Thread Chinmaya Aggarwal
We are using CentOS 8.4.2105 and gcc version 8.4.1.

Also, we have downloaded and installed the Mellanox driver from the link below:
https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed

Driver Version : 5.3-1.0.0.1

Thanks and Regards,
Chinmaya Agarwal.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19683): https://lists.fd.io/g/vpp-dev/message/19683
Mute This Topic: https://lists.fd.io/mt/83890737/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 07:35, Pierre Louis Aublin  
> wrote:
> 
> diff --git a/build/external/packages/ipsec-mb.mk 
> b/build/external/packages/ipsec-mb.mk
> index d0bd2af19..119eb5219 100644
> --- a/build/external/packages/ipsec-mb.mk
> +++ b/build/external/packages/ipsec-mb.mk
> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
>   SAFE_DATA=n \
>   PREFIX=$(ipsec-mb_install_dir) \
>   NASM=$(ipsec-mb_install_dir)/bin/nasm \
> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)

Why do you need this change?

If I get it right, Bluefield uses ARM CPUs and we don’t compile the Intel ipsec-mb 
lib on ARM.

$ git grep ARCH_X86_64 build/external/Makefile
build/external/Makefile:ARCH_X86_64=$(filter x86_64,$(shell uname -m))
build/external/Makefile:install: $(if $(ARCH_X86_64), nasm-install 
ipsec-mb-install) dpdk-install rdma-core-install quicly-install libbpf-install
build/external/Makefile:config: $(if $(ARCH_X86_64), nasm-config 
ipsec-mb-config) dpdk-config rdma-core-config quicly-build

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19682): https://lists.fd.io/g/vpp-dev/message/19682
Mute This Topic: https://lists.fd.io/mt/83910198/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Damjan Marion via lists.fd.io

Might be worth trying our native driver (rdma) instead of using dpdk…..
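For example, something along these lines (a sketch; 'enp3s0f0' is a hypothetical host interface name and must be replaced with the actual ConnectX/Bluefield netdev):
```
create interface rdma host-if enp3s0f0 name rdma-0
set interface state rdma-0 up
```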

— 
Damjan


> On 01.07.2021., at 11:07, Pierre Louis Aublin  
> wrote:
> 
> The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address :03:00.0" 
> message disappears; however the network interface still doesn't show up. 
> Interestingly, vpp on the host also prints this message, yet the interface 
> can be used.
> 
> By any chance, would you have any clue on what I could try to further debug 
> this issue?
> 
> Best
> Pierre Louis
> 
> On 2021/07/01 17:50, Benoit Ganne (bganne) via lists.fd.io wrote:
>> Please try https://gerrit.fd.io/r/c/vpp/+/32965 and report if it works.
>> 
>> Best
>> ben
>> 
>>> -Original Message-
>>> From: vpp-dev@lists.fd.io  On Behalf Of Pierre Louis
>>> Aublin
>>> Sent: jeudi 1 juillet 2021 07:36
>>> To: vpp-dev@lists.fd.io
>>> Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC
>>> 
>>> Dear VPP developers
>>> 
>>> I would like to run VPP on the Bluefield-2 smartNIC, but even though I
>>> managed to compile it the interface doesn't show up inside the CLI. By
>>> any chance, would you know how to compile and configure vpp for this
>>> device?
>>> 
>>> I am using VPP v21.06-rc2 and did the following modifications so that it
>>> can compile:
>>> ```
>>> diff --git a/build/external/packages/dpdk.mk
>>> b/build/external/packages/dpdk.mk
>>> index c7eb0fc3f..31a5c764e 100644
>>> --- a/build/external/packages/dpdk.mk
>>> +++ b/build/external/packages/dpdk.mk
>>> @@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM?= 128
>>>   DPDK_USE_LIBBSD  ?= n
>>>   DPDK_DEBUG   ?= n
>>>   DPDK_MLX4_PMD?= n
>>> -DPDK_MLX5_PMD?= n
>>> -DPDK_MLX5_COMMON_PMD ?= n
>>> +DPDK_MLX5_PMD?= y
>>> +DPDK_MLX5_COMMON_PMD ?= y
>>>   DPDK_TAP_PMD ?= n
>>>   DPDK_FAILSAFE_PMD?= n
>>>   DPDK_MACHINE ?= default
>>> diff --git a/build/external/packages/ipsec-mb.mk
>>> b/build/external/packages/ipsec-mb.mk
>>> index d0bd2af19..119eb5219 100644
>>> --- a/build/external/packages/ipsec-mb.mk
>>> +++ b/build/external/packages/ipsec-mb.mk
>>> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
>>>SAFE_DATA=n \
>>>PREFIX=$(ipsec-mb_install_dir) \
>>>NASM=$(ipsec-mb_install_dir)/bin/nasm \
>>> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
>>> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
>>>   endef
>>> 
>>>   define  ipsec-mb_install_cmds
>>> ```
>>> 
>>> 
>>> However, when running the VPP CLI, the network interface does not show up:
>>> ```
>>> $ sudo -E make run
>>> clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K
>>> hugepages on numa node 0
>>> dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found
>>> at PCI address :03:00.0
>>> 
>>> dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure
>>> cryptodev
>>> vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not
>>> loaded...
>>>  _____   _  ___
>>>   __/ __/ _ \  (_)__| | / / _ \/ _ \
>>>   _/ _// // / / / _ \   | |/ / ___/ ___/
>>>   /_/ /(_)_/\___/   |___/_/  /_/
>>> 
>>> DBGvpp# show int
>>>Name   IdxState  MTU
>>> (L3/IP4/IP6/MPLS) Counter  Count
>>> local00 down 0/0/0/0
>>> DBGvpp# sh hard
>>>NameIdx   Link  Hardware
>>> local0 0down  local0
>>>Link speed: unknown
>>>local
>>> ```
>>> 
>>> 
>>> The dpdk-testpmd application seems to start correctly though:
>>> ```
>>> $ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd
>>> -l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1
>>> --total-num-mbufs=2048
>>> EAL: Detected 8 lcore(s)
>>> EAL: Detected 1 NUMA nodes
>>> EAL: Detected static linkage of DPDK
>>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>> EAL: Selected IOVA mode 'VA'
>>> EAL: No available 32768 kB hugepages reported
>>> EAL: No available 64 kB hugepages reported
>>> EAL: No available 1048576 kB hugepages reported
>>> EAL: Probing VFIO support...
>>> EAL: VFIO support initialized
>>> EAL:   Invalid NUMA socket, default to 0
>>> EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0 (socket
>>> 0)
>>> mlx5_pci: Failed to allocate Tx DevX UAR (BF)
>>> mlx5_pci: Failed to allocate Rx DevX UAR (BF)
>>> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
>>> Interactive-mode selected
>>> testpmd: create a new mbuf pool : n=2048, size=2176, socket=0
>>> testpmd: preferred mempool ops selected: ring_mp_mc
>>> 
>>> Warning! port-topology=paired and odd forward ports number, the last
>>> port will pair with itself.
>>> 
>>> Configuring Port 0 (socket 0)
>>> Port 0: 0C:42:A1:A4:89:B4
>>> Checking link statuses...
>>> Done
>>> testpmd>
>>> ```
>>> 
>>> Is the problem related to the failure to 

Re: [vpp-dev] heap sizes

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
> 
>> Yes, allowing dynamic heap growth sounds like it could be better.
>> Alternatively... if memory allocations could fail and something more
>> graceful than VPP exiting could occur, that may also be better. E.g. if
>> I'm adding a route and try to allocate a counter for it and that fails, it
>> would be better to refuse to add the route than to exit and take the
>> network down.
>> 
>> I realize that neither of those options is easy to do btw. I'm just trying
>> to figure out how to make it easier and more forgiving for users to set up
>> their configuration without making them learn about various memory
>> parameters.
> 
> Understood, but setting a very high default will just make users of smaller 
> config puzzled too  and I think changing all memory allocation callsites to 
> check for NULL would be a big paradigm change in VPP.
> That's why I think a dynamically growing heap might be better but I do not 
> really know what would be the complexity.
> That said, you can probably change the default in your own build and that 
> should work.
> 

Fully agree with Benoit. We should not increase the heap size default value.

Things are actually a bit more complicated. For performance reasons people 
should use hugepages whenever they are available, but they are also not the default.
When hugepages are used, all pages are immediately backed with physical memory.

So different use cases require different heap configurations and the end user needs 
to tune that.
The same applies to other things like the stats segment page size, which again may 
impact forwarding performance significantly.
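For example, the stats segment page size mentioned above is also a startup.conf knob; a minimal sketch (assuming a build where the statseg page-size option is available, values illustrative):
```
statseg {
  size 128M
  page-size default-hugepage
}
```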

If messing with startup.conf is too complicated for the end user, some nice 
configuration script may be helpful, or just throwing a few startup.confs into 
extras/startup_configs.

A dynamic heap is possible, but not straightforward, as in some places we use 
offsets to the start of the heap, so the additional allocation cannot be just anywhere.
Also it will not help in some cases, e.g. when a 1G hugepage is used for the heap, 
growing up to 2G will fail if a 2nd 1G page is not pre-allocated.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19680): https://lists.fd.io/g/vpp-dev/message/19680
Mute This Topic: https://lists.fd.io/mt/83856384/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Neale Ranns



From: Benoit Ganne (bganne) 
Date: Thursday, 1 July 2021 at 11:35
To: Neale Ranns , Mechthild Buescher 
, vpp-dev@lists.fd.io 
Subject: RE: [vpp-dev] next-hop-table between two FIB tables results in punt 
and 'unknown ip protocol'
>> As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as
>> you want to deliver the packet locally instead of forwarding it. Try
>> changing it to:
>> ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093

> 0.0.0.0/32 in any table is a drop. One cannot specify a route to be
> recursive via another network, i.e. via 0.0.0.0/0, VPP doesn't support
> that.

I do not disagree with the rest of your comment, however this worked for me - 
maybe because of some odd CLI behavior?

probably 


vpp# create packet-generator interface pg0
vpp# create packet-generator interface pg1
vpp# set int ip addr pg0 192.168.1.1/24
vpp# set int st pg0 up
vpp# ip table add 4093
vpp# set int st pg1 up
vpp# set int ip table pg1 4093
vpp# set int ip address pg1 198.19.255.249/29
vpp# ip neigh pg1 198.19.255.253 1.2.3 static
vpp# ip route add 198.19.255.248/29 via 0.0.0.0 next-hop-table 4093
vpp# sh ip fib 198.19.255.253 table 0
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[default-route:1, ]
198.19.255.248/29 fib:0 index:21 locks:2
  CLI refs:1 src-flags:added,contributing,active,
path-list:[29] locks:2 flags:shared, uPRF-list:27 len:0 itfs:[]
  path:[33] pl-index:29 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
 fib-index:1

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:27 to:[1:100]]
[0] [@12]: dst-address,unicast lookup in ipv4-VRF:4093

this is a lookup-DPO.

Looks like the CLI interprets ‘via 0.0.0.0 next-hop-table 4093’ and ‘via 
ip4-lookup-in-table 4093’ as the same thing.
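In other words, under that interpretation these two forms install the same lookup DPO (a sketch using the prefix from this thread):
```
ip route add 198.19.255.248/29 via 0.0.0.0 next-hop-table 4093
ip route add 198.19.255.248/29 via ip4-lookup-in-table 4093
```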

/neale



And tracing seems to give the expected result:
00:00:02:247022: pg-input
  stream x, 100 bytes, sw_if_index 1
  current data 0, length 100, buffer-pool 0, ref-count 1, trace handle 0x0
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247055: ip4-input
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247079: ip4-lookup
  fib 0 dpo-idx 0 flow hash: 0x
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247088: lookup-ip4-dst
 fib-index:1 addr:198.19.255.253 load-balance:22
00:00:02:247092: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 4 : ipv4 via 198.19.255.253 pg1: mtu:9000 next:3 
flags:[] 00010002000302fe257345030800 flow hash: 0x
  : 00010002000302fe25734503080045643f11f3cdc0a80102c613
  0020: fffd10e104d200487deb000102030405060708090a0b0c0d0e0f1011
00:00:02:247099: pg1-output
  pg1
  IP4: 02:fe:25:73:45:03 -> 00:01:00:02:00:03
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 63, length 100, checksum 0xf3cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247106: pg1-tx
buffer 0x9ffd4: current data -14, length 114, buffer-pool 0, ref-count 1, 
trace handle 0x0
loop-counter 1
  IP4: 02:fe:25:73:45:03 -> 00:01:00:02:00:03
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 63, length 100, checksum 0xf3cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb

ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19679): https://lists.fd.io/g/vpp-dev/message/19679
Mute This Topic: https://lists.fd.io/mt/83895986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
What does your VPP dpdk config look like in startup.conf? In particular, did you 
whitelist the device? See 
https://fd.io/docs/vpp/master/gettingstarted/users/configuring/startup.html#the-dpdk-section
Also, please share the output of 'show logs'.
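For reference, a minimal dpdk section that whitelists the device would look something like this (a sketch; it assumes PCI domain 0000 for the address shown in your logs):
```
dpdk {
  dev 0000:03:00.0
}
```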

Best
ben

> -Original Message-
> From: Pierre Louis Aublin 
> Sent: jeudi 1 juillet 2021 11:08
> To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP on a Bluefield-2 smartNIC
> 
> The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address
> :03:00.0" message disappears; however the network interface still
> doesn't show up. Interestingly, vpp on the host also prints this
> message, yet the interface can be used.
> 
> By any chance, would you have any clue on what I could try to further
> debug this issue?
> 
> Best
> Pierre Louis
> 
> On 2021/07/01 17:50, Benoit Ganne (bganne) via lists.fd.io wrote:
> > Please try https://gerrit.fd.io/r/c/vpp/+/32965 and report if it works.
> >
> > Best
> > ben
> >
> >> -Original Message-
> >> From: vpp-dev@lists.fd.io  On Behalf Of Pierre
> Louis
> >> Aublin
> >> Sent: jeudi 1 juillet 2021 07:36
> >> To: vpp-dev@lists.fd.io
> >> Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC
> >>
> >> Dear VPP developers
> >>
> >> I would like to run VPP on the Bluefield-2 smartNIC, but even though I
> >> managed to compile it the interface doesn't show up inside the CLI. By
> >> any chance, would you know how to compile and configure vpp for this
> >> device?
> >>
> >> I am using VPP v21.06-rc2 and did the following modifications so that
> it
> >> can compile:
> >> ```
> >> diff --git a/build/external/packages/dpdk.mk
> >> b/build/external/packages/dpdk.mk
> >> index c7eb0fc3f..31a5c764e 100644
> >> --- a/build/external/packages/dpdk.mk
> >> +++ b/build/external/packages/dpdk.mk
> >> @@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM    ?= 128
> >>    DPDK_USE_LIBBSD  ?= n
> >>    DPDK_DEBUG   ?= n
> >>    DPDK_MLX4_PMD    ?= n
> >> -DPDK_MLX5_PMD    ?= n
> >> -DPDK_MLX5_COMMON_PMD ?= n
> >> +DPDK_MLX5_PMD    ?= y
> >> +DPDK_MLX5_COMMON_PMD ?= y
> >>    DPDK_TAP_PMD ?= n
> >>    DPDK_FAILSAFE_PMD    ?= n
> >>    DPDK_MACHINE ?= default
> >> diff --git a/build/external/packages/ipsec-mb.mk
> >> b/build/external/packages/ipsec-mb.mk
> >> index d0bd2af19..119eb5219 100644
> >> --- a/build/external/packages/ipsec-mb.mk
> >> +++ b/build/external/packages/ipsec-mb.mk
> >> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
> >>     SAFE_DATA=n \
> >>     PREFIX=$(ipsec-mb_install_dir) \
> >>     NASM=$(ipsec-mb_install_dir)/bin/nasm \
> >> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
> >> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
> >>    endef
> >>
> >>    define  ipsec-mb_install_cmds
> >> ```
> >>
> >>
> >> However, when running the VPP CLI, the network interface does not show
> up:
> >> ```
> >> $ sudo -E make run
> >> clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K
> >> hugepages on numa node 0
> >> dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found
> >> at PCI address :03:00.0
> >>
> >> dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure
> >> cryptodev
> >> vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not
> >> loaded...
> >>       ___    _    _   _  ___
> >>    __/ __/ _ \  (_)__    | | / / _ \/ _ \
> >>    _/ _// // / / / _ \   | |/ / ___/ ___/
> >>    /_/ /(_)_/\___/   |___/_/  /_/
> >>
> >> DBGvpp# show int
> >>     Name   Idx    State  MTU
> >> (L3/IP4/IP6/MPLS) Counter  Count
> >> local0    0 down 0/0/0/0
> >> DBGvpp# sh hard
> >>     Name    Idx   Link  Hardware
> >> local0 0    down  local0
> >>     Link speed: unknown
> >>     local
> >> ```
> >>
> >>
> >> The dpdk-testpmd application seems to start correctly though:
> >> ```
> >> $ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd
> >> -l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1
> >> --total-num-mbufs=2048
> >> EAL: Detected 8 lcore(s)
> >> EAL: Detected 1 NUMA nodes
> >> EAL: Detected static linkage of DPDK
> >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >> EAL: Selected IOVA mode 'VA'
> >> EAL: No available 32768 kB hugepages reported
> >> EAL: No available 64 kB hugepages reported
> >> EAL: No available 1048576 kB hugepages reported
> >> EAL: Probing VFIO support...
> >> EAL: VFIO support initialized
> >> EAL:   Invalid NUMA socket, default to 0
> >> EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0
> (socket
> >> 0)
> >> mlx5_pci: Failed to allocate Tx DevX UAR (BF)
> >> mlx5_pci: Failed to allocate Rx DevX UAR (BF)
> >> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
> >> Interactive-mode selected

Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
>> As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as
>> you want to deliver the packet locally instead of forwarding it. Try
>> changing it to:
>> ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093

> 0.0.0.0/32 in any table is a drop. One cannot specify a route to be
> recursive via another network, i.e. via 0.0.0.0/0, VPP doesn't support
> that.

I do not disagree with the rest of your comment, however this worked for me - 
maybe because of some odd CLI behavior?

vpp# create packet-generator interface pg0
vpp# create packet-generator interface pg1
vpp# set int ip addr pg0 192.168.1.1/24
vpp# set int st pg0 up
vpp# ip table add 4093
vpp# set int st pg1 up
vpp# set int ip table pg1 4093
vpp# set int ip address pg1 198.19.255.249/29
vpp# ip neigh pg1 198.19.255.253 1.2.3 static
vpp# ip route add 198.19.255.248/29 via 0.0.0.0 next-hop-table 4093
vpp# sh ip fib 198.19.255.253 table 0
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[default-route:1, ]
198.19.255.248/29 fib:0 index:21 locks:2
  CLI refs:1 src-flags:added,contributing,active,
path-list:[29] locks:2 flags:shared, uPRF-list:27 len:0 itfs:[]
  path:[33] pl-index:29 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
 fib-index:1

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:27 to:[1:100]]
[0] [@12]: dst-address,unicast lookup in ipv4-VRF:4093

And tracing seems to give the expected result:
00:00:02:247022: pg-input
  stream x, 100 bytes, sw_if_index 1
  current data 0, length 100, buffer-pool 0, ref-count 1, trace handle 0x0
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247055: ip4-input
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247079: ip4-lookup
  fib 0 dpo-idx 0 flow hash: 0x
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 64, length 100, checksum 0xf2cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247088: lookup-ip4-dst
 fib-index:1 addr:198.19.255.253 load-balance:22
00:00:02:247092: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 4 : ipv4 via 198.19.255.253 pg1: mtu:9000 next:3 
flags:[] 00010002000302fe257345030800 flow hash: 0x
  : 00010002000302fe25734503080045643f11f3cdc0a80102c613
  0020: fffd10e104d200487deb000102030405060708090a0b0c0d0e0f1011
00:00:02:247099: pg1-output
  pg1
  IP4: 02:fe:25:73:45:03 -> 00:01:00:02:00:03
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 63, length 100, checksum 0xf3cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb
00:00:02:247106: pg1-tx
buffer 0x9ffd4: current data -14, length 114, buffer-pool 0, ref-count 1, 
trace handle 0x0
loop-counter 1
  IP4: 02:fe:25:73:45:03 -> 00:01:00:02:00:03
  UDP: 192.168.1.2 -> 198.19.255.253
tos 0x00, ttl 63, length 100, checksum 0xf3cd dscp CS0 ecn NON_ECN
fragment id 0x
  UDP: 4321 -> 1234
length 72, checksum 0x7deb

ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19677): https://lists.fd.io/g/vpp-dev/message/19677
Mute This Topic: https://lists.fd.io/mt/83895986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] heap sizes

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
> Yes, allowing dynamic heap growth sounds like it could be better.
> Alternatively... if memory allocations could fail and something more
> graceful than VPP exiting could occur, that may also be better. E.g. if
> I'm adding a route and try to allocate a counter for it and that fails, it
> would be better to refuse to add the route than to exit and take the
> network down.
> 
> I realize that neither of those options is easy to do btw. I'm just trying
> to figure out how to make it easier and more forgiving for users to set up
> their configuration without making them learn about various memory
> parameters.

Understood, but setting a very high default will just puzzle users of smaller 
configs too, and I think changing all memory allocation callsites to 
check for NULL would be a big paradigm change in VPP.
That's why I think a dynamically growing heap might be better, but I do not 
really know what the complexity would be.
That said, you can probably change the default in your own build and that 
should work.

Best
ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19676): https://lists.fd.io/g/vpp-dev/message/19676
Mute This Topic: https://lists.fd.io/mt/83856384/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Pierre Louis Aublin
The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address 
:03:00.0" message disappears; however the network interface still 
doesn't show up. Interestingly, vpp on the host also prints this 
message, yet the interface can be used.


By any chance, would you have any clue on what I could try to further 
debug this issue?


Best
Pierre Louis

On 2021/07/01 17:50, Benoit Ganne (bganne) via lists.fd.io wrote:

Please try https://gerrit.fd.io/r/c/vpp/+/32965 and report if it works.

Best
ben


-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Pierre Louis
Aublin
Sent: jeudi 1 juillet 2021 07:36
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC

Dear VPP developers

I would like to run VPP on the Bluefield-2 smartNIC, but even though I
managed to compile it the interface doesn't show up inside the CLI. By
any chance, would you know how to compile and configure vpp for this
device?

I am using VPP v21.06-rc2 and did the following modifications so that it
can compile:
```
diff --git a/build/external/packages/dpdk.mk
b/build/external/packages/dpdk.mk
index c7eb0fc3f..31a5c764e 100644
--- a/build/external/packages/dpdk.mk
+++ b/build/external/packages/dpdk.mk
@@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM    ?= 128
   DPDK_USE_LIBBSD  ?= n
   DPDK_DEBUG   ?= n
   DPDK_MLX4_PMD    ?= n
-DPDK_MLX5_PMD    ?= n
-DPDK_MLX5_COMMON_PMD ?= n
+DPDK_MLX5_PMD    ?= y
+DPDK_MLX5_COMMON_PMD ?= y
   DPDK_TAP_PMD ?= n
   DPDK_FAILSAFE_PMD    ?= n
   DPDK_MACHINE ?= default
diff --git a/build/external/packages/ipsec-mb.mk
b/build/external/packages/ipsec-mb.mk
index d0bd2af19..119eb5219 100644
--- a/build/external/packages/ipsec-mb.mk
+++ b/build/external/packages/ipsec-mb.mk
@@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
    SAFE_DATA=n \
    PREFIX=$(ipsec-mb_install_dir) \
    NASM=$(ipsec-mb_install_dir)/bin/nasm \
- EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
+ EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
   endef

   define  ipsec-mb_install_cmds
```


However, when running the VPP CLI, the network interface does not show up:
```
$ sudo -E make run
clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K
hugepages on numa node 0
dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found
at PCI address :03:00.0

dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure
cryptodev
vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not
loaded...
      ___    _    _   _  ___
   __/ __/ _ \  (_)__    | | / / _ \/ _ \
   _/ _// // / / / _ \   | |/ / ___/ ___/
   /_/ /(_)_/\___/   |___/_/  /_/

DBGvpp# show int
    Name   Idx    State  MTU
(L3/IP4/IP6/MPLS) Counter  Count
local0    0 down 0/0/0/0
DBGvpp# sh hard
    Name    Idx   Link  Hardware
local0 0    down  local0
    Link speed: unknown
    local
```


The dpdk-testpmd application seems to start correctly though:
```
$ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd
-l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 32768 kB hugepages reported
EAL: No available 64 kB hugepages reported
EAL: No available 1048576 kB hugepages reported
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0 (socket
0)
mlx5_pci: Failed to allocate Tx DevX UAR (BF)
mlx5_pci: Failed to allocate Rx DevX UAR (BF)
mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
Interactive-mode selected
testpmd: create a new mbuf pool : n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last
port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:A4:89:B4
Checking link statuses...
Done
testpmd>
```

Is the problem related to the failure to allocate Tx and Rx DevX UAR?
How can I fix this?


I've also tried to set the Bluefield configuration parameters from dpdk
(https://github.com/DPDK/dpdk/blob/e2a234488854fdeee267a2aa582aa082fce01d6
e/config/defconfig_arm64-bluefield-linuxapp-gcc)
as follows:
```
diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
index 7db450e05..91017dda0 100644
--- a/build-data/packages/vpp.mk
+++ b/build-data/packages/vpp.mk
@@ -32,7 +32,8 @@ vpp_cmake_args += -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON
   endif
   ifeq (,$(TARGET_PLATFORM))
   ifeq ($(MACHINE),aarch64)
-vpp_cmake_args += -DVPP_LOG2_CACHE_LINE_SIZE=7

Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Neale Ranns


From: vpp-dev@lists.fd.io  on behalf of Benoit Ganne 
(bganne) via lists.fd.io 
Date: Thursday, 1 July 2021 at 10:38
To: Mechthild Buescher , vpp-dev@lists.fd.io 

Subject: Re: [vpp-dev] next-hop-table between two FIB tables results in punt 
and 'unknown ip protocol'
I think the issue is the way you populate the route:
ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table 4093

As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as you 
want to deliver the packet locally instead of forwarding it. Try changing it to:
ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093
0.0.0.0/32 in any table is a drop. One cannot specify a route to be recursive 
via another network, i.e. via 0.0.0.0/0, VPP doesn’t support that.
If the semantics you’re after is ‘do a second lookup in table 4093’ then you 
want ‘via ip4-lookup-in-table 4093’.
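i.e. something like this (a sketch, matching the prefix and tables from this thread):
```
ip route add 198.19.255.248/29 table 1 via ip4-lookup-in-table 4093
```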


But as Neale pointed out, if you only have a few routes in 4093, it would be 
even better to just do:
ip route add 198.19.255.248/29 table 1 via host-Vpp2Host.4093

I suspect if you don’t do it this way, then a ping from another device in the 
198.19.255.248/29 subnet won’t work, because the return path is not in table 1.
/neale


That would save you 1 additional fib lookup.

Best
ben

> -Original Message-
> From: Mechthild Buescher 
> Sent: jeudi 1 juillet 2021 09:39
> To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: RE: next-hop-table between two FIB tables results in punt and
> 'unknown ip protocol'
>
> Hi Ben,
>
> Sorry, I sent the wrong output for the fib table - with the 'next-hop-
> table' configuration it looks as follows:
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[default-route:1, ]
> 0.0.0.0/0 fib:0 index:0 locks:2
>   default-route refs:1 entry-flags:drop, src-
> flags:added,contributing,active,
> path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
>   path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
> [@0]: dpo-drop ip4
>
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
> [0] [@0]: dpo-drop ip4
> :
> ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel
> ] epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
> 198.19.255.253/32 fib:3 index:170 locks:2
>   adjacency refs:1 entry-flags:attached, src-
> flags:added,contributing,active, cover:53
> path-list:[188] locks:2 uPRF-list:191 len:1 itfs:[14, ]
>   path:[224] pl-index:188 ip4 weight=1 pref=0 attached-nexthop:  oper-
> flags:resolved,
> 198.19.255.253 host-Vpp2Host.4093
>   [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4
> flags:[] a215f39524f302fee89a0ec381000ffd0800
> Extensions:
>  path:224 adj-flags:[refines-cover]
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:171 buckets:1 uRPF:191
> to:[4:384]]
> [0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4
> flags:[] a215f39524f302fee89a0ec381000ffd0800
> ipv4-VRF:1, fib_index:4, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[API:1, CLI:2, adjacency:1, recursive-
> resolution:1, ]
> 198.19.255.248/29 fib:4 index:66 locks:2
>   CLI refs:1 src-flags:added,contributing,active,
> path-list:[75] locks:20 flags:shared, uPRF-list:75 len:0 itfs:[]
>   path:[89] pl-index:75 ip4 weight=1 pref=0 recursive:  oper-
> flags:resolved,
> via 198.19.255.249 in fib:3 via-fib:56 via-dpo:[dpo-load-
> balance:57]
>
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:67 buckets:1 uRPF:75 to:[0:0]]
> [0] [@11]: dpo-load-balance: [proto:ip4 index:57 buckets:1 uRPF:65
> to:[4:384]]
>   [0] [@2]: dpo-receive: 198.19.255.249 on host-Vpp2Host.4093
> ipv4-VRF:10, fib_index:5, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[CLI:2, ]
>
>
> The output below is when we replaced the 'next-hop table' routing with
> direct routing to /32 Ips. That direct routing is working but is regarded
> as work-around as long as we don't find a better solution.
>
> BR/Mechthild
>
> -Original Message-
> From: Mechthild Buescher
> Sent: Wednesday, 30 June 2021 18:32
> To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: RE: next-hop-table between two FIB tables results in punt and
> 'unknown ip protocol'
>
> Hi Ben,
>
> Thanks for your fast reply. Here is the requested output (I skipped config
> for other interfaces and VLANs)
>
> vppctl show int addr
> NCIC-1-v1 (up):
> NCIC-1-v1.1 (up):
>   L3 10.10.203.1/29 ip4 table-id 1 fib-idx 4
> host-Vpp2Host (up):
> host-Vpp2Host.4093 (up):
>   L3 198.19.255.249/29 ip4 table-id 4093 fib-idx 3
> local0 (dn):
>
> and:
> vppctl sh ip fib 198.19.255.253
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[default-route:1, ]
> 

Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
Please try https://gerrit.fd.io/r/c/vpp/+/32965 and report whether it works.

Best
ben
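
If the patch makes the port probe, the device would then typically be handed to
VPP through the usual dpdk stanza in startup.conf. A minimal sketch, assuming the
PCI address 0000:03:00.0 seen in the testpmd run quoted below (not a config taken
from this thread):

```
dpdk {
  dev 0000:03:00.0
}
```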

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Pierre Louis
> Aublin
> Sent: jeudi 1 juillet 2021 07:36
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC
> 
> Dear VPP developers
> 
> I would like to run VPP on the Bluefield-2 smartNIC, but even though I
> managed to compile it the interface doesn't show up inside the CLI. By
> any chance, would you know how to compile and configure vpp for this
> device?
> 
> I am using VPP v21.06-rc2 and did the following modifications so that it
> can compile:
> ```
> diff --git a/build/external/packages/dpdk.mk
> b/build/external/packages/dpdk.mk
> index c7eb0fc3f..31a5c764e 100644
> --- a/build/external/packages/dpdk.mk
> +++ b/build/external/packages/dpdk.mk
> @@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM    ?= 128
>   DPDK_USE_LIBBSD  ?= n
>   DPDK_DEBUG   ?= n
>   DPDK_MLX4_PMD    ?= n
> -DPDK_MLX5_PMD    ?= n
> -DPDK_MLX5_COMMON_PMD ?= n
> +DPDK_MLX5_PMD    ?= y
> +DPDK_MLX5_COMMON_PMD ?= y
>   DPDK_TAP_PMD ?= n
>   DPDK_FAILSAFE_PMD    ?= n
>   DPDK_MACHINE ?= default
> diff --git a/build/external/packages/ipsec-mb.mk
> b/build/external/packages/ipsec-mb.mk
> index d0bd2af19..119eb5219 100644
> --- a/build/external/packages/ipsec-mb.mk
> +++ b/build/external/packages/ipsec-mb.mk
> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
>    SAFE_DATA=n \
>    PREFIX=$(ipsec-mb_install_dir) \
>    NASM=$(ipsec-mb_install_dir)/bin/nasm \
> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
>   endef
> 
>   define  ipsec-mb_install_cmds
> ```
> 
> 
> However, when running the VPP CLI, the network interface does not show up:
> ```
> $ sudo -E make run
> clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K
> hugepages on numa node 0
> dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found
> at PCI address :03:00.0
> 
> dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure
> cryptodev
> vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not
> loaded...
>      ___    _    _   _  ___
>   __/ __/ _ \  (_)__    | | / / _ \/ _ \
>   _/ _// // / / / _ \   | |/ / ___/ ___/
>   /_/ /(_)_/\___/   |___/_/  /_/
> 
> DBGvpp# show int
>    Name   Idx    State  MTU
> (L3/IP4/IP6/MPLS) Counter  Count
> local0    0 down 0/0/0/0
> DBGvpp# sh hard
>    Name    Idx   Link  Hardware
> local0 0    down  local0
>    Link speed: unknown
>    local
> ```
> 
> 
> The dpdk-testpmd application seems to start correctly though:
> ```
> $ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd
> -l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1
> --total-num-mbufs=2048
> EAL: Detected 8 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available 32768 kB hugepages reported
> EAL: No available 64 kB hugepages reported
> EAL: No available 1048576 kB hugepages reported
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL:   Invalid NUMA socket, default to 0
> EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0 (socket
> 0)
> mlx5_pci: Failed to allocate Tx DevX UAR (BF)
> mlx5_pci: Failed to allocate Rx DevX UAR (BF)
> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=2048, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last
> port will pair with itself.
> 
> Configuring Port 0 (socket 0)
> Port 0: 0C:42:A1:A4:89:B4
> Checking link statuses...
> Done
> testpmd>
> ```
> 
> Is the problem related to the failure to allocate Tx and Rx DevX UAR?
> How can I fix this?
> 
> 
> I've also tried to set the Bluefield configuration parameters from dpdk
> (https://github.com/DPDK/dpdk/blob/e2a234488854fdeee267a2aa582aa082fce01d6
> e/config/defconfig_arm64-bluefield-linuxapp-gcc)
> as follows:
> ```
> diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
> index 7db450e05..91017dda0 100644
> --- a/build-data/packages/vpp.mk
> +++ b/build-data/packages/vpp.mk
> @@ -32,7 +32,8 @@ vpp_cmake_args += -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON
>   endif
>   ifeq (,$(TARGET_PLATFORM))
>   ifeq ($(MACHINE),aarch64)
> -vpp_cmake_args += -DVPP_LOG2_CACHE_LINE_SIZE=7
> +vpp_cmake_args += -DVPP_LOG2_CACHE_LINE_SIZE=6
>   endif
>   endif
> 
> diff --git a/build/external/packages/dpdk.mk
> b/build/external/packages/dpdk.mk
> index 70ff5c90e..e2a64e67c 100644

Re: [vpp-dev] Issue in VPP v21.06 compilation

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
Yes, when the MLX5 driver is enabled in DPDK with VPP, we use our own rdma-core build.
It looks like something broke when it was built with your toolchain.
Can you share your environment (distro, compiler)?

Best
ben
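
A minimal sketch of the shell commands involved here: capturing the environment
details Ben asks for, and re-running the ext-deps build with the MLX5 switches
from build/external/packages/dpdk.mk quoted earlier in this digest (whether those
variables can be overridden on the make command line like this is an assumption,
not something stated in the thread):

    # environment details (distro, compiler)
    cat /etc/os-release
    gcc --version

    # rebuild the external deps with the MLX5 PMDs enabled (assumed override)
    make install-ext-deps DPDK_MLX5_PMD=y DPDK_MLX5_COMMON_PMD=y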

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Chinmaya
> Aggarwal
> Sent: jeudi 1 juillet 2021 09:56
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Issue in VPP v21.06 compilation
> 
> Hi,
> 
> We were able to find a workaround for NASM installation issue. We deleted
> nasm-2.14.02.tar.xz from /opt/vpp/build/external/downloads/ and executed
> "make install-ext-deps"  again. But, this time we see another issue : -
> 
> 
> [722/1977] Generating rte_bus_vdev_def with a custom command
> [723/1977] Generating rte_bus_vdev_mingw with a custom command
> [724/1977] Generating rte_bus_vdev.pmd.c with a custom command
> [725/1977] Compiling C object 'drivers/a715181@@rte_bus_vdev@sta/meson-
> generated_.._rte_bus_vdev.pmd.c.o'
> [726/1977] Linking static target drivers/librte_bus_vdev.a
> [727/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_common.c.o'
> [728/1977] Compiling C object 'drivers/a715181@@rte_bus_vdev@sha/meson-
> generated_.._rte_bus_vdev.pmd.c.o'
> [729/1977] Generating rte_bus_vdev.sym_chk with a meson_exe.py custom
> command
> [730/1977] Linking target drivers/librte_bus_vdev.so.21.1
> [731/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_channel.c.o'
> [732/1977] Generating symbol file
> 'drivers/a715181@@rte_bus_vdev@sha/librte_bus_vdev.so.21.1.symbols'
> [733/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_bufring.c.o'
> [734/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_common_uio.c.o'
> [735/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_linux_vmbus_bus.c.o'
> [736/1977] Generating rte_bus_vmbus_def with a custom command
> [737/1977] Generating rte_bus_vmbus_mingw with a custom command
> [738/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_linux_vmbus_uio.c.o'
> [739/1977] Linking static target drivers/libtmp_rte_bus_vmbus.a
> [740/1977] Generating rte_bus_vmbus.pmd.c with a custom command
> [741/1977] Compiling C object 'drivers/a715181@@rte_bus_vmbus@sta/meson-
> generated_.._rte_bus_vmbus.pmd.c.o'
> [742/1977] Linking static target drivers/librte_bus_vmbus.a
> [743/1977] Generating rte_common_sfc_efx.sym_chk with a meson_exe.py
> custom command
> [744/1977] Linking target drivers/librte_common_sfc_efx.so.21.1
> [745/1977] Compiling C object 'drivers/a715181@@rte_bus_vmbus@sha/meson-
> generated_.._rte_bus_vmbus.pmd.c.o'
> [746/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_devx_cmds.c.o'
> [747/1977] Generating rte_bus_vmbus.sym_chk with a meson_exe.py custom
> command
> [748/1977] Linking target drivers/librte_bus_vmbus.so.21.1
> [749/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common.c.o'
> [750/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_mp.c.o'
> [751/1977] Generating symbol file
> 'drivers/a715181@@rte_bus_vmbus@sha/librte_bus_vmbus.so.21.1.symbols'
> [752/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_devx.c.o
> '
> [753/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_malloc.c.o'
> [754/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_mr.c.o'
> [755/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_nl.c.o'
> [756/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_pci.c.o'
> [757/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_glue.c.o'
> [758/1977] Generating rte_common_mlx5_def with a custom command
> [759/1977] Generating rte_common_mlx5_mingw with a custom command
> [760/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_common_os
> .c.o'
> [761/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_common_ve
> rbs.c.o'
> [762/1977] Linking static target drivers/libtmp_rte_common_mlx5.a
> [763/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_common.c.o'
> [764/1977] Generating rte_common_mlx5.pmd.c with a custom command
> [765/1977] Compiling C object 'drivers/a715181@@rte_common_mlx5@sta/meson-
> generated_.._rte_common_mlx5.pmd.c.o'
> [766/1977] Linking static target drivers/librte_common_mlx5.a
> [767/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_qp.c.o'
> [768/1977] Compiling C object
> 'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_device.c.o'
> [769/1977] Compiling C object 'drivers/a715181@@rte_common_mlx5@sha/meson-
> 

[vpp-dev] memif problems

2021-07-01 Thread Venumadhav Josyula
Hi All,

Vpp1 ( docker )
---
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
GigabitEthernet0/4/0              1     down         9000/0/0/0
gtpu_tunnel0                      4      up            0/0/0/0
host-vpp2out                      2      up          9000/0/0/0
local0                            0     down          0/0/0/0      drops                      2
memif0/0                          3      up          9000/0/0/0    tx-error                   2
vpp# show node counters
   Count                    Node                  Reason
       2             memif0/0-output             interface is down
vpp# show interface address
GigabitEthernet0/4/0 (dn):
gtpu_tunnel0 (up):
  L3 50.50.50.2/24
host-vpp2out (up):
  L3 22.2.2.2/24
local0 (dn):
memif0/0 (up):
  L3 10.10.2.2/24
vpp#
vpp# show memif
sockets
  id  listenerfilename
  0   no  /run/vpp/memif.sock

interface memif0/0
  socket-id 0 id 0 mode ethernet
  flags admin-up slave zero-copy
  listener-fd 0 conn-fd 0
  num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
vpp#

Vpp2 ( docker )


vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
GigabitEthernet0/6/0              1     down         9000/0/0/0
gtpu_tunnel0                      4      up            0/0/0/0     tx packets               101
                                                                   tx bytes               12120
host-vpp1out                      2      up          9000/0/0/0    rx packets               110
                                                                   rx bytes               10480
                                                                   tx packets                 9
                                                                   tx bytes                 582
                                                                   drops                    103
                                                                   ip4                      104
local0                            0     down          0/0/0/0      drops                      1
memif0/0                          3      up          9000/0/0/0    drops                      6
                                                                   tx-error                   1
vpp# show interface address
GigabitEthernet0/6/0 (dn):
gtpu_tunnel0 (up):
  L3 50.50.50.1/24
host-vpp1out (up):
  L3 11.1.1.2/24
local0 (dn):
memif0/0 (up):
  L3 10.10.2.1/24
vpp# show node counters
   Count                    Node                  Reason
       6                   null-node              blackholed packets
     101                  gtpu4-encap             good packets encapsulated
       5                   arp-reply              ARP replies sent
       1                   ip4-glean              ARP requests sent
     102                 ethernet-input           no error
       1                 memif0/0-output          interface is down
vpp# show memif
sockets
  id  listenerfilename
  0   yes (1) /run/vpp/memif.sock

interface memif0/0
  socket-id 0 id 0 mode ethernet
  flags admin-up
  listener-fd 23 conn-fd 0
  num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
vpp#

I am not sure why the interface is showing as down. Can somebody suggest what
the issue could be?

Thanks,
Regards,
Venu
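
As a point of comparison, a minimal sketch of a matching memif pair (the explicit
create commands are assumptions, since the original configs are not shown; the
addresses reuse the 10.10.2.x/24 ones above, and the socket path must be reachable
from both containers):

    On the master side (vpp2 above):
      create interface memif id 0 master
      set interface state memif0/0 up
      set interface ip address memif0/0 10.10.2.1/24

    On the slave side (vpp1 above):
      create interface memif id 0 slave
      set interface state memif0/0 up
      set interface ip address memif0/0 10.10.2.2/24

Once the slave actually connects, 'show memif' reports a non-zero conn-fd and
ring/region counts; the all-zero values above suggest the memif connection was
never established, which would explain the 'interface is down' tx-errors.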




Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Benoit Ganne (bganne) via lists.fd.io
I think the issue is the way you populate the route:
ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table 4093

As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as meaning you
want to deliver the packet locally instead of forwarding it. Try changing it to:
ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093

But as Neale pointed out, if you only have a few routes in 4093, it would be 
even better to just do:
ip route add 198.19.255.248/29 table 1 via host-Vpp2Host.4093

That would save you 1 additional fib lookup.

Best
ben

> -Original Message-
> From: Mechthild Buescher 
> Sent: jeudi 1 juillet 2021 09:39
> To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: RE: next-hop-table between two FIB tables results in punt and
> 'unknown ip protocol'
> 
> Hi Ben,
> 
> Sorry, I sent the wrong output for the fib table - with the 'next-hop-
> table' configuration it looks as follows:
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[default-route:1, ]
> 0.0.0.0/0 fib:0 index:0 locks:2
>   default-route refs:1 entry-flags:drop, src-
> flags:added,contributing,active,
> path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
>   path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
> [@0]: dpo-drop ip4
> 
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
> [0] [@0]: dpo-drop ip4
> :
> ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel
> ] epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
> 198.19.255.253/32 fib:3 index:170 locks:2
>   adjacency refs:1 entry-flags:attached, src-
> flags:added,contributing,active, cover:53
> path-list:[188] locks:2 uPRF-list:191 len:1 itfs:[14, ]
>   path:[224] pl-index:188 ip4 weight=1 pref=0 attached-nexthop:  oper-
> flags:resolved,
> 198.19.255.253 host-Vpp2Host.4093
>   [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4
> flags:[] a215f39524f302fee89a0ec381000ffd0800
> Extensions:
>  path:224 adj-flags:[refines-cover]
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:171 buckets:1 uRPF:191
> to:[4:384]]
> [0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4
> flags:[] a215f39524f302fee89a0ec381000ffd0800
> ipv4-VRF:1, fib_index:4, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[API:1, CLI:2, adjacency:1, recursive-
> resolution:1, ]
> 198.19.255.248/29 fib:4 index:66 locks:2
>   CLI refs:1 src-flags:added,contributing,active,
> path-list:[75] locks:20 flags:shared, uPRF-list:75 len:0 itfs:[]
>   path:[89] pl-index:75 ip4 weight=1 pref=0 recursive:  oper-
> flags:resolved,
> via 198.19.255.249 in fib:3 via-fib:56 via-dpo:[dpo-load-
> balance:57]
> 
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:67 buckets:1 uRPF:75 to:[0:0]]
> [0] [@11]: dpo-load-balance: [proto:ip4 index:57 buckets:1 uRPF:65
> to:[4:384]]
>   [0] [@2]: dpo-receive: 198.19.255.249 on host-Vpp2Host.4093
> ipv4-VRF:10, fib_index:5, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[CLI:2, ]
> 
> 
> The output below is when we replaced the 'next-hop table' routing with
> direct routing to /32 Ips. That direct routing is working but is regarded
> as work-around as long as we don't find a better solution.
> 
> BR/Mechthild
> 
> -Original Message-
> From: Mechthild Buescher
> Sent: Wednesday, 30 June 2021 18:32
> To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: RE: next-hop-table between two FIB tables results in punt and
> 'unknown ip protocol'
> 
> Hi Ben,
> 
> Thanks for your fast reply. Here is the requested output (I skipped config
> for other interfaces and VLANs)
> 
> vppctl show int addr
> NCIC-1-v1 (up):
> NCIC-1-v1.1 (up):
>   L3 10.10.203.1/29 ip4 table-id 1 fib-idx 4
> host-Vpp2Host (up):
> host-Vpp2Host.4093 (up):
>   L3 198.19.255.249/29 ip4 table-id 4093 fib-idx 3
> local0 (dn):
> 
> and:
> vppctl sh ip fib 198.19.255.253
> pv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[default-route:1, ]
> 0.0.0.0/0 fib:0 index:0 locks:2
>   default-route refs:1 entry-flags:drop, src-
> flags:added,contributing,active,
> path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
>   path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
> [@0]: dpo-drop ip4
> 
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
> [0] [@0]: dpo-drop ip4
> ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel
> ] epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
> 198.19.255.253/32 fib:3 index:189 locks:2
>   adjacency refs:1 entry-flags:attached, src-
> 

Re: [vpp-dev] Issue in VPP v21.06 compilation

2021-07-01 Thread Chinmaya Aggarwal
Hi,

We were able to find a workaround for the NASM installation issue: we deleted
nasm-2.14.02.tar.xz from /opt/vpp/build/external/downloads/ and executed "make
install-ext-deps" again. But this time we see another issue:

[722/1977] Generating rte_bus_vdev_def with a custom command
[723/1977] Generating rte_bus_vdev_mingw with a custom command
[724/1977] Generating rte_bus_vdev.pmd.c with a custom command
[725/1977] Compiling C object 
'drivers/a715181@@rte_bus_vdev@sta/meson-generated_.._rte_bus_vdev.pmd.c.o'
[726/1977] Linking static target drivers/librte_bus_vdev.a
[727/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_common.c.o'
[728/1977] Compiling C object 
'drivers/a715181@@rte_bus_vdev@sha/meson-generated_.._rte_bus_vdev.pmd.c.o'
[729/1977] Generating rte_bus_vdev.sym_chk with a meson_exe.py custom command
[730/1977] Linking target drivers/librte_bus_vdev.so.21.1
[731/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_channel.c.o'
[732/1977] Generating symbol file 
'drivers/a715181@@rte_bus_vdev@sha/librte_bus_vdev.so.21.1.symbols'
[733/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_bufring.c.o'
[734/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_vmbus_common_uio.c.o'
[735/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_linux_vmbus_bus.c.o'
[736/1977] Generating rte_bus_vmbus_def with a custom command
[737/1977] Generating rte_bus_vmbus_mingw with a custom command
[738/1977] Compiling C object 
'drivers/a715181@@tmp_rte_bus_vmbus@sta/bus_vmbus_linux_vmbus_uio.c.o'
[739/1977] Linking static target drivers/libtmp_rte_bus_vmbus.a
[740/1977] Generating rte_bus_vmbus.pmd.c with a custom command
[741/1977] Compiling C object 
'drivers/a715181@@rte_bus_vmbus@sta/meson-generated_.._rte_bus_vmbus.pmd.c.o'
[742/1977] Linking static target drivers/librte_bus_vmbus.a
[743/1977] Generating rte_common_sfc_efx.sym_chk with a meson_exe.py custom 
command
[744/1977] Linking target drivers/librte_common_sfc_efx.so.21.1
[745/1977] Compiling C object 
'drivers/a715181@@rte_bus_vmbus@sha/meson-generated_.._rte_bus_vmbus.pmd.c.o'
[746/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_devx_cmds.c.o'
[747/1977] Generating rte_bus_vmbus.sym_chk with a meson_exe.py custom command
[748/1977] Linking target drivers/librte_bus_vmbus.so.21.1
[749/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common.c.o'
[750/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_mp.c.o'
[751/1977] Generating symbol file 
'drivers/a715181@@rte_bus_vmbus@sha/librte_bus_vmbus.so.21.1.symbols'
[752/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_devx.c.o'
[753/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_malloc.c.o'
[754/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_mr.c.o'
[755/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_nl.c.o'
[756/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_common_pci.c.o'
[757/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_glue.c.o'
[758/1977] Generating rte_common_mlx5_def with a custom command
[759/1977] Generating rte_common_mlx5_mingw with a custom command
[760/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_common_os.c.o'
[761/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_linux_mlx5_common_verbs.c.o'
[762/1977] Linking static target drivers/libtmp_rte_common_mlx5.a
[763/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_common.c.o'
[764/1977] Generating rte_common_mlx5.pmd.c with a custom command
[765/1977] Compiling C object 
'drivers/a715181@@rte_common_mlx5@sta/meson-generated_.._rte_common_mlx5.pmd.c.o'
[766/1977] Linking static target drivers/librte_common_mlx5.a
[767/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_qp.c.o'
[768/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_device.c.o'
[769/1977] Compiling C object 
'drivers/a715181@@rte_common_mlx5@sha/meson-generated_.._rte_common_mlx5.pmd.c.o'
[770/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/common_qat_qat_logs.c.o'
[771/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/crypto_qat_qat_asym_pmd.c.o'
[772/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/compress_qat_qat_comp_pmd.c.o'
[773/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/compress_qat_qat_comp.c.o'
[774/1977] Compiling C object 
'drivers/a715181@@tmp_rte_common_qat@sta/crypto_qat_qat_sym_pmd.c.o'
[775/1977] Compiling C object 

Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-07-01 Thread Mechthild Buescher via lists.fd.io
Hi Ben,

Sorry, I sent the wrong output for the fib table - with the 'next-hop-table' 
configuration it looks as follows:
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[default-route:1, ]
0.0.0.0/0 fib:0 index:0 locks:2
  default-route refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
  path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
[@0]: dpo-drop ip4

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
:
ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.253/32 fib:3 index:170 locks:2
  adjacency refs:1 entry-flags:attached, src-flags:added,contributing,active, 
cover:53
path-list:[188] locks:2 uPRF-list:191 len:1 itfs:[14, ]
  path:[224] pl-index:188 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
198.19.255.253 host-Vpp2Host.4093
  [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] a215f39524f302fee89a0ec381000ffd0800
Extensions:
 path:224 adj-flags:[refines-cover]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:171 buckets:1 uRPF:191 to:[4:384]]
[0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] a215f39524f302fee89a0ec381000ffd0800
ipv4-VRF:1, fib_index:4, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[API:1, CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.248/29 fib:4 index:66 locks:2
  CLI refs:1 src-flags:added,contributing,active,
path-list:[75] locks:20 flags:shared, uPRF-list:75 len:0 itfs:[]
  path:[89] pl-index:75 ip4 weight=1 pref=0 recursive:  oper-flags:resolved,
via 198.19.255.249 in fib:3 via-fib:56 via-dpo:[dpo-load-balance:57]

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:67 buckets:1 uRPF:75 to:[0:0]]
[0] [@11]: dpo-load-balance: [proto:ip4 index:57 buckets:1 uRPF:65 
to:[4:384]]
  [0] [@2]: dpo-receive: 198.19.255.249 on host-Vpp2Host.4093
ipv4-VRF:10, fib_index:5, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, ]


The output below is from when we replaced the 'next-hop-table' routing with direct
routing to /32 IPs. That direct routing works, but we regard it as a work-around
until we find a better solution.
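
A sketch of what the /32 work-around presumably looks like, inferred from the
198.19.255.253/32 entry in VRF 1 in the dump below (the exact command used is
not given in the thread):

    ip route add 198.19.255.253/32 table 1 via host-Vpp2Host.4093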

BR/Mechthild

-Original Message-
From: Mechthild Buescher 
Sent: Wednesday, 30 June 2021 18:32
To: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
Subject: RE: next-hop-table between two FIB tables results in punt and 'unknown 
ip protocol'

Hi Ben,

Thanks for your fast reply. Here is the requested output (I skipped config for 
other interfaces and VLANs)

vppctl show int addr
NCIC-1-v1 (up):
NCIC-1-v1.1 (up):
  L3 10.10.203.1/29 ip4 table-id 1 fib-idx 4
host-Vpp2Host (up):
host-Vpp2Host.4093 (up):
  L3 198.19.255.249/29 ip4 table-id 4093 fib-idx 3
local0 (dn):

and:
vppctl sh ip fib 198.19.255.253
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
epoch:0 flags:none locks:[default-route:1, ]
0.0.0.0/0 fib:0 index:0 locks:2
  default-route refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
  path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
[@0]: dpo-drop ip4

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.253/32 fib:3 index:189 locks:2
  adjacency refs:1 entry-flags:attached, src-flags:added,contributing,active, 
cover:53
path-list:[206] locks:2 uPRF-list:210 len:1 itfs:[14, ]
  path:[241] pl-index:206 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
198.19.255.253 host-Vpp2Host.4093
  [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800
Extensions:
 path:241 adj-flags:[refines-cover]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:190 buckets:1 uRPF:210 to:[2:192] 
via:[6:504]]
[0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800
ipv4-VRF:1, fib_index:4, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[API:1, CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.253/32 fib:4 index:190 locks:2
  CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
path-list:[207] locks:2 flags:shared, uPRF-list:211 len:1 itfs:[14, ]
  path:[243] pl-index:207 ip4 weight=1