Re: [vpp-dev] vpp refuses to use of see/use dpdk interfaces

2022-07-08 Thread Rupesh Raghuvaran
The device will still be available in Linux; it is managed via ib_uverbs
by dpdk. That is why the ib_uverbs, mlx5_core, and mlx5_ib modules are
required.
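A quick way to check that those modules are actually loaded (a sketch using standard Linux tooling; the module names are the ones listed above, and `/proc/modules` is assumed to be available):

```shell
# Report the load state of each module the DPDK mlx5 path needs.
for m in ib_uverbs mlx5_core mlx5_ib; do
  if grep -q "^$m " /proc/modules 2>/dev/null; then
    echo "$m loaded"
  else
    echo "$m missing"   # load with: sudo modprobe $m
  fi
done
```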

-Rupesh

On Fri, Jul 8, 2022 at 9:37 PM Dave Houser  wrote:

> OK, I am not sure which combination got everything to work, but the
> interfaces are showing up now in vppctl!
>
> The rdma commands work as well.
>
> I think it was a combination of the following:
>
>1. git clone the repo
>2. edit build/external/packages/dpdk.mk,
>   1. turn the following to "y"
>   (DPDK_MLX4_PMD, DPDK_MLX5_PMD, DPDK_MLX5_COMMON_PMD)
>   2. Set DPDK_MLX_IBV_LINK to static
>3. make wipe-release; make build-release
>4. make pkg-deb
>5. sudo dpkg -i build-root/*.deb
>6. Create conf file without comments (optional)
>   1. sudo cp /etc/vpp/startup.conf /etc/vpp/startup.conf.original;
>   grep -v "#" /etc/vpp/startup.conf.original | awk 'NF' > /etc/vpp/startup.conf
>   2. edit the file and add your dpdk interfaces; make sure to add the
>   dpdk section
>7. restart vpp
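Step 6 above can be sketched as a short script. This is a hedged sketch: the filenames are local stand-ins for /etc/vpp/startup.conf, the demo content is invented, and a single `grep -v '#'` replaces the two greps in the step above (the second already subsumed the first). The key point is to filter the backup copy, since redirecting the pipeline's output into the same file it reads would truncate that file before anything is read.

```shell
# Stand-in for /etc/vpp/startup.conf; run the real thing with sudo.
conf=startup.conf
printf '# comment line\nunix {\n  nodaemon\n}\n' > "$conf"   # demo content
cp "$conf" "$conf.original"                 # keep a backup (step 6.1)
# Drop comment lines and blank lines, reading from the backup copy.
grep -v '#' "$conf.original" | awk 'NF' > "$conf"
cat "$conf"
```

For the real files, run the cp/grep lines with sudo against /etc/vpp/startup.conf.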
>
> I suppose the only question I have is: how is it possible that the dpdk
> interfaces are showing up in vpp?
>
> I ask this because the interfaces I configured in startup.conf are still
> linked to the kernel and up, yet they still show up in vpp as interfaces.
> If I deactivate the interfaces, unbind them from the kernel, bind them to
> dpdk, and then restart vpp, they don't show up. Is this expected?
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21637): https://lists.fd.io/g/vpp-dev/message/21637
Mute This Topic: https://lists.fd.io/mt/92231790/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Fib entries as per show ip fib for prefix has forwarding UNRESOLVED though packet is forwarded.

2021-02-04 Thread Rupesh Raghuvaran
Hi Neale,

I will attempt this on the latest.

Yes, flushing the ARP cache is one option, as that seems to help. I find
that a per-interface arp flush requires a walk through all the arp entries
to identify those that are on the interface in question. Not sure whether
I am missing something better.

Thanks a lot for your help and valuable feedback.

Thanks
Rupesh

On Thu, Feb 4, 2021 at 6:01 PM Neale Ranns  wrote:

>
>
> Hi Rupesh,
>
>
>
> 19.08 is no longer supported, please upgrade.
>
>
>
> The down side of flushing the ARP cache on link down is that the ARP
> cache gets flushed 😉 you have to rebuild it when the link comes back
> up, and this takes time. Otherwise I see no issue with doing so.
>
>
>
> The adj walk is there to remove that adj/path from the ECMP set when the
> interface goes down, it does not conflict with flushing the ARP cache,
> which will make the adj incomplete.
>
>
>
> /neale
>
>
>
>
>
> *From: *Rupesh Raghuvaran 
> *Date: *Thursday, 4 February 2021 at 12:58
> *To: *Neale Ranns 
> *Cc: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
> Hi Neale,
>
>
>
> This is version 19.08.
>
>
>
> Meanwhile, based on what I understood from the discussion so far, I
> experimented with adding a link-down function that does an arp flush on the
> respective interface, which resulted in arp delete and subsequent adj nbr
> removal. This seems to provide the desired result of the fib entry getting
> marked resolved. Note that currently an arp flush is only done on the admin
> state change in ethernet/arp.c. Do you see any issues or potential downside
> with this arp flush being done on link down?
>
> There is also a backwalk associated with interface link up/down
> via adj_nbr_interface_state_change_one. I am not sure what is expected
> from that walk, and whether the above arp-flush handling has any
> conflicts with it?
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Thu, Feb 4, 2021 at 3:50 PM Neale Ranns  wrote:
>
>
>
> What VPP version is this?
>
>
>
> /neale
>
>
>
> *From: *Rupesh Raghuvaran 
> *Date: *Wednesday, 3 February 2021 at 17:39
> *To: *Neale Ranns 
> *Cc: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
> Hi Neale,
>
>
>
> Looking at the show ip arp we see that the arp entries still remain the
> same even after the link is down.
>
>
>
> show ip arp
> Time   IP4   Flags  Ethernet  Interface
>   1.9954   10.0.0.14    D   de:ad:de:ad:00:04  GigabitEthernet0/3/0
>   1.9633   10.0.0.15    D   de:ad:de:ad:00:05  GigabitEthernet0/4/0
>   2.8591   10.0.1.132   S   00:65:0d:98:00:00  loop10
>   2.8611   10.0.2.132   S   00:65:0e:bf:00:00  loop10
>   2.8626   12.0.1.2     S   00:65:0c:ca:00:00  loop10
> Proxy arps enabled for:
> Fib_index 0   0.0.0.0 - 255.255.255.255
>
> 
>
> show hardware GigabitEthernet0/4/0
>   NameIdx   Link  Hardware
> GigabitEthernet0/4/0   3     down  GigabitEthernet0/4/0
>   Link speed: 10 Gbps
>   Ethernet address de:ad:de:ad:00:01
>   Red Hat Virtio
> carrier down
> flags: admin-up pmd maybe-multiseg
>
> 
> 
>
> show hardware GigabitEthernet0/3/0
>   NameIdx   Link  Hardware
> GigabitEthernet0/3/0   2 up   GigabitEthernet0/3/0
>   Link speed: 10 Gbps
>   Ethernet address de:ad:de:ad:00:01
>   Red Hat Virtio
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg
> rx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
> tx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
>  
>
>
>
> Regarding the dropped ARP responses: my understanding is that an ARP
> reply is acceptable if we have a fib source, added via the api/cli on the
> interface, that enables a successful lookup of the arp reply's src ip
> address. That essentially says such a specific arp reply is valid as per
> the configuration; I am not sure of any harm with such a configuration,
> as it is correct as per the topology.
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Wed, Feb 3, 2021 at 7:15 PM Neale Ranns  wrote:
>
>
>
> Hi Rupesh,
>
>
>
> Dropping those ARP responses is a clue that you’re not doing something
> right 😉
>
>
>
> I would expect the ARP entry, adj and adj-source on the fib entry to be
> removed (in that order) when the link goes down. ‘sh ip neighbou

Re: [vpp-dev] Fib entries as per show ip fib for prefix has forwarding UNRESOLVED though packet is forwarded.

2021-02-04 Thread Rupesh Raghuvaran
Hi Neale,

This is version 19.08.

Meanwhile, based on what I understood from the discussion so far, I
experimented with adding a link-down function that does an arp flush on the
respective interface, which resulted in arp delete and subsequent adj nbr
removal. This seems to provide the desired result of the fib entry getting
marked resolved. Note that currently an arp flush is only done on the admin
state change in ethernet/arp.c. Do you see any issues or potential downside
with this arp flush being done on link down?
There is also a backwalk associated with interface link up/down
via adj_nbr_interface_state_change_one. I am not sure what is expected
from that walk, and whether the above arp-flush handling has any
conflicts with it?

Thanks
Rupesh

On Thu, Feb 4, 2021 at 3:50 PM Neale Ranns  wrote:

>
>
> What VPP version is this?
>
>
>
> /neale
>
>
>
> *From: *Rupesh Raghuvaran 
> *Date: *Wednesday, 3 February 2021 at 17:39
> *To: *Neale Ranns 
> *Cc: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
> Hi Neale,
>
>
>
> Looking at the show ip arp we see that the arp entries still remain the
> same even after the link is down.
>
>
>
> show ip arp
> Time   IP4   Flags  Ethernet  Interface
>   1.9954   10.0.0.14    D   de:ad:de:ad:00:04  GigabitEthernet0/3/0
>   1.9633   10.0.0.15    D   de:ad:de:ad:00:05  GigabitEthernet0/4/0
>   2.8591   10.0.1.132   S   00:65:0d:98:00:00  loop10
>   2.8611   10.0.2.132   S   00:65:0e:bf:00:00  loop10
>   2.8626   12.0.1.2     S   00:65:0c:ca:00:00  loop10
> Proxy arps enabled for:
> Fib_index 0   0.0.0.0 - 255.255.255.255
>
> 
>
> show hardware GigabitEthernet0/4/0
>   NameIdx   Link  Hardware
> GigabitEthernet0/4/0   3     down  GigabitEthernet0/4/0
>   Link speed: 10 Gbps
>   Ethernet address de:ad:de:ad:00:01
>   Red Hat Virtio
> carrier down
> flags: admin-up pmd maybe-multiseg
>
> 
> 
>
> show hardware GigabitEthernet0/3/0
>   NameIdx   Link  Hardware
> GigabitEthernet0/3/0   2 up   GigabitEthernet0/3/0
>   Link speed: 10 Gbps
>   Ethernet address de:ad:de:ad:00:01
>   Red Hat Virtio
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg
> rx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
> tx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
>  
>
>
>
> Regarding the dropped ARP responses: my understanding is that an ARP
> reply is acceptable if we have a fib source, added via the api/cli on the
> interface, that enables a successful lookup of the arp reply's src ip
> address. That essentially says such a specific arp reply is valid as per
> the configuration; I am not sure of any harm with such a configuration,
> as it is correct as per the topology.
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Wed, Feb 3, 2021 at 7:15 PM Neale Ranns  wrote:
>
>
>
> Hi Rupesh,
>
>
>
> Dropping those ARP responses is a clue that you’re not doing something
> right 😉
>
>
>
> I would expect the ARP entry, adj and adj-source on the fib entry to be
> removed (in that order) when the link goes down. ‘sh ip neighbours’ please.
>
>
>
>
> https://github.com/FDio/vpp/blob/master/docs/gettingstarted/developers/fib20/routes.rst#adjacency-source-fib-entries
>
>
>
> /neale
>
>
>
>
>
> *From: *Rupesh Raghuvaran 
> *Date: *Wednesday, 3 February 2021 at 12:03
> *To: *Neale Ranns 
> *Cc: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
> Hi Neale,
>
>
>
> Thanks for your reply.
>
>
>
> I am adding a default route via 10.0.0.15 dev Ge0/4/0, a subnet route
> 10.0.0.0/24 via 10.0.0.15 dev Ge0/4/0, and a host route to 10.0.0.15/32
> on dev Ge0/4/0. No explicit arp entry is added. Without the specific
> 10.0.0.15/32 route on dev Ge0/4/0, the arp ipv4 response for 10.0.0.15
> gets dropped with the reason "src subnet not local to interface".
>
> On link down of Ge0/4/0, all the routes specified above get deleted. On
> the other link, Ge0/3/0, we have similar routes: a default route and
> subnet route via 10.0.0.14 and a host route to 10.0.0.14/32 on dev
> Ge0/3/0.
>
>
>
> As per the fib log, the src:API path gets removed but the src:adjacency
> one remains. Even though the interface is down, the adjacency source
> entry remains, and the cover seems to get updated to 7, which is on a
> different interface

Re: [vpp-dev] Fib entries as per show ip fib for prefix has forwarding UNRESOLVED though packet is forwarded.

2021-02-03 Thread Rupesh Raghuvaran
Hi Neale,

Looking at the show ip arp we see that the arp entries still remain the
same even after the link is down.

show ip arp
    Time      IP4          Flags  Ethernet           Interface
    1.9954    10.0.0.14    D      de:ad:de:ad:00:04  GigabitEthernet0/3/0
    1.9633    10.0.0.15    D      de:ad:de:ad:00:05  GigabitEthernet0/4/0
    2.8591    10.0.1.132   S      00:65:0d:98:00:00  loop10
    2.8611    10.0.2.132   S      00:65:0e:bf:00:00  loop10
    2.8626    12.0.1.2     S      00:65:0c:ca:00:00  loop10
Proxy arps enabled for:
Fib_index 0   0.0.0.0 - 255.255.255.255

show hardware GigabitEthernet0/4/0
  Name                 Idx   Link  Hardware
GigabitEthernet0/4/0   3     down  GigabitEthernet0/4/0
  Link speed: 10 Gbps
  Ethernet address de:ad:de:ad:00:01
  Red Hat Virtio
    carrier down
    flags: admin-up pmd maybe-multiseg

show hardware GigabitEthernet0/3/0
  Name                 Idx   Link  Hardware
GigabitEthernet0/3/0   2     up    GigabitEthernet0/3/0
  Link speed: 10 Gbps
  Ethernet address de:ad:de:ad:00:01
  Red Hat Virtio
    carrier up full duplex mtu 9206
    flags: admin-up pmd maybe-multiseg
    rx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
    tx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
 


Regarding the dropped ARP responses: my understanding is that an ARP
reply is acceptable if we have a fib source, added via the api/cli on the
interface, that enables a successful lookup of the arp reply's src ip
address. That essentially says such a specific arp reply is valid as per
the configuration; I am not sure of any harm with such a configuration,
as it is correct as per the topology.

Thanks
Rupesh

On Wed, Feb 3, 2021 at 7:15 PM Neale Ranns  wrote:

>
>
> Hi Rupesh,
>
>
>
> Dropping those ARP responses is a clue that you’re not doing something
> right 😉
>
>
>
> I would expect the ARP entry, adj and adj-source on the fib entry to be
> removed (in that order) when the link goes down. ‘sh ip neighbours’ please.
>
>
>
>
> https://github.com/FDio/vpp/blob/master/docs/gettingstarted/developers/fib20/routes.rst#adjacency-source-fib-entries
>
>
>
> /neale
>
>
>
>
>
> *From: *Rupesh Raghuvaran 
> *Date: *Wednesday, 3 February 2021 at 12:03
> *To: *Neale Ranns 
> *Cc: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
> Hi Neale,
>
>
>
> Thanks for your reply.
>
>
>
> I am adding a default route via 10.0.0.15 dev Ge0/4/0, a subnet route
> 10.0.0.0/24 via 10.0.0.15 dev Ge0/4/0, and a host route to 10.0.0.15/32
> on dev Ge0/4/0. No explicit arp entry is added. Without the specific
> 10.0.0.15/32 route on dev Ge0/4/0, the arp ipv4 response for 10.0.0.15
> gets dropped with the reason "src subnet not local to interface".
>
> On link down of Ge0/4/0, all the routes specified above get deleted. On
> the other link, Ge0/3/0, we have similar routes: a default route and
> subnet route via 10.0.0.14 and a host route to 10.0.0.14/32 on dev
> Ge0/3/0.
>
>
>
> As per the fib log, the src:API path gets removed but the src:adjacency
> one remains. Even though the interface is down, the adjacency source
> entry remains, and the cover seems to get updated to 7, which is on a
> different interface, i.e. Ge0/3/0, while the adjacency is still on Ge0/4/0.
>
>
>
> show ip fib 10.0.0.15
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ]
> locks:[src:plugin-hi:2, src:DHCP:7, src:adjacency:5, src:default-route:1, ]
> 10.0.0.15/32 fib:0 index:8 locks:2
>   src:adjacency refs:1 entry-flags:attached,
> src-flags:added,contributing,active, cover:7
> path-list:[17] locks:2 uPRF-list:7 len:1 itfs:[3, ]
>   path:[18] pl-index:17 ip4 weight=1 pref=0 attached-nexthop:
> 10.0.0.15 GigabitEthernet0/4/0
>   [@0]: ipv4 via 10.0.0.15 GigabitEthernet0/4/0: mtu:9000
> deaddead0005deaddead00010800
> Extensions:
>  path:18
>  forwarding:   UNRESOLVED
>
>
>
> and cover fib entry 7 specified above is the following.
>
> 7@10.0.0.0/24
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:36
> to:[1770:254632]]
> [0] [@5]: ipv4 via 10.0.0.14 GigabitEthernet0/3/0: mtu:9000
> deaddead0004deaddead00010800
>
>
>
>
>
> When the interface is down, is the corresponding fib entry sourced by the
> adjacency expected to be deleted? Could you also explain a bit about the
> adjacency source refinement?
>
>
>
> Thanks
>
> Rupesh
>
>
>
>
>
>
>
>
>
> On Wed, Feb 3, 2021 at 2:40 PM Neale Ranns  wrote:
>
> Hi Rupesh,
>
>
>
> 10.0.0.15 remains

Re: [vpp-dev] Fib entries as per show ip fib for prefix has forwarding UNRESOLVED though packet is forwarded.

2021-02-03 Thread Rupesh Raghuvaran
Hi Neale,

Thanks for your reply.

I am adding a default route via 10.0.0.15 dev Ge0/4/0, a subnet route
10.0.0.0/24 via 10.0.0.15 dev Ge0/4/0, and a host route to 10.0.0.15/32
on dev Ge0/4/0. No explicit arp entry is added. Without the specific
10.0.0.15/32 route on dev Ge0/4/0, the arp ipv4 response for 10.0.0.15
gets dropped with the reason "src subnet not local to interface". On link
down of Ge0/4/0, all the routes specified above get deleted. On the other
link, Ge0/3/0, we have similar routes: a default route and subnet route
via 10.0.0.14 and a host route to 10.0.0.14/32 on dev Ge0/3/0.
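For reference, the route set described above would look roughly like this in vppctl (the syntax follows the current `ip route add` CLI and may differ slightly in 19.08; interface names are the ones from this thread):

```
ip route add 0.0.0.0/0 via 10.0.0.15 GigabitEthernet0/4/0
ip route add 10.0.0.0/24 via 10.0.0.15 GigabitEthernet0/4/0
ip route add 10.0.0.15/32 via GigabitEthernet0/4/0
ip route add 0.0.0.0/0 via 10.0.0.14 GigabitEthernet0/3/0
ip route add 10.0.0.0/24 via 10.0.0.14 GigabitEthernet0/3/0
ip route add 10.0.0.14/32 via GigabitEthernet0/3/0
```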

As per the fib log, the src:API path gets removed but the src:adjacency
one remains. Even though the interface is down, the adjacency source
entry remains, and the cover seems to get updated to 7, which is on a
different interface, i.e. Ge0/3/0, while the adjacency is still on Ge0/4/0.

show ip fib 10.0.0.15
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ]
locks:[src:plugin-hi:2, src:DHCP:7, src:adjacency:5, src:default-route:1, ]
10.0.0.15/32 fib:0 index:8 locks:2
  src:adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:7
path-list:[17] locks:2 uPRF-list:7 len:1 itfs:[3, ]
  path:[18] pl-index:17 ip4 weight=1 pref=0 attached-nexthop:
10.0.0.15 GigabitEthernet0/4/0
  [@0]: ipv4 via 10.0.0.15 GigabitEthernet0/4/0: mtu:9000
deaddead0005deaddead00010800
Extensions:
 path:18
 forwarding:   UNRESOLVED

and cover fib entry 7 specified above is the following.
7@10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:36
to:[1770:254632]]
[0] [@5]: ipv4 via 10.0.0.14 GigabitEthernet0/3/0: mtu:9000
deaddead0004deaddead00010800


When the interface is down, is the corresponding fib entry sourced by the
adjacency expected to be deleted? Could you also explain a bit about the
adjacency source refinement?

Thanks
Rupesh




On Wed, Feb 3, 2021 at 2:40 PM Neale Ranns  wrote:

> Hi Rupesh,
>
>
>
> 10.0.0.15 remains unresolved after link down because there remains an
> adjacency/ARP-entry for it  on Ge0/4/0 – did you add a static one? It is
> unresolved because it fails the adjacency source refinement criteria.
> Packets to 10.0.0.15 are forwarded using the default route. This is
> expected behaviour.
>
>
>
> Your use of unnumbered is unconventional. You’d get a better experience if
> you used standard IP addressing. For example, add separate /31s on your
> gigEs, then another /32 on the loopbacks. Add routes to the peer’s
> loopbacks via your control plane agent. The tunnel endpoints should be the
> loopback addresses.
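Sketched in vppctl, the suggested conventional addressing might look like this (all addresses below are illustrative, not from the thread, and the syntax follows current VPP CLI documentation):

```
set interface ip address GigabitEthernet0/3/0 10.0.1.0/31
set interface ip address GigabitEthernet0/4/0 10.0.1.2/31
set interface ip address loop0 10.0.0.16/32
ip route add 10.0.0.14/32 via 10.0.1.1 GigabitEthernet0/3/0
ip route add 10.0.0.15/32 via 10.0.1.3 GigabitEthernet0/4/0
```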
>
>
>
> /neale
>
>
>
>
>
> *From: *vpp-dev@lists.fd.io  on behalf of Rupesh
> Raghuvaran via lists.fd.io 
> *Date: *Tuesday, 2 February 2021 at 17:08
> *To: *vpp-dev@lists.fd.io 
> *Subject: *[vpp-dev] Fib entries as per show ip fib for prefix has
> forwarding UNRESOLVED though packet is forwarded.
>
>
>
> VPP is configured as a router (R0) that connects to two routers,
> referred to as R1 and R2; there is also a direct link between R1 and R2.
>
>
>
>   R0 has a loopback interface loop0 configured with 10.0.0.16/32 and
> 10.0.0.18/32; interfaces Ge0/3/0 and Ge0/4/0 are unnumbered to loop0
>
>   R0 Ge0/3/0 connects to R1
>
>   R0 Ge0/4/0 connects to R2
>
>
>
>   R1 has a loopback interface with 10.0.0.14/32 and the R1-R0 interface
> is unnumbered to that interface.
>
>   R2 has a loopback interface with 10.0.0.15/32 and the R2-R0 interface
> is unnumbered to that interface.
>
>
>
>
>
> R0 is configured with following routes
>
>
>
>   0.0.0.0/0 via 10.0.0.14
>
>   0.0.0.0/0 via 10.0.0.15
>
>   10.0.0.0/24 via 10.0.0.14
>
>   10.0.0.0/24 via 10.0.0.15
>
>   10.0.0.14/32 via Ge0/3/0
>
>   10.0.0.15/32 via Ge0/4/0
>
>
>
>   with this configuration I am able to ping 10.0.0.15 and 10.0.0.14 on
> respective link.
>
>
>
> show ip fib 10.0.0.15
>
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ]
> locks:[src:plugin-hi:2, src:DHCP:7, src:adjacency:5, src:default-route:1, ]
>
> 10.0.0.15/32 fib:0 index:8 locks:3
>
>   src:API refs:1 entry-flags:attached, src-flags:added,contributing,active,
>
> path-list:[12] locks:2 flags:shared, uPRF-list:14 len:1 itfs:[3, ]
>
>   path:[12] pl-index:12 ip4 weight=1 pref=0 attached-nexthop:
> oper-flags:resolved, cfg-flags:attached,
>
> 10.0.0.15 GigabitEthernet0/4/0
>
>   [@0]: ipv4 via 10.0.0.15 GigabitEthernet0/4/0: mtu:9000
> deaddead0005deaddead00010800
>
>
>
>   src:adjacency refs:1 entry-flags:attached, src-flags:added, cover:-1
>
> path-list:[17] locks:1 uPRF-list:7 len:1 itfs:[3, ]
>
>   path:[18] pl-in

[vpp-dev] Envoy transport socket support for vpp.

2019-08-05 Thread Rupesh Raghuvaran
Hi,

I would like to know the current state of support for Envoy over the vpp host
stack. Is there any open-source transport socket support for vpp available for
Envoy?

Thanks
Rupesh
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13661): https://lists.fd.io/g/vpp-dev/message/13661
Mute This Topic: https://lists.fd.io/mt/32724370/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] tap interface up crashes

2019-03-26 Thread Rupesh Raghuvaran
Jon,

There was a patch Neale provided for a similar issue I faced earlier.
Please see https://lists.fd.io/g/vpp-dev/message/12325

 Try out the patch https://gerrit.fd.io/r/#/c/1/

Thanks
Rupesh
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12640): https://lists.fd.io/g/vpp-dev/message/12640
Mute This Topic: https://lists.fd.io/mt/30722622/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on interface with ipv6 not enabled explicilty

2019-02-22 Thread Rupesh Raghuvaran
Hi,

Thanks for the patch. Please also help in understanding the use of
ip6_input and the vnet_feature_arc_start function.

In the ip6_input function, "next0" and "next1" are set to
LOOKUP_MULTICAST/LOOKUP before vnet_feature_arc_start is invoked. When the
interface features are not initialized appropriately, as in the case of
this bug, "vnet_have_features" returns 0 and next0/next1 do NOT get
updated, so they remain LOOKUP_MULTICAST/LOOKUP and the packet is
processed. Is that correct? Should next0 and next1 be set to
IP6_INPUT_NEXT_DROP before the call, so that the arc0 and arc1 values lead
to the appropriate update of next0 and next1 within vnet_feature_arc_start
when the interface has features, and the packet is otherwise dropped as an
error? That way we would not let buffers be handled by code with
uninitialized data structures and tables, which leads to the crash.

Please see the relevant lines of the ip6_input function below.

  sw_if_index0 = vnet_buffer (p0)->sw_if_index[VLIB_RX];
  sw_if_index1 = vnet_buffer (p1)->sw_if_index[VLIB_RX];

  if (PREDICT_FALSE (ip6_address_is_multicast (&ip0->dst_address)))
{
  arc0 = lm->mcast_feature_arc_index;
  next0 = IP6_INPUT_NEXT_LOOKUP_MULTICAST;
}
  else
{
  arc0 = lm->ucast_feature_arc_index;
  next0 = IP6_INPUT_NEXT_LOOKUP;
}

  if (PREDICT_FALSE (ip6_address_is_multicast (&ip1->dst_address)))
{
  arc1 = lm->mcast_feature_arc_index;
  next1 = IP6_INPUT_NEXT_LOOKUP_MULTICAST;
}
  else
{
  arc1 = lm->ucast_feature_arc_index;
  next1 = IP6_INPUT_NEXT_LOOKUP;
}

  vnet_buffer (p0)->ip.adj_index[VLIB_RX] = ~0;
  vnet_buffer (p1)->ip.adj_index[VLIB_RX] = ~0;

  vnet_feature_arc_start (arc0, sw_if_index0, &next0, p0);
  vnet_feature_arc_start (arc1, sw_if_index1, &next1, p1);


Thanks
Rupesh

On Fri, Feb 22, 2019 at 6:13 PM Neale Ranns (nranns) 
wrote:

>
>
> Hi Rupesh,
>
>
>
> Thank you for spending the time to investigate. I have pushed a patch to
> fix the issue, and another found, after removing the cast that was masking
> the problem.
>
>   https://gerrit.fd.io/r/#/c/1/
>
>
>
> Regards,
>
> neale
>
>
>
> *De : *Rupesh Raghuvaran 
> *Date : *vendredi 22 février 2019 à 13:13
> *À : *"Neale Ranns (nranns)" 
> *Cc : *"vpp-dev@lists.fd.io" 
> *Objet : *Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
>
>
>
> Hi Neale,
>
>
>
> Thanks for  all the help.
>
>
>
> I was able to root-cause the issue happening with my private build.
> There was an issue with one of the sw_interface_add_del functions invoked
> via call_elf_section_interface_callbacks during interface create:
> vnet_arp_delete_sw_interface is defined with a void return type, but
> call_elf_section_interface_callbacks expects the callbacks to return a
> clib_error_t pointer. For some reason the execution fails due to a
> non-zero error value with the gcc 7.3 build produced by the yocto
> framework. After fixing this code I was able to get the features enabled
> as required. Even though most of the callbacks have no error paths and
> return 0, it would be good to have some logging within
> call_elf_section_interface_callbacks to take note of any failure due to
> error.
>
>
>
> Thanks
>
> Rupesh
>
>
>
> On Thu, Feb 21, 2019 at 9:45 PM Neale Ranns (nranns) 
> wrote:
>
>
>
> Hi Rupesh,
>
>
>
> Those feature are enabled by default on interfaces that are newly created.
>
>
>
> DBGvpp# loop cre
>
> loop0
>
> DBGvpp# set int state loop0 up
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> 

>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> DBGvpp# create bridge 1
>
> DBGvpp# set int l2 bridge loop0 1 bvi
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
>
>
> Add and remove an address:
>
>
>
> DBGvpp# set int ip address loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> 

>
> ip6-multicast:
>
>
>
> ip6-unicast:
>
>
>
> DBGvpp# set int ip address del loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> F

Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on interface with ipv6 not enabled explicilty

2019-02-22 Thread Rupesh Raghuvaran
Hi Neale,

Thanks for  all the help.

I was able to root-cause the issue happening with my private build.
There was an issue with one of the sw_interface_add_del functions invoked
via call_elf_section_interface_callbacks during interface create:
vnet_arp_delete_sw_interface is defined with a void return type, but
call_elf_section_interface_callbacks expects the callbacks to return a
clib_error_t pointer. For some reason the execution fails due to a
non-zero error value with the gcc 7.3 build produced by the yocto
framework. After fixing this code I was able to get the features enabled
as required. Even though most of the callbacks have no error paths and
return 0, it would be good to have some logging within
call_elf_section_interface_callbacks to take note of any failure due to
error.

Thanks
Rupesh

On Thu, Feb 21, 2019 at 9:45 PM Neale Ranns (nranns) 
wrote:

>
>
> Hi Rupesh,
>
>
>
> Those feature are enabled by default on interfaces that are newly created.
>
>
>
> DBGvpp# loop cre
>
> loop0
>
> DBGvpp# set int state loop0 up
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> 

>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> DBGvpp# create bridge 1
>
> DBGvpp# set int l2 bridge loop0 1 bvi
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
>
>
> Add and remove an address:
>
>
>
> DBGvpp# set int ip address loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> 

>
> ip6-multicast:
>
>
>
> ip6-unicast:
>
>
>
> DBGvpp# set int ip address del loop0 2001::1/64
>
> DBGvpp# sh int feat loop0
>
> Feature paths configured on loop0...
>
> 

>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
>
>
> regards,
>
> neale
>
>
>
>
>
> *De : *Rupesh Raghuvaran 
> *Date : *jeudi 21 février 2019 à 16:23
> *À : *"Neale Ranns (nranns)" 
> *Cc : *"vpp-dev@lists.fd.io" 
> *Objet : *Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
>
>
>
> Hi Neale,
>
> I do not see the interface get this ip6-not-enabled feature on creation;
> show interface feat shows the ip4/ip6 unicast/multicast arcs as "none
> configured" when the interface is created.
> Is this the default state with which the interface is expected to come up?
> How does one bring a created loopback into the desired ip6-not-enabled
> feature state for the ip6-unicast/ip6-multicast arcs?
> From the cli, using "set interface feature 
>  arc ", I was able to bring the interface to that
> specific state, but I could not find the api doing the same.
>
> Thanks
> Rupesh
>
>
>
> On Wed, Feb 20, 2019 at 12:02 AM Neale Ranns (nranns) 
> wrote:
>
>
>
> Hi Rupseh,
>
>
>
> Interfaces that are not ip6 enabled show these features enabled:
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> it’s the ip6-not-enabled node/feature that is enabled on the interface as
> an input feature that drops the packets.
>
>
>
> /neale
>
>
>
> *De : * au nom de Rupesh Raghuvaran <
> rupesh.raghuva...@gmail.com>
> *Date : *mardi 19 février 2019 à 18:06
> *À : *"vpp-dev@lists.fd.io" 
> *Objet : *[vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
>
>
>
> Missed to do a reply all, adding vpp-dev back
>
>
>
> Thanks
>
> Rupesh
>
> -- Forwarded message -
> From: *Rupesh Raghuvaran* 
> Date: Tue, Feb 19, 2019 at 10:02 PM
> Subject: Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
> To: Neale Ranns (nranns) 
>
>
>
> Hi Neale,
>
>
>
> I could not spot the specific code in ip6-input which drops the packet if
> the rx interface is not ipv6 enabled. Could you please point that to me.
>
>
>
>
>
> Please find the requested information below
>
>
>
> DBGvpp# show interface loop1
>
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS)
>  Counter  Count
>
> loop1 5  up  9000/0/0/0 rx
> packets 

Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on interface with ipv6 not enabled explicilty

2019-02-21 Thread Rupesh Raghuvaran
Hi Neale,

I do not see the interface get this ip6-not-enabled feature on creation;
show interface feat shows the ip4/ip6 unicast/multicast arcs as "none
configured" when the interface is created.
Is this the default state with which the interface is expected to come up?
How does one bring a created loopback into the desired ip6-not-enabled
feature state for the ip6-unicast/ip6-multicast arcs?
From the cli, using "set interface feature 
 arc ", I was able to bring the interface to that
specific state, but I could not find the api doing the same.

Thanks
Rupesh

On Wed, Feb 20, 2019 at 12:02 AM Neale Ranns (nranns) 
wrote:

>
>
> Hi Rupseh,
>
>
>
> Interfaces that are not ip6 enabled show these features enabled:
>
>
>
> ip6-multicast:
>
>   ip6-not-enabled
>
>
>
> ip6-unicast:
>
>   ip6-not-enabled
>
>
>
> it’s the ip6-not-enabled node/feature that is enabled on the interface as
> an input feature that drops the packets.
>
>
>
> /neale
>
>
>
> *De : * au nom de Rupesh Raghuvaran <
> rupesh.raghuva...@gmail.com>
> *Date : *mardi 19 février 2019 à 18:06
> *À : *"vpp-dev@lists.fd.io" 
> *Objet : *[vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
>
>
>
> Missed to do a reply all, adding vpp-dev back
>
>
>
> Thanks
>
> Rupesh
>
> -- Forwarded message -
> From: *Rupesh Raghuvaran* 
> Date: Tue, Feb 19, 2019 at 10:02 PM
> Subject: Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicilty
> To: Neale Ranns (nranns) 
>
>
>
> Hi Neale,
>
>
>
> I could not spot the specific code in ip6-input which drops the packet if
> the rx interface is not ipv6 enabled. Could you please point that to me.
>
>
>
>
>
> Please find the requested information below
>
>
>
> DBGvpp# show interface loop1
>
>   Name   Idx    State  MTU (L3/IP4/IP6/MPLS)  Counter      Count
> loop1     5      up    9000/0/0/0             rx packets    2617
>                                               rx bytes    130415
>                                               tx packets    2068
>                                               tx bytes    111686
>                                               drops         3245
>                                               punt           170
>                                               ip4            240
>
> DBGvpp# show interface address
>
> GigabitEthernet0/14/0 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> GigabitEthernet0/14/1 (dn):
>
> GigabitEthernet0/14/2 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0
>
> local0 (dn):
>
> loop1 (up):
>
>   L2 bridge bd-id 1 idx 1 shg 0 bvi
>
>   L3 192.168.10.128/24
>
> tuntap-0 (up):
>
> DBGvpp#
>
> DBGvpp# sh ip6 interface
>
> DBGvpp# sh ip6 interface loop1
>
> show ip6 interface: IPv6 not enabled on interface
>
> DBGvpp# show int feat loop1
>
> Feature paths configured on loop1...
>
>
>
> nsh-output:
>
>   none configured
>
>
>
> mpls-output:
>
>   none configured
>
>
>
> mpls-input:
>
>   mpls-not-enabled
>
>
>
> ip6-drop:
>
>   none configured
>
>
>
> ip6-punt:
>
>   none configured
>
>
>
> ip6-local:
>
>   none configured
>
>
>
> ip6-output:
>
>   none configured
>
>
>
> ip6-multicast:
>
>   none configured
>
> ip6-unicast:
>
>   none configured
>
>
>
> ip4-drop:
>
>   none configured
>
>
>
> ip4-punt:
>
>   none configured
>
>
>
> ip4-local:
>
>   none configured
>
>
>
> ip4-output:
>
>   none configured
>
>
>
> ip4-multicast:
>
>   none configured
>
>
>
> ip4-unicast:
>
>
>
> l2-output-nonip:
>
>   none configured
>
>
>
> l2-input-nonip:
>
>   none configured
>
>
>
> l2-output-ip6:
>
>   none configured
>
>
>
> l2-input-ip6:
>
>   none configured
>
>
>
> l2-output-ip4:
>
>   none configured
>
>
>
> l2-input-ip4:
>
>   none configured
>
>
>
> ethernet-output:
>
>   none configured
>
>
>
> interface-out

[vpp-dev] VPP core on IPv6 mDNS packets on interface with ipv6 not enabled explicitly.

2019-02-19 Thread Rupesh Raghuvaran
Hi,

A VPP core was observed with the following trace; it seems to occur when
an mDNS IPv6 packet is handled in mfib_forward_lookup. During the lookup,
the access to ip6_main.mfib_index_by_sw_if_index[sw_if_index] triggers an
assertion failure because mfib_index_by_sw_if_index is not yet
initialized. Note that sw_if_index belongs to a loopback interface that
was created and added to the bridge domain, and that IPv6 is not
explicitly enabled on this loopback interface. The loopback interface VRF
is set to the default (0) using the set_table API with is_ipv6 set to 0.
Earlier, without the interface set_table setting, enabling the DHCP
client on the interface used to trigger the same assertion on
ip4_main.fib_index_by_sw_if_index in the ip4_lookup_inline function.



My understanding so far is that IPv6 is enabled unconditionally on the
"local0" interface via ip6_sw_interface_enable_disable from
vnet_main_init, which enables ip6-input feature processing. But that path
does not invoke ip6_add_del_address for local0, which is what would set
up the FIB and mFIB and call vec_validate on mfib_index_by_sw_if_index to
initialize the vector.


I am not able to find any code in ip6-input that would check
ip6_main.ip_enabled_by_sw_if_index[sw_if_index] before passing the frame
on for lookup. Is there currently any mechanism to process buffers only
if the interface has IPv6 enabled, and otherwise drop them with an "ipv6
not enabled on the interface" error? Looking for valuable inputs
regarding this.
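To make the failure mode concrete, here is a tiny Python model (not VPP code) of indexing a per-interface vector with and without a vec_validate-style grow; the assertion in the core corresponds to the unguarded access:

```python
def mfib_index_for(index_by_sw_if_index, sw_if_index):
    """Model of vec_elt_at_index: asserts the slot exists."""
    assert sw_if_index < len(index_by_sw_if_index), "assertion fails, as in the core"
    return index_by_sw_if_index[sw_if_index]

def vec_validate(index_by_sw_if_index, sw_if_index, fill=0):
    """Model of vec_validate: grow the vector so the slot is addressable."""
    grow = sw_if_index + 1 - len(index_by_sw_if_index)
    if grow > 0:
        index_by_sw_if_index.extend([fill] * grow)

# A loopback with sw_if_index 5 but an uninitialized vector reproduces
# the failure; validating first avoids it.
v = []
vec_validate(v, 5)
print(mfib_index_for(v, 5))  # -> 0
```

This is only a sketch of the bookkeeping; in VPP the grow happens as a side effect of setting up the FIB/mFIB tables for the interface.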


Should the local0 interface have ip4/ip6 enabled implicitly, or should
this be driven by some configuration?


The device has two ethernet interfaces bound to VPP by specifying their
PCI IDs in startup.conf; the ethernet interfaces are added to
bridge-domain 1 and an additional loopback interface is added for L3.
Please see the show interface and show bridge-domain output below.
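A sketch of how this setup is typically expressed (the PCI addresses are placeholders, and the bridge/BVI commands are inferred from the "show" output in this mail; treat it as an assumption, not the exact configuration used):

```shell
# /etc/vpp/startup.conf fragment: bind the two NICs by PCI address (placeholders)
#   dpdk {
#     dev 0000:00:14.0
#     dev 0000:00:14.1
#   }
# Then the L2/L3 configuration:
vppctl create loopback interface
vppctl set interface l2 bridge GigabitEthernet0/14/0 1
vppctl set interface l2 bridge GigabitEthernet0/14/2 1
vppctl set interface l2 bridge loop1 1 bvi
vppctl set interface ip address loop1 192.168.10.128/24
vppctl set interface state loop1 up
```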


Please see the stack trace and other details below. Thanks in advance.


Thanks

Rupesh


(gdb) core-file vpp_main_core_6_20691
[New LWP 20691]
[New LWP 20693]
Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x003b00833bff in ?? ()
[Current thread is 1 (LWP 20691)]
(gdb) set solib-search-path lib:usr/lib
warning: Unable to find libthread_db matching inferior's thread library,
thread debugging will not be available.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at
/usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
#1  0x003b00834fe7 in __GI_abort () at
/usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
#2  0x004077cb in os_exit (code=1) at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:359
#3  0x003b032acbd9 in unix_signal_handler (signum=6, si=0x7f57ef5ff470,
uc=0x7f57ef5ff340)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/unix/main.c:156
#4  
#5  __GI_raise (sig=sig@entry=6) at
/usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
#6  0x003b00834fe7 in __GI_abort () at
/usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
#7  0x00407786 in os_panic () at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:335
#8  0x003b01c3a33d in debugger () at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:84
#9  0x003b01c3a70c in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x3b046d8bb0 "%s:%d (%s) assertion `%s' fails")
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:143
#10 0x003b0442315e in mfib_forward_lookup (vm=0x3b034dc300
, node=0x7f57ef1fef40, frame=0x7f57ef21a780, is_v4=0)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:165
#11 0x003b04423405 in ip6_mfib_forward_lookup (vm=0x3b034dc300
, node=0x7f57ef1fef40, frame=0x7f57ef21a780)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:216
#12 0x003b03257314 in dispatch_node (vm=0x3b034dc300
, node=0x7f57ef1fef40, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f57ef21a780,
last_time_stamp=826527506884080)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1201
#13 0x003b03257aac in dispatch_pending_node (vm=0x3b034dc300
, pending_frame_index=7, last_time_stamp=826527506884080)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1368
#14 0x003b0325944c in vlib_main_or_worker_loop (vm=0x3b034dc300
, is_main=1)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/ma

[vpp-dev] VPP coredump on handling IPv6 mDNS packets on interface with ipv6 not enabled explicitly

2019-02-19 Thread Rupesh Raghuvaran
I missed doing a reply-all; adding vpp-dev back.

Thanks
Rupesh

-- Forwarded message -
From: Rupesh Raghuvaran 
Date: Tue, Feb 19, 2019 at 10:02 PM
Subject: Re: [vpp-dev] VPP coredump on handling IPv6 mDNS packets on
interface with ipv6 not enabled explicitly
To: Neale Ranns (nranns) 


Hi Neale,

I could not spot the specific code in ip6-input that drops the packet if
the rx interface is not IPv6 enabled. Could you please point me to it?


Please find the requested information below

DBGvpp# show interface loop1
  Name   Idx    State  MTU (L3/IP4/IP6/MPLS)  Counter      Count
loop1     5      up    9000/0/0/0             rx packets    2617
                                              rx bytes    130415
                                              tx packets    2068
                                              tx bytes    111686
                                              drops         3245
                                              punt           170
                                              ip4            240
DBGvpp# show interface address
GigabitEthernet0/14/0 (up):
  L2 bridge bd-id 1 idx 1 shg 0
GigabitEthernet0/14/1 (dn):
GigabitEthernet0/14/2 (up):
  L2 bridge bd-id 1 idx 1 shg 0
local0 (dn):
loop1 (up):
  L2 bridge bd-id 1 idx 1 shg 0 bvi
  L3 192.168.10.128/24
tuntap-0 (up):
DBGvpp#
DBGvpp# sh ip6 interface
DBGvpp# sh ip6 interface loop1
show ip6 interface: IPv6 not enabled on interface
DBGvpp# show int feat loop1
Feature paths configured on loop1...

nsh-output:
  none configured

mpls-output:
  none configured

mpls-input:
  mpls-not-enabled

ip6-drop:
  none configured

ip6-punt:
  none configured

ip6-local:
  none configured

ip6-output:
  none configured

ip6-multicast:
  none configured
ip6-unicast:
  none configured

ip4-drop:
  none configured

ip4-punt:
  none configured

ip4-local:
  none configured

ip4-output:
  none configured

ip4-multicast:
  none configured

ip4-unicast:

l2-output-nonip:
  none configured

l2-input-nonip:
  none configured

l2-output-ip6:
  none configured

l2-input-ip6:
  none configured

l2-output-ip4:
  none configured

l2-input-ip4:
  none configured

ethernet-output:
  none configured

interface-output:
  none configured

device-input:
  none configured

l2-input:
  FWD (l2-fwd)
 UU_FLOOD (l2-flood)
FLOOD (l2-flood)

l2-output:
   OUTPUT (interface-output)
DBGvpp#



Thanks
Rupesh

On Tue, Feb 19, 2019 at 9:43 PM Neale Ranns (nranns) 
wrote:

> Hi Rupesh,
>
>
>
> An IPv6 packet arriving on an interface that is not IPv6 enabled should be
> dropped in ip6-input.
>
>
>
> Can you please show me:
>
>   sh int feat loop0
>
>   sh ip6 interface loop0
>
>
>
> local0 is a special case. Think of it as a means for VPP to consume the ID
> 0 so that we can be sure that no other interface can use that ID. No
> packets should tx nor rx on local0.
>
>
>
> /neale
>
>
>
>
>
>
>
> *From: * on behalf of Rupesh Raghuvaran <
> rupesh.raghuva...@gmail.com>
> *Date: *Tuesday, February 19, 2019 at 16:06
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] VPP coredump on handling IPv6 mDNS packets on
> interface with ipv6 not enabled explicitly
>
>
>
> Hi,
>
>
>
> A VPP core was observed with the following trace; it seems to occur when
> an mDNS IPv6 packet is handled in mfib_forward_lookup. During the lookup,
> the access to ip6_main.mfib_index_by_sw_if_index[sw_if_index] triggers an
> assertion failure because mfib_index_by_sw_if_index is not yet
> initialized. Note that sw_if_index belongs to a loopback interface that
> was created and added to the bridge domain, and that IPv6 is not
> explicitly enabled on this loopback interface. The loopback interface VRF
> is set to the default (0) using the set_table API with is_ipv6 set to 0.
> Earlier, without the interface set_table setting, enabling the DHCP
> client on the interface used to trigger the same assertion on
> ip4_main.fib_index_by_sw_if_index in the ip4_lookup_inline function.
>
>
>
> My understanding so far is that IPv6 is enabled unconditionally on the
> "local0" interface via ip6_sw_interface_enable_disable from
> vnet_main_init, which enables ip6-input feature processing. But that path
> does not invoke ip6_add_del_address for local0, which is what would set
> up the FIB and mFIB and call vec_validate on mfib_index_by_sw_if_index to
> initialize the vector.
>
>
>
> I am not able to find any code in ip6-input that would look up the
> ip6_

[vpp-dev] VPP coredump on handling IPv6 mDNS packets on interface with ipv6 not enabled explicilty

2019-02-19 Thread Rupesh Raghuvaran
Hi,

A VPP core was observed with the following trace; it seems to occur when
an mDNS IPv6 packet is handled in mfib_forward_lookup. During the lookup,
the access to ip6_main.mfib_index_by_sw_if_index[sw_if_index] triggers an
assertion failure because mfib_index_by_sw_if_index is not yet
initialized. Note that sw_if_index belongs to a loopback interface that
was created and added to the bridge domain, and that IPv6 is not
explicitly enabled on this loopback interface. The loopback interface VRF
is set to the default (0) using the set_table API with is_ipv6 set to 0.
Earlier, without the interface set_table setting, enabling the DHCP
client on the interface used to trigger the same assertion on
ip4_main.fib_index_by_sw_if_index in the ip4_lookup_inline function.



My understanding so far is that IPv6 is enabled unconditionally on the
"local0" interface via ip6_sw_interface_enable_disable from
vnet_main_init, which enables ip6-input feature processing. But that path
does not invoke ip6_add_del_address for local0, which is what would set
up the FIB and mFIB and call vec_validate on mfib_index_by_sw_if_index to
initialize the vector.


I am not able to find any code in ip6-input that would check
ip6_main.ip_enabled_by_sw_if_index[sw_if_index] before passing the frame
on for lookup.


Is there currently any mechanism to process buffers only if the
interface has IPv6 enabled, and otherwise drop them with an "ipv6 not
enabled on the interface" error?
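As an illustration of the mechanism being asked about, here is a toy Python model (not VPP code) of an input feature arc: per the reply elsewhere in this thread, on an interface that is not IPv6-enabled the ip6-not-enabled node sits on the ip6-unicast/ip6-multicast arcs and drops frames before any lookup.

```python
def run_arc(features, packets):
    """Toy model of an input feature arc: the first enabled feature
    on the arc receives the frame."""
    for name, handler in features:
        # On a non-IPv6-enabled interface, "ip6-not-enabled" is the
        # feature here and its handler drops everything.
        return name, handler(packets)
    # Empty arc: the frame goes straight to the lookup node.
    return "ip6-lookup", packets

drop = lambda pkts: []  # ip6-not-enabled behavior
print(run_arc([("ip6-not-enabled", drop)], ["mdns-pkt"]))  # -> ('ip6-not-enabled', [])
print(run_arc([], ["mdns-pkt"]))                           # -> ('ip6-lookup', ['mdns-pkt'])
```

The open question in this mail, in terms of the model, is why the frame reaches the lookup node at all instead of being consumed by such a drop feature.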


Should the local0 interface have ip4/ip6 enabled implicitly, or should
this be driven by some configuration?


The device has two ethernet interfaces bound to VPP by specifying their
PCI IDs in startup.conf; the ethernet interfaces are added to
bridge-domain 1 and an additional loopback interface is added for L3.
Please see the show interface and show bridge-domain output below.


Please see the stack trace and other details below. Looking for valuable
inputs regarding this. Thanks in advance.


Thanks

Rupesh


(gdb) core-file vpp_main_core_6_20691
[New LWP 20691]
[New LWP 20693]
Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x003b00833bff in ?? ()
[Current thread is 1 (LWP 20691)]
(gdb) set solib-search-path lib:usr/lib
warning: Unable to find libthread_db matching inferior's thread library,
thread debugging will not be available.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at
/usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
#1  0x003b00834fe7 in __GI_abort () at
/usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
#2  0x004077cb in os_exit (code=1) at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:359
#3  0x003b032acbd9 in unix_signal_handler (signum=6, si=0x7f57ef5ff470,
uc=0x7f57ef5ff340)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/unix/main.c:156
#4  
#5  __GI_raise (sig=sig@entry=6) at
/usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
#6  0x003b00834fe7 in __GI_abort () at
/usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
#7  0x00407786 in os_panic () at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vpp/vnet/main.c:335
#8  0x003b01c3a33d in debugger () at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:84
#9  0x003b01c3a70c in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x3b046d8bb0 "%s:%d (%s) assertion `%s' fails")
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vppinfra/error.c:143
#10 0x003b0442315e in mfib_forward_lookup (vm=0x3b034dc300
, node=0x7f57ef1fef40, frame=0x7f57ef21a780, is_v4=0)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:165
#11 0x003b04423405 in ip6_mfib_forward_lookup (vm=0x3b034dc300
, node=0x7f57ef1fef40, frame=0x7f57ef21a780)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vnet/mfib/mfib_forward.c:216
#12 0x003b03257314 in dispatch_node (vm=0x3b034dc300
, node=0x7f57ef1fef40, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f57ef21a780,
last_time_stamp=826527506884080)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1201
#13 0x003b03257aac in dispatch_pending_node (vm=0x3b034dc300
, pending_frame_index=7, last_time_stamp=826527506884080)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vlib/main.c:1368
#14 0x003b0325944c in vlib_main_or_worker_loop (vm=0x3b034dc300
, is_main=1)
at
/home/rraghuvaran/cnwcl-jan19/poky/build/tmp/work/core2-64-poky-linux/vpp/19.01-r0/git/src/vli