Re: [vpp-dev] VRRP issue

2020-08-14 Thread Matthew Smith via lists.fd.io
Hi Naveen,

See replies inline below...

On Fri, Aug 14, 2020 at 12:53 PM Naveen Joy (najoy)  wrote:

> Thanks, Matthew. I am seeing the same behavior with the default
> advertisement interval of 1s.
>
> Tcpdump on a linux tap interface plugged into the same BD as the backup VR
> shows VRRP advertisements arriving at the configured rate of 1s (100cs),
>
> So, there is no packet loss of advertisements or delays in sending
> advertisements by the master VR.
>
>
>
> 10:37:19.991540 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:20.991619 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:21.991783 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:22.991792 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:23.991926 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:24.991976 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:25.992057 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:26.992131 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:27.992257 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:28.992311 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:29.992402 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:30.992513 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 10:37:31.992586 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid
> 10, prio 110, intvl 100cs, length 12
>
> 
>
>
>

OK, it's good to know that the packets are all arriving on the backup.

The diagram attached to the first message in this thread said that the
master has address 10.4.4.1 and the backup has 10.4.4.3. Is that diagram
still accurate? The packets in the capture have a source address of
10.4.4.3, which is the address that is supposed to be configured on the
backup according to the diagram. If the diagram is still accurate, it seems
like those packets should be dropped by the backup as 'spoofed' packets
since their source address is configured locally on the backup.

If the diagram is not accurate and the master VR is truly supposed to be
advertising with a source address of 10.4.4.3, can you please use vppctl to
generate a packet capture on the backup VR? You could run something like
'vppctl trace add dpdk-input 50; sleep 10; vppctl show trace'. Depending on
how noisy your network is, that ought to capture a few inbound
advertisements.
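
To spell that out, the sequence would be something like the following (the packet
count and the sleep duration are arbitrary; clearing the trace buffer first is
optional):

  vppctl clear trace                # discard any previously captured trace data
  vppctl trace add dpdk-input 50    # capture the next 50 packets received by dpdk-input
  sleep 10                          # wait long enough for several advertisement intervals
  vppctl show trace                 # dump the captured packets, including any VRRP advertisements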




> However, it appears that there is a delay in VRRP packet processing at the
> backup VR resulting in frequent state transitions.
>
>
>
> On the backup VR:
>
> vpp# show err
>
>    Count          Node                Reason
>
> 120347   vrrp4-input  VRRP packets processed
>
>
>
> vpp# show err (after 1 sec)
>
>    Count          Node                Reason
>
> 120347   vrrp4-input  VRRP packets processed
>
>
>

Is that the only output from 'show err' or did you clean up the output to
only include the counters which looked like they are related to VRRP? If
there are other counters displayed by 'show err' that you omitted from the
output, it would be helpful to see the full output.
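
For example, something like this would show whether any other counters (drops,
punts, etc.) are incrementing alongside the VRRP counter over a short interval:

  vppctl clear errors     # zero all error counters (optional)
  vppctl show errors      # full output, not just the vrrp4-input line
  sleep 5
  vppctl show errors      # compare with the previous output to see what changed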




> Also, log on the backup VR shows that VRRP advertisements from master are
> received every 4s
>
>
>
> Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_input_process:223: Received
> advertisement for master VR [0] sw_if_index 14 VR ID 10 IPv4
>
> Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0]
> sw_if_index 14 VR ID 10 IPv4 transitioning to Backup
>
> Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238:
> Deleting VR addresses on sw_if_index 14
>
> Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123:
> Deleting virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
>
> Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0]
> sw_if_index 14 VR ID 10 IPv4 transitioning to Master
>
> Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Adding
> VR addresses on sw_if_index 14
>
> Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Adding
> virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
>
> Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_input_process:223: Received
> advertisement for master VR [0] sw_if_index 14 VR ID 10 IPv4
>
> Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0]
> sw_if_index 14 VR ID 10 IPv4 transitioning to Backup
>
> Aug 

[vpp-dev] FD.io Jenkins Incident

2020-08-14 Thread Vanessa Valderrama
Description: The Jenkins executors were not spinning up as expected.

Cause: A vendor SSL endpoint was changed, and because the DNS TTL was not
updated, Jenkins was still using the old cached DNS entry.

Solution: We cleared the LF cache and restarted Jenkins, which restored
service.

Follow-up: We are working with the vendor to improve advance communication
about these types of changes so that we can prevent unplanned downtime in the
future.





Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread Mrityunjay Kumar
Hi Techi,
let me try to help you.
Inline please :)

*Regards*,
Mrityunjay Kumar.
Mobile: +91 - 9731528504



On Sat, Aug 15, 2020 at 12:16 AM  wrote:

> [Edited Message Follows]
> Hello all, Thank you for your inputs.
>
> Let me elaborate my use case. I currently have DPDK router pipeline where
> DPDK-APP-A controls intel NICs through DPDK drivers. DPDK-APP-A is also
> responsible for routing between multiple physical interfaces(NICs).
> DPDK-APP-B is a packet inspection application which does not have(need)
> control over NICs and hence receives packets from DPDK rte_ring(s).
> Current working > After rx on NICs, DPDK-APP-A sends packets(after
> processing for defrag, conntrack) to DPDK-APP-B through rte_ring and waits
> for packets on rte_ring (DPDK-APP-B should send them back after
> inspection). Once packets are received through rte_ring then DPDK-APP-A
> sends them out through one of the chosen NICs (according to destination
> address). This operation is inline i.e. if APP-B wants to drop some
> packets, it can, so that packets won't  traverse further through APP-A.
>
> APP-B does not listen/wait on any particular address since it needs to
> inspect all the traffic.
> Now I want to replace DPDK-APP-A with VPP(DPDK) and need a mechanism with
> inline support to send packets to APP-B for inspection. VPP standalone is
> working perfectly with my NAT, static routing etc requirements.
>

[MJ]: From your DPDK app (running as part of VPP), send the packet at its IP
offset (i.e. starting at the IP header) to APP-B. Do whatever processing you
need in APP-B, then when sending back, transmit it on the memif as an IP packet
towards VPP; no MAC address is required. VPP will take care of ARP and the L2
header.



>
> I have explored VPP-plugins a bit but APP-B won't fit as plugin (.so) for
> some internal reasons. Hence I started looking into memif (shared memory
> packet interfaces)
>

[MJ]: You can also look at VPP process nodes. Hope this input is helpful if you
are planning to add APP-B as part of VPP. :)



>
> Hope this clears my requirement. 
>


Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread techiek7
[Edited Message Follows]

Hello all, Thank you for your inputs.

Let me elaborate on my use case. I currently have a DPDK router pipeline where 
DPDK-APP-A controls Intel NICs through DPDK drivers. DPDK-APP-A is also 
responsible for routing between multiple physical interfaces (NICs). DPDK-APP-B 
is a packet inspection application which does not have (or need) control over 
the NICs and hence receives packets from DPDK rte_ring(s).
Current workflow: after rx on the NICs, DPDK-APP-A sends packets (after 
processing for defrag and conntrack) to DPDK-APP-B through an rte_ring and waits 
for packets on an rte_ring (DPDK-APP-B should send them back after inspection). 
Once packets are received through the rte_ring, DPDK-APP-A sends them out through 
one of the chosen NICs (according to the destination address). This operation is 
inline, i.e. if APP-B wants to drop some packets, it can, so that those packets 
won't traverse further through APP-A.

APP-B does not listen/wait on any particular address since it needs to inspect 
all the traffic.
Now I want to replace DPDK-APP-A with VPP (DPDK) and need a mechanism with 
inline support to send packets to APP-B for inspection. VPP standalone is 
working perfectly with my NAT, static routing, etc. requirements.

I have explored VPP plugins a bit, but APP-B won't fit as a plugin (.so) for 
some internal reasons. Hence I started looking into memif (shared memory packet 
interfaces).

Hope this clears my requirement.


Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread techiek7
Hello all, Thank you for your inputs.

Let me elaborate on my use case. I currently have a DPDK router pipeline where 
DPDK-APP-A controls Intel NICs through DPDK drivers. DPDK-APP-A is also 
responsible for routing between multiple physical interfaces (NICs). DPDK-APP-B 
is a packet inspection application which does not have (or need) control over 
the NICs and hence receives packets from DPDK rte_ring(s).
Current workflow: after rx on the NICs, DPDK-APP-A sends packets (after 
processing for defrag and conntrack) to DPDK-APP-B through an rte_ring and waits 
for packets on an rte_ring (DPDK-APP-B should send them back after inspection). 
Once packets are received through the rte_ring, DPDK-APP-A sends them out through 
one of the chosen NICs (according to the destination address). This operation is 
inline, i.e. if APP-B wants to drop some packets, it can.

APP-B does not listen/wait on any particular address since it needs to inspect 
all the traffic.
Now I want to replace DPDK-APP-A with VPP (DPDK) and need a mechanism to send 
packets to APP-B for inspection.

I have explored VPP plugins a bit, but APP-B won't fit as a plugin (.so) for 
some internal reasons. Hence I started looking into memif (shared memory packet 
interfaces).

Hope this clears my requirement.


Re: [vpp-dev] VRRP issue

2020-08-14 Thread Naveen Joy via lists.fd.io
Thanks, Matthew. I am seeing the same behavior with the default advertisement 
interval of 1s.
Tcpdump on a Linux tap interface plugged into the same BD as the backup VR 
shows VRRP advertisements arriving at the configured rate of 1s (100cs).
So there is no packet loss of advertisements, and no delay in sending 
advertisements by the master VR.

10:37:19.991540 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:20.991619 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:21.991783 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:22.991792 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:23.991926 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:24.991976 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:25.992057 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:26.992131 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:27.992257 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:28.992311 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:29.992402 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:30.992513 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12
10:37:31.992586 IP 10.4.4.3 > vrrp.mcast.net: VRRPv3, Advertisement, vrid 10, 
prio 110, intvl 100cs, length 12


However, it appears that there is a delay in VRRP packet processing at the 
backup VR resulting in frequent state transitions.

On the backup VR:
vpp# show err
   Count          Node                Reason
120347   vrrp4-input  VRRP packets processed

vpp# show err (after 1 sec)
   Count          Node                Reason
120347   vrrp4-input  VRRP packets processed

Also, the log on the backup VR shows that VRRP advertisements from the master 
are received every 4s:

Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_input_process:223: Received 
advertisement for master VR [0] sw_if_index 14 VR ID 10 IPv4
Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Backup
Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Deleting VR 
addresses on sw_if_index 14
Aug 14 10:43:57 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Deleting 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Master
Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Adding VR 
addresses on sw_if_index 14
Aug 14 10:44:00 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Adding 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_input_process:223: Received 
advertisement for master VR [0] sw_if_index 14 VR ID 10 IPv4
Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Backup
Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Deleting VR 
addresses on sw_if_index 14
Aug 14 10:44:01 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Deleting 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:04 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Master
Aug 14 10:44:04 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Adding VR 
addresses on sw_if_index 14
Aug 14 10:44:04 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Adding 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:05 ml-ucs-01 vnet[5504]: vrrp_input_process:223: Received 
advertisement for master VR [0] sw_if_index 14 VR ID 10 IPv4
Aug 14 10:44:05 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Backup
Aug 14 10:44:05 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Deleting VR 
addresses on sw_if_index 14
Aug 14 10:44:05 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Deleting 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:08 ml-ucs-01 vnet[5504]: vrrp_vr_transition:283: VR [0] 
sw_if_index 14 VR ID 10 IPv4 transitioning to Master
Aug 14 10:44:08 ml-ucs-01 vnet[5504]: vrrp_vr_transition_addrs:238: Adding VR 
addresses on sw_if_index 14
Aug 14 10:44:08 ml-ucs-01 vnet[5504]: vrrp_vr_transition_vmac:123: Adding 
virtual MAC address 00:00:5e:00:01:0a on hardware interface 13
Aug 14 10:44:09 ml-ucs-01 

Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread Mrityunjay Kumar
Hi Techiek
*If you are clear about routing & networking, then below is advice that can
work for you.*

*Create the memif interface in IP mode. When you send a network packet from
your non-VPP application to VPP, send it as an IP packet on the memif
interface; VPP will take care of the rest.*
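
On the VPP side that is just something like the following (the interface id and
address are only examples, adjust them to your setup; with the default socket-id
0 and id 0 the interface shows up as memif0/0):

  create interface memif id 0 master mode ip
  set interface state memif0/0 up
  set interface ip address memif0/0 10.10.10.1/24

Your application then opens the slave side of the same memif socket in IP mode
and exchanges raw IP packets with VPP; ARP and the L2 header stay inside VPP.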

*Hope this will work for you. *
*:)*
*//MJ *




*Regards*,
Mrityunjay Kumar.
Mobile: +91 - 9731528504



On Mon, Aug 10, 2020 at 11:03 AM  wrote:

> Hello Team,
>
> How do I send packets out on a physical interface after I received them
> through libmemif in non-vpp application from VPP(DPDK)? Do I need to send
> them back to vpp so that VPP can send them out on a physical interface.
> 
>


[vpp-dev] Coverity run FAILED as of 2020-08-14 15:33:18 UTC

2020-08-14 Thread Noreply Jenkins
Coverity run failed today.

ERROR: File 'output.txt' does not exist


Re: [vpp-dev] VRRP issue

2020-08-14 Thread Matthew Smith via lists.fd.io
Hi Naveen,

Generally a transition from backup to master occurs if the master down
timer expires and no advertisement has been received. So it seems like some
advertisement packets from the higher priority VR are not being received or
are not being processed before the timer expires. Since the lower priority
VR keeps switching from master to backup after receiving an advertisement
from the higher priority VR, at least some of the advertisements from the
higher priority VR are clearly being received.

Packet loss of advertisements is the first thing that I suggest
you investigate. A packet trace or capture on the lower priority VR would
probably be helpful in determining whether the advertisements are arriving.
It also might be helpful to see what 'vppctl show errors' says.

Another possibility is that there are delays in sending advertisements by
the higher priority VR or in receiving/processing them on the lower
priority VR. The advertisement interval appears to be set to 0.3s and the
master down timer is 1.08s. It seems unlikely, with packets being sent
every 0.3s, that either sending or receiving could be delayed by enough that
the receiving side would not process one within 1.08s. Just to rule out
that possibility, you could try increasing the advertisement interval to a
higher value and see if the situation improves. Do you still see the same
behavior if you configure the default advertisement interval of 1s (100cs)
on both VRs?
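
For reference, with the backup's priority of 100 the RFC 5798 timers work out
roughly as:

  Skew_Time = ((256 - Priority) * Master_Adver_Interval) / 256
            = ((256 - 100) * 0.3s) / 256 ~= 0.18s
  Master_Down_Interval = (3 * Master_Adver_Interval) + Skew_Time
                       = 0.9s + 0.18s ~= 1.08s

so roughly three consecutive advertisements would have to be lost or delayed
before the backup declares the master down.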

Thanks,
-Matt


On Thu, Aug 13, 2020 at 5:34 PM Naveen Joy (najoy)  wrote:

> Hi Matthew/All,
>
>
>
> I am facing an issue with VRRP in VPP and would appreciate your help.
>
>
>
> (Attached - architecture diagram)
>
>
>
>1. I have 2 nodes with VPP & in each node, VRRP is configured to back
>up a router BVI interface in a bridge domain.
>2. The VRRP VRs are speaking VRRP (multicast) over an uplink VLAN
>interface connected to an external switch.
>3. The active router has a VR priority of 110 and is set to preempt.
>
> The backup router has a VR priority of 100 and is not in preempt.
>
>
>
>1. The issue is that VRRP in the backup router is unstable and keeps
>transitioning between the master and backup states every second.
>
> However, the VRRP in the master node is stable.
>
>
>
> I am running the  latest VPP release installed from master  this week.
>
>
>
> vpp# show version verbose
>
> Version:  v20.09-rc0~283-g40c07ce7a~b1542
>
> Compiled by:  root
>
> Compile host: 1f7cd9b19229
>
> Compile date: 2020-08-11T20:40:47
>
> Compile location: /w/workspace/vpp-merge-master-ubuntu1804
>
> Compiler: Clang/LLVM 9.0.0 (tags/RELEASE_900/final)
>
> Current PID:  5504
>
>
>
> *On the backup node –*
>
>
>
> Aug 13 12:59:48 ml-ucs-03 vnet[4023]: vrrp_vr_transition_addrs:238:
> Deleting VR addresses on sw_if_index 11
>
> Aug 13 12:59:48 ml-ucs-03 vnet[4023]: vrrp_vr_transition_vmac:123:
> Deleting virtual MAC address 00:00:5e:00:01:0a on hardware interface 10
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition:283: VR [0]
> sw_if_index 11 VR ID 10 IPv4 transitioning to Master
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition_addrs:238: Adding
> VR addresses on sw_if_index 11
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition_vmac:123: Adding
> virtual MAC address 00:00:5e:00:01:0a on hardware interface 10
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_input_process:223: Received
> advertisement for master VR [0] sw_if_index 11 VR ID 10 IPv4
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition:283: VR [0]
> sw_if_index 11 VR ID 10 IPv4 transitioning to Backup
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition_addrs:238:
> Deleting VR addresses on sw_if_index 11
>
> Aug 13 12:59:50 ml-ucs-03 vnet[4023]: vrrp_vr_transition_vmac:123:
> Deleting virtual MAC address 00:00:5e:00:01:0a on hardware interface 10
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition:283: VR [0]
> sw_if_index 11 VR ID 10 IPv4 transitioning to Master
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition_addrs:238: Adding
> VR addresses on sw_if_index 11
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition_vmac:123: Adding
> virtual MAC address 00:00:5e:00:01:0a on hardware interface 10
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_input_process:223: Received
> advertisement for master VR [0] sw_if_index 11 VR ID 10 IPv4
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition:283: VR [0]
> sw_if_index 11 VR ID 10 IPv4 transitioning to Backup
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition_addrs:238:
> Deleting VR addresses on sw_if_index 11
>
> Aug 13 12:59:51 ml-ucs-03 vnet[4023]: vrrp_vr_transition_vmac:123:
> Deleting virtual MAC address 00:00:5e:00:01:0a on hardware interface 10
>
> Aug 13 12:59:52 ml-ucs-03 vnet[4023]: vrrp_vr_transition:283: VR [0]
> sw_if_index 11 VR ID 10 IPv4 transitioning to Master
>
> 

[vpp-dev] Jenkins.fd.io is currently wedged -- LF ticket was opened last night

2020-08-14 Thread Dave Wallace

Folks,

While merging the stream of stable/1908 cherry-picks for the 19.08.3 
release, jenkins.fd.io locked up with ~300 merge jobs in the build 
queue. A ticket [0] was opened last night when the stoppage was detected 
and jenkins will likely need to be rebooted after some investigation.


Thank you for your patience.
-daw-


Re: [vpp-dev] Jenkins.fd.io is currently wedged -- LF ticket was opened last night

2020-08-14 Thread Dave Wallace

[0] https://jira.linuxfoundation.org/plugins/servlet/theme/portal/2/IT-20423

On 8/14/2020 7:56 AM, Dave Wallace wrote:

Folks,

While merging the stream of stable/1908 cherry-picks for the 19.08.3 
release, jenkins.fd.io locked up with ~300 merge jobs in the build 
queue. A ticket [0] was opened last night when the stoppage was 
detected and jenkins will likely need to be rebooted after some 
investigation.


Thank you for your patience.
-daw-




Re: [vpp-dev] Deterministic CGNAT vectors

2020-08-14 Thread Ole Troan
Hi Listas,

> First of all, thanks for your contribution to the community. We use the
> Deterministic NAT feature under heavy demand, without any big problems.
> 
> It's about the hardcoded limit of 1,000 sessions (preallocated vectors)
> per host. I would like to increase this value; is it safe to increase it to
> 4,000 sessions?
> 
> I'm aware of the memory requirements to use this setting.
> 
> Has anyone tried such a setting?

I haven't tried, but I don't see any reason why that shouldn't work well.

Best regards,
Ole


Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread Benoit Ganne (bganne) via lists.fd.io
> How libmemif is supposed to be used then ? Did you get a chance to look
> into use case above ?

As pointed out by Neale, you should not set 192.168.1.1 on memif/0/0 and use it 
as the next hop at the same time.
Memifs are just interfaces. You can set addresses, route traffic, etc., but you 
need to abide by how IP routing works. A memif is no different from any other 
kind of interface.
What are you trying to achieve? What is the app on the other end of the memif 
expecting?
If your app is answering as 192.168.1.1, then maybe you should just set 
192.168.1.2 on memif/0/0.
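
For example (the addresses, the prefix and the interface name are only
illustrative, assuming the memif shows up in VPP as memif0/0):

  set interface ip address memif0/0 192.168.1.2/24
  ip route add 10.0.0.0/8 via 192.168.1.1 memif0/0

i.e. VPP owns 192.168.1.2, your app answers as 192.168.1.1, and any routes that
should go to the app point at 192.168.1.1 as the next hop.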

ben


Re: [vpp-dev] #vpp-memif Send packets out on physical interface controlled by vpp(DPDK) once they are received through memif

2020-08-14 Thread techiek7
Hello Neale Ranns,

How is libmemif supposed to be used then? Did you get a chance to look into the 
use case above?