[vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

2020-01-13 Thread elantsev . s
Hello Everyone!

I've encountered an issue when deleting a route to 0.0.0.0/0 via a virtual 
interface: vpp crashed with SIGABRT. The issue can be reproduced with a gre 
interface on the current master 1c6486f7b8a00a1358d5c8f4ea1d874073bbcd6c:

```
DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0

DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
DBGvpp# sh ip fib table 10
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 
flags:none locks:[CLI:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:12 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:8 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:10 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:9 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:11 to:[0:0]]
[0] [@0]: dpo-drop ip4

/home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367 
(fib_attached_export_purge) assertion `NULL != fed' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x75ac9801 in __GI_abort () at abort.c:79
#2  0xbe0b in os_panic () at 
/home/elantsev/vpp/src/vpp/vnet/main.c:355
#3  0x75eacde9 in debugger () at 
/home/elantsev/vpp/src/vppinfra/error.c:84
#4  0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at 
/home/elantsev/vpp/src/vppinfra/error.c:143
#5  0x774cbbe0 in fib_attached_export_purge (fib_entry=0x7fffb4bd2dd0) 
at /home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
#6  0x774919de in fib_entry_post_flag_update_actions 
(fib_entry=0x7fffb4bd2dd0, old_flags=(FIB_ENTRY_FLAG_ATTACHED | 
FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:674
#7  0x77491a3c in fib_entry_post_install_actions 
(fib_entry=0x7fffb4bd2dd0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:709
#8  0x77491d78 in fib_entry_post_update_actions 
(fib_entry=0x7fffb4bd2dd0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:804
#9  0x774923f9 in fib_entry_source_removed (fib_entry=0x7fffb4bd2dd0, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:992
#10 0x774925e7 in fib_entry_path_remove (fib_entry_index=7, 
source=FIB_SOURCE_CLI, rpaths=0x7fffb83a3520) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x7747980b in fib_table_entry_path_remove2 (fib_index=1, 
prefix=0x7fffb8382b00, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a3520) at 
/home/elantsev/vpp/src/vnet/fib/fib_table.c:680
#12 0x76fb870c in vnet_ip_route_cmd (vm=0x766b6680 
, main_input=0x7fffb8382f00, cmd=0x7fffb50373b8) at 
/home/elantsev/vpp/src/vnet/ip/lookup.c:449
#13 0x763d402f in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=431)
at /home/elantsev/vpp/src/vlib/cli.c:568
#14 0x763d3ead in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=0)
at /home/elantsev/vpp/src/vlib/cli.c:528
#15 0x763d4434 in vlib_cli_input (vm=0x766b6680 , 
input=0x7fffb8382f00, function=0x7646dc89 , 
function_arg=0) at /home/elantsev/vpp/src/vlib/cli.c:667
#16 0x7647476d in unix_cli_process_input (cm=0x766b7020 
, cli_file_index=0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2572
#17 0x7647540e in unix_cli_process (vm=0x766b6680 
, rt=0x7fffb8342000, f=0x0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2688
#18 0x764161d4 in vlib_process_bootstrap (_a=140736272894320) at 
/home/elantsev/vpp/src/vlib/main.c:1475
#19 0x75eccfbc in clib_calljmp () at 
/home/elantsev/vpp/src/vppinfra/longjmp.S:123
#20 0x7fffb78d8940 in ?? ()
#21 0x764162dc in vlib_process_startup (vm=0x0, p=0x8, f=0x766b6680 
) at /home/elantsev/vpp/src/vlib/main.c:1497
Backtrace stopped: previous frame inner to this frame (corrupt stack?)


#4  0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at 
/home/elantsev/vpp/src/vppinfra/error.c:143
msg = 0x0
va = 
```
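(Editorial aside, not part of the original report: the failing frame is #5, fib_attached_export_purge, per the backtrace above, and a few standard gdb commands can be used to inspect it:)

```
(gdb) frame 5
(gdb) info locals
(gdb) print *fib_entry
```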

Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Thanks Benoit! I will try the above-mentioned steps.

I am not sure why it works fine with the 2 Rx / 2 Tx queue configuration.


 GigabitEthernet13/0/0  1 up   GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:
rx: queues 2 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts


Thanks,
Chetan Bhasin

On Mon, Jan 13, 2020 at 9:17 PM Benoit Ganne (bganne) 
wrote:

> Hmm,
>
>  - I suppose you run VPP as root and not in a container
>  - if you use CentOS/RHEL can you check disabling SELinux ('setenforce 0')
>  - can you share the output of Linux dmesg and VPP 'show pci'
>
> Best
> ben
>
> > -Original Message-
> > From: chetan bhasin 
> > Sent: lundi 13 janvier 2020 15:51
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev 
> > Subject: Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx
> > queue
> >
> > Hi Benoit,
> >
> > Thanks for your prompt response.
> >
> > We are migrating from vpp 18.01 to vpp.19.08 , that's why we want least
> > modification in our build system and we want to use DPDK as we were using
> > earlier
> >
> > .
> > DBGvpp# show log
> > 2020/01/13 14:44:42:014 notice dhcp/clientplugin initialized
> > 2020/01/13 14:44:42:051 warn   dpdk   EAL init args: -c 14 -n
> > 4 --in-memory --log-level debug --file-prefix vpp -w :1b:00.0 -w
> > :13:00.0 --master-lcore 4
> > 2020/01/13 14:44:42:603 notice dpdk   DPDK drivers found 2
> > ports...
> > 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 6
> lcore(s)
> > 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 1 NUMA
> > nodes
> >
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> > :13:00.0 on NUMA socket -1
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> > socket, default to 0
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> > 15ad:7b0 net_vmxnet3
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   using IOMMU type
> > 8 (No-IOMMU)
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> > port bar(3)
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> > :1b:00.0 on NUMA socket -1
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> > socket, default to 0
> > 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> > 15ad:7b0 net_vmxnet3
> > 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> > port bar(3)
> > 2020/01/13 14:45:02:475 errdpdk   Interface
> > GigabitEthernet13/0/0 error 1: Operation not permitted
> > 2020/01/13 14:45:02:475 notice dpdk
> > vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
> > 2020/01/13 14:45:02:475 notice dpdk   vmxnet3_dev_start():
> > Failed to configure v4 RSS
> >
> >
> >
> > Thanks,
> > Chetan Bhasin
> >
> > On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne)  >  > wrote:
> >
> >
> >   Hi Chetan,
> >
> >   Any reason for not using VPP built-in vmxnet3 driver instead of
> > DPDK? That should give you better performance and would be easier for us
> > to debug. See https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
> >
> >   Otherwise, can you share 'show logging' output?
> >
> >   Ben
> >
> >   > -Original Message-
> >   > From: vpp-dev@lists.fd.io    > d...@lists.fd.io  > On Behalf Of chetan
> bhasin
> >   > Sent: lundi 13 janvier 2020 15:20
> >   > To: vpp-dev mailto:vpp-dev@lists.fd.io> >
> >   > Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with
> > 1rx/tx queue
> >   >
> >   > Hello Everyone,
> >   >
> >   > I am facing an issue while bringing up vpp with less than 2 rx
> and
> > 2 tx
> >   > queue. I am using vpp19.08. I have configured pci's under the
> dpdk
> > section
> >   > like below -
> >   >
> >   > 1)
> >   > dpdk {
> >   > # dpdk-config
> >   >  dev default {
> >   >  num-rx-desc 1024
> >   

Re: [vpp-dev] #vapi -- Need multiple times " ip table del xxx" to delete a specific 'ip table' within vpp?

2020-01-13 Thread Neale Ranns via Lists.Fd.Io


From:  on behalf of "rya...@yunify.com" 
Date: Tuesday 14 January 2020 at 14:07
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] #vapi -- Need multiple times " ip table del xxx" to 
delete a specific 'ip table' within vpp?

Hi Neale,

Thanks for answer.
Another question:
If I remove the L3 interface directly, the count in "locks:[src:CLI:6, ]" won't 
decrease.
If I remove the L3 interface from the VRF, the count in "locks:[src:CLI:6, ]" will 
decrease. Does this imply the correct API sequence should be "remove from 
VRF" before "delete interface"?

yes, you must unbind an interface from a non-default table before deleting it.

/neale


If I remove the L3 interface directly, would it bring any side effects?

Thanks,
Ryan


Re: [vpp-dev] #vapi -- Need multiple times " ip table del xxx" to delete a specific 'ip table' within vpp?

2020-01-13 Thread ryanlu
Hi Neale,

Thanks for answer.
Another question:
If I remove the L3 interface directly, the count in "locks:[src:CLI:6, ]" 
*won't* decrease.
If I remove the L3 interface from the VRF, the count in "locks:[src:CLI:6, ]" 
*will* decrease. Does this imply the correct API sequence should be "remove 
from VRF" before "delete interface"?

If I remove the L3 interface directly, would it bring any side effects?

Thanks,
Ryan


Re: [vpp-dev] #vapi -- Need multiple times " ip table del xxx" to delete a specific 'ip table' within vpp?

2020-01-13 Thread Neale Ranns via Lists.Fd.Io
Hi Ryan,

It’s probably a sign that you have bound multiple interfaces to that table:
  set int ip table <interface> <table-id>

and you need to unbind them all (or bind them back to the default table) before 
deleting the table:
  set int ip table <interface> 0

regards,
neale
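(Editorial illustration, not part of Neale's mail; the interface name and table id below are made up for the example. The full sequence of binding an interface to a table and unbinding it before the table is deleted would look like:)

```
vpp# ip table add 10
vpp# set interface ip table GigabitEthernet13/0/0 10
vpp# set interface ip table GigabitEthernet13/0/0 0
vpp# ip table del 10
```

Each binding holds a lock on the table, which would explain why the count in "locks:[src:CLI:6, ]" tracks the number of bound interfaces, per Neale's explanation.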

From:  on behalf of "rya...@yunify.com" 
Date: Monday 13 January 2020 at 21:57
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] #vapi -- Need multiple times " ip table del xxx" to delete a 
specific 'ip table' within vpp?

Hi guys,



I have a question when I delete 'ip table'/'vrf' within VPP.



It needs multiple "ip table del xxx" invocations to delete a specific 'ip 
table' within vpp.

The number is determined by the count in "locks:[src:CLI:6, ]".

For example, with the following 'ip table'/'vrf', I need to issue "ip table del 4114532" 
six times to delete this specific 'ip table'/'vrf'.

Is this designed behavior or an issue?  Thanks for the help!



vpp# show ip fib table 4114532
ipv4-VRF:4114532, fib_index:5, flow hash:[src dst sport dport proto ] locks:[src:CLI:6, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:49 buckets:1 uRPF:50 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:50 buckets:1 uRPF:51 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:52 buckets:1 uRPF:53 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:51 buckets:1 uRPF:52 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:53 buckets:1 uRPF:54 to:[0:0]]
[0] [@0]: dpo-drop ip4
vpp#



Thanks,

Ryan




Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Hi Benoit,

Thanks for your prompt response.

We are migrating from vpp 18.01 to vpp 19.08; that's why we want minimal
modification in our build system, and we want to keep using DPDK as we were
using it earlier.

DBGvpp# show log
2020/01/13 14:44:42:014 notice dhcp/clientplugin initialized
2020/01/13 14:44:42:051 warn   dpdk   EAL init args: -c 14 -n 4
--in-memory --log-level debug --file-prefix vpp -w :1b:00.0 -w
:13:00.0 --master-lcore 4
2020/01/13 14:44:42:603 notice dpdk   DPDK drivers found 2
ports...
2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 6 lcore(s)
2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 1 NUMA nodes
2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
:13:00.0 on NUMA socket -1
2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
socket, default to 0
2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
15ad:7b0 net_vmxnet3
2020/01/13 14:44:42:623 notice dpdk   EAL:   using IOMMU type 8
(No-IOMMU)
2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
port bar(3)
2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
:1b:00.0 on NUMA socket -1
2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
socket, default to 0
2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
15ad:7b0 net_vmxnet3
2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
port bar(3)
2020/01/13 14:45:02:475 errdpdk   Interface
GigabitEthernet13/0/0 error 1: Operation not permitted

2020/01/13 14:45:02:475 notice dpdk   vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
2020/01/13 14:45:02:475 notice dpdk   vmxnet3_dev_start(): Failed to configure v4 RSS


Thanks,
Chetan Bhasin

On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne) 
wrote:

> Hi Chetan,
>
> Any reason for not using VPP built-in vmxnet3 driver instead of DPDK? That
> should give you better performance and would be easier for us to debug. See
> https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
>
> Otherwise, can you share 'show logging' output?
>
> Ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: lundi 13 janvier 2020 15:20
> > To: vpp-dev 
> > Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue
> >
> > Hello Everyone,
> >
> > I am facing an issue while bringing up vpp with less than 2 rx and 2 tx
> > queue. I am using vpp19.08. I have configured pci's under the dpdk
> section
> > like below -
> >
> > 1)
> > dpdk {
> > # dpdk-config
> >  dev default {
> >  num-rx-desc 1024
> >  num-rx-queues 1
> >  num-tx-desc 1024
> >  num-tx-queues 1
> > # vlan-strip-offload off
> >  }
> > dev :1b:00.0 {
> > }
> > dev :13:00.0 {
> > }
> > }
> >
> > When I bring pci state to up  , it is showing error in "show hardware-
> > interfaces"
> >
> >  DBGvpp# set interface state GigabitEthernet13/0/0 up  DBGvpp# show
> > hardware-interfaces
> >   NameIdx   Link  Hardware
> > GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
> >   Link speed: 10 Gbps
> >   Ethernet address 00:50:56:9b:f5:c5
> >   VMware VMXNET3
> > carrier down
> > flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> > Devargs:
> > rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
> > tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
> > pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa
> 0
> > max rx packet len: 16384
> > promiscuous: unicast off all-multicast off
> > vlan offload: strip off filter off qinq off
> > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
> >vlan-filter jumbo-frame scatter
> > rx offload active: ipv4-cksum jumbo-frame scatter
> > tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
> >multi-segs
> > tx offload active: multi-segs
> > rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
> > rss active:none
> > tx burst function: vmxnet3_xmit_pkts
> > rx burst function: vmxnet3_recv_pkts
> >   Errors:
> >
> > rte_eth_dev_start[port:0, errno:1]: Operation not permitted
> >
> > 2) When bring up system without "dev default " section , still facing the
> > same issue , this time default [Rx-queue is 1 and tx-queue is 2 (main
> > thread + 1 worker)]
> >
> > DBGvpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
> >   Link speed: 10 Gbps
> >   Ethernet address 00:50:56:9b:f5:c5
> >   VMware VMXNET3
> > carrier down
> > flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> > Devargs:
> > rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
> > tx: queues 2 (max 8), desc 

Re: [vpp-dev] VPP support for VRRP

2020-01-13 Thread Ahmed Bashandy
Thanks a lot

Ahmed


From: Matthew Smith 
Date: Monday, January 13, 2020 at 8:20 AM
To: "Jerome Tollet (jtollet)" 
Cc: Ahmed Bashandy , vpp-dev 
Subject: Re: [vpp-dev] VPP support for VRRP


Netgate has a plugin which adds VRRPv3 support to VPP. We plan to submit it in 
gerrit in the next month or two.

On Mon, Jan 13, 2020 at 4:27 AM Jerome Tollet via 
Lists.Fd.Io 
mailto:cisco@lists.fd.io>> wrote:

Of course, contributions are more than welcome in case you’d like to work on 
VRRP for VPP.


Netgate has a plugin for VPP which adds VRRPv3 support. We plan to submit it 
via gerrit sometime in the next month or so.

-Matt



Re: [vpp-dev] VPP support for VRRP

2020-01-13 Thread Matthew Smith via Lists.Fd.Io
Netgate has a plugin which adds VRRPv3 support to VPP. We plan to submit it
in gerrit in the next month or two.

On Mon, Jan 13, 2020 at 4:27 AM Jerome Tollet via Lists.Fd.Io  wrote:

>
> Of course, contributions are more than welcome in case you’d like to work
> on VRRP for VPP.
>
>
>
Netgate has a plugin for VPP which adds VRRPv3 support. We plan to submit
it via gerrit sometime in the next month or so.

-Matt


Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hmm,

 - I suppose you run VPP as root and not in a container
 - if you use CentOS/RHEL can you check disabling SELinux ('setenforce 0')
 - can you share the output of Linux dmesg and VPP 'show pci'

Best
ben
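(Editorial aside, not part of Ben's mail: the checks above map to standard commands; 'show pci' is a VPP CLI command and getenforce/setenforce are the usual SELinux tools.)

```
# check the SELinux mode and switch to permissive for a quick test
getenforce
sudo setenforce 0

# gather the other requested output
dmesg | tail -n 50
sudo vppctl show pci
```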

> -Original Message-
> From: chetan bhasin 
> Sent: lundi 13 janvier 2020 15:51
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev 
> Subject: Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx
> queue
> 
> Hi Benoit,
> 
> Thanks for your prompt response.
> 
> We are migrating from vpp 18.01 to vpp.19.08 , that's why we want least
> modification in our build system and we want to use DPDK as we were using
> earlier
> 
> .
> DBGvpp# show log
> 2020/01/13 14:44:42:014 notice dhcp/clientplugin initialized
> 2020/01/13 14:44:42:051 warn   dpdk   EAL init args: -c 14 -n
> 4 --in-memory --log-level debug --file-prefix vpp -w :1b:00.0 -w
> :13:00.0 --master-lcore 4
> 2020/01/13 14:44:42:603 notice dpdk   DPDK drivers found 2
> ports...
> 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 6 lcore(s)
> 2020/01/13 14:44:42:622 notice dpdk   EAL: Detected 1 NUMA
> nodes
> 
> 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> :13:00.0 on NUMA socket -1
> 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> socket, default to 0
> 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> 15ad:7b0 net_vmxnet3
> 2020/01/13 14:44:42:623 notice dpdk   EAL:   using IOMMU type
> 8 (No-IOMMU)
> 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> port bar(3)
> 2020/01/13 14:44:42:623 notice dpdk   EAL: PCI device
> :1b:00.0 on NUMA socket -1
> 2020/01/13 14:44:42:623 notice dpdk   EAL:   Invalid NUMA
> socket, default to 0
> 2020/01/13 14:44:42:623 notice dpdk   EAL:   probe driver:
> 15ad:7b0 net_vmxnet3
> 2020/01/13 14:44:42:623 notice dpdk   EAL: Ignore mapping IO
> port bar(3)
> 2020/01/13 14:45:02:475 errdpdk   Interface
> GigabitEthernet13/0/0 error 1: Operation not permitted
> 2020/01/13 14:45:02:475 notice dpdk
> vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
> 2020/01/13 14:45:02:475 notice dpdk   vmxnet3_dev_start():
> Failed to configure v4 RSS
> 
> 
> 
> Thanks,
> Chetan Bhasin
> 
> On Mon, Jan 13, 2020 at 7:58 PM Benoit Ganne (bganne)   > wrote:
> 
> 
>   Hi Chetan,
> 
>   Any reason for not using VPP built-in vmxnet3 driver instead of
> DPDK? That should give you better performance and would be easier for us
> to debug. See https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
> 
>   Otherwise, can you share 'show logging' output?
> 
>   Ben
> 
>   > -Original Message-
>   > From: vpp-dev@lists.fd.io    d...@lists.fd.io  > On Behalf Of chetan bhasin
>   > Sent: lundi 13 janvier 2020 15:20
>   > To: vpp-dev mailto:vpp-dev@lists.fd.io> >
>   > Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with
> 1rx/tx queue
>   >
>   > Hello Everyone,
>   >
>   > I am facing an issue while bringing up vpp with less than 2 rx and
> 2 tx
>   > queue. I am using vpp19.08. I have configured pci's under the dpdk
> section
>   > like below -
>   >
>   > 1)
>   > dpdk {
>   > # dpdk-config
>   >  dev default {
>   >  num-rx-desc 1024
>   >  num-rx-queues 1
>   >  num-tx-desc 1024
>   >  num-tx-queues 1
>   > # vlan-strip-offload off
>   >  }
>   > dev :1b:00.0 {
>   > }
>   > dev :13:00.0 {
>   > }
>   > }
>   >
>   > When I bring pci state to up  , it is showing error in "show
> hardware-
>   > interfaces"
>   >
>   >  DBGvpp# set interface state GigabitEthernet13/0/0 up  DBGvpp#
> show
>   > hardware-interfaces
>   >   NameIdx   Link  Hardware
>   > GigabitEthernet13/0/0  1down
> GigabitEthernet13/0/0
>   >   Link speed: 10 Gbps
>   >   Ethernet address 00:50:56:9b:f5:c5
>   >   VMware VMXNET3
>   > carrier down
>   > flags: admin-up pmd maybe-multiseg rx-ip4-cksum
>   > Devargs:
>   > rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
>   > tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
>   > pci: device 15ad:07b0 subsystem 15ad:07b0 address
> :13:00.00 numa 0
>   > max rx packet len: 16384
>   > promiscuous: unicast off all-multicast off
>   > vlan offload: strip off filter off qinq off
>   > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> tcp-lro
>   >vlan-filter jumbo-frame scatter
>   > rx offload active: ipv4-cksum jumbo-frame scatter
>   > tx offload avail:  vlan-insert ipv4-cksum udp-cksum 

[vpp-dev] Coverity run FAILED as of 2020-01-13 14:00:24 UTC

2020-01-13 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues are 2
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Chetan,

Any reason for not using VPP built-in vmxnet3 driver instead of DPDK? That 
should give you better performance and would be easier for us to debug. See 
https://docs.fd.io/vpp/20.01/d2/d1a/vmxnet3_doc.html
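(Editorial sketch based on the linked documentation, not part of Benoit's mail: using the native driver roughly means removing the device from the dpdk section of startup.conf and creating the interface explicitly. The PCI address below assumes the 13:00.0 device discussed in this thread, and the resulting interface name is an assumption; check the exact syntax against the doc.)

```
DBGvpp# create interface vmxnet3 0000:13:00.0
DBGvpp# set interface state vmxnet3-0/13/0/0 up
DBGvpp# show hardware-interfaces
```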

Otherwise, can you share 'show logging' output?

Ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of chetan bhasin
> Sent: lundi 13 janvier 2020 15:20
> To: vpp-dev 
> Subject: [vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue
> 
> Hello Everyone,
> 
> I am facing an issue while bringing up vpp with less than 2 rx and 2 tx
> queue. I am using vpp19.08. I have configured pci's under the dpdk section
> like below -
> 
> 1)
> dpdk {
> # dpdk-config
>  dev default {
>  num-rx-desc 1024
>  num-rx-queues 1
>  num-tx-desc 1024
>  num-tx-queues 1
> # vlan-strip-offload off
>  }
> dev :1b:00.0 {
> }
> dev :13:00.0 {
> }
> }
> 
> When I bring pci state to up  , it is showing error in "show hardware-
> interfaces"
> 
>  DBGvpp# set interface state GigabitEthernet13/0/0 up  DBGvpp# show
> hardware-interfaces
>   NameIdx   Link  Hardware
> GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
>   Link speed: 10 Gbps
>   Ethernet address 00:50:56:9b:f5:c5
>   VMware VMXNET3
> carrier down
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> Devargs:
> rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
> tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
> pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
> max rx packet len: 16384
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>vlan-filter jumbo-frame scatter
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>multi-segs
> tx offload active: multi-segs
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
> rss active:none
> tx burst function: vmxnet3_xmit_pkts
> rx burst function: vmxnet3_recv_pkts
>   Errors:
> 
> rte_eth_dev_start[port:0, errno:1]: Operation not permitted
> 
> 2) When bring up system without "dev default " section , still facing the
> same issue , this time default [Rx-queue is 1 and tx-queue is 2 (main
> thread + 1 worker)]
> 
> DBGvpp# show hardware-interfaces
>   NameIdx   Link  Hardware
> GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
>   Link speed: 10 Gbps
>   Ethernet address 00:50:56:9b:f5:c5
>   VMware VMXNET3
> carrier down
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> Devargs:
> rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
> tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
> pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
> max rx packet len: 16384
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>vlan-filter jumbo-frame scatter
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>multi-segs
> tx offload active: multi-segs
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
> rss active:none
> tx burst function: vmxnet3_xmit_pkts
> rx burst function: vmxnet3_recv_pkts
>   Errors:
> rte_eth_dev_start[port:0, errno:1]: Operation not permitted
> 
> 
> Thanks,
> Chetan Bhasin
> 



[vpp-dev] (Vpp 19,08)Facing issue with vmxnet3 with 1rx/tx queue

2020-01-13 Thread chetan bhasin
Hello Everyone,

I am facing an issue while bringing up vpp with *less than 2 rx and 2 tx
queues*. I am using vpp 19.08. I have configured the PCIs under the dpdk section
like below:

1)
dpdk {
# dpdk-config
 dev default {
 num-rx-desc 1024
 num-rx-queues 1
 num-tx-desc 1024
 num-tx-queues 1
# vlan-strip-offload off
 }
dev :1b:00.0 {
}
dev :13:00.0 {
}
}

When I bring pci state to up  , it is showing error in "show
hardware-interfaces"

 DBGvpp# set interface state GigabitEthernet13/0/0 up
 DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier down
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:


rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 1 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
rte_eth_dev_start[port:0, errno:1]: Operation not permitted

2) When bringing up the system without the "dev default" section, I am still facing the
same issue; this time the defaults are [Rx-queue is 1 and tx-queue is 2 (main
thread + 1 worker)]

DBGvpp# show hardware-interfaces
  NameIdx   Link  Hardware
GigabitEthernet13/0/0  1down  GigabitEthernet13/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:9b:f5:c5
  VMware VMXNET3
carrier down
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
Devargs:


rx: queues 1 (max 16), desc 1024 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 1024 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :13:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6-udp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts

  Errors:
rte_eth_dev_start[port:0, errno:1]: Operation not permitted

Thanks,
Chetan Bhasin


Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Ole Troan
Aleksander,

> Sorry, you are absolutely right. It's no issues here. In my VPP v19.08-stable 
> I have no commit 75761b93.

Thanks, that was good to hear!

Best regards,
Ole



Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Aleksander Djuric
Sorry, you are absolutely right. There are no issues here. My VPP v19.08-stable 
does not have commit 75761b93.
Thanks!
Aleksander


Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Ole Troan
Hi Aleksander,


> Yes. This fix still needed. Please take a look at the test below:
> 
> > import ipaddress
> > ipaddress.IPv4Network((u'192.168.0.222', 24), False)
> IPv4Network(u'192.168.0.0/24')
> 
> The IPv4Network method should not be used here, because in functions like an 
> ip_address_dump or ip_route_dump we expect to receive the host address with 
> network prefix, instead of it's network address.


Right, those should be separated by the following commit: 75761b93

E.g. ip_address_details uses vl_api_address_with_prefix_t, which maps in 
Python to the IPv6Interface class.
ip_route_details is a 'pure' prefix, with anything after the prefix length 
zeroed out. Or at least it should be.
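(Editorial illustration of the distinction, using only the standard Python ipaddress module; not part of the original mail.)

```
import ipaddress

# address-with-prefix keeps the host bits (what ip_address_details carries)
intf = ipaddress.IPv4Interface((u'192.168.0.222', 24))
print(intf)   # 192.168.0.222/24

# a 'pure' prefix zeroes the host bits when strict checking is disabled
# (what a route prefix such as ip_route_details is expected to carry)
net = ipaddress.IPv4Network((u'192.168.0.222', 24), False)
print(net)    # 192.168.0.0/24
```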

Do you have the above patch?
Otherwise I will not discount that there are issues here. ;-)

Cheers,
Ole


Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Aleksander Djuric
Yes, this fix is still needed. Please take a look at the test below:

>>> import ipaddress
>>> ipaddress.IPv4Network((u'192.168.0.222', 24), False)
IPv4Network(u'192.168.0.0/24')

The IPv4Network class should not be used here, because in functions like 
ip_address_dump or ip_route_dump we expect to receive the host address with 
the network prefix, instead of its network address.



Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Aleksander Djuric
On Mon, Jan 13, 2020 at 01:20 PM, Ole Troan wrote:

> 
> would you mind elaborating why you want the Python representation of an IP
> prefix to be a dictionary of address and length as opposed to an
> IPv6Network/IPv4Network object?

Hi Ole!

Thanks! It's strange, but some time ago in my tests the IPv[46]Network method 
returned the network address of the object, instead of the host address I was 
expecting to receive.
I will test it again.


[vpp-dev] #vapi -- Need multiple times " ip table del xxx" to delete a specific 'ip table' within vpp?

2020-01-13 Thread ryanlu
Hi guys,

I have a question when I delete 'ip table'/'vrf' within VPP.

It needs multiple "ip table del xxx" invocations to delete a specific 'ip 
table' within vpp.

The number is determined by the count in "locks:[src:CLI:6, ]".

For example, with the following 'ip table'/'vrf', I need to issue "ip table del 4114532" 
six times to delete this specific 'ip table'/'vrf'.

Is this designed behavior or an issue? Thanks for the help!

vpp# show ip fib table 4114532

ipv4-VRF:4114532, fib_index:5, flow hash:[src dst sport dport proto ] 
locks:[src:CLI:6, ]

0.0.0.0/0

unicast-ip4-chain

[@0]: dpo-load-balance: [proto:ip4 index:49 buckets:1 uRPF:50 to:[0:0]]

[0] [@0]: dpo-drop ip4

0.0.0.0/32

unicast-ip4-chain

[@0]: dpo-load-balance: [proto:ip4 index:50 buckets:1 uRPF:51 to:[0:0]]

[0] [@0]: dpo-drop ip4

224.0.0.0/4

unicast-ip4-chain

[@0]: dpo-load-balance: [proto:ip4 index:52 buckets:1 uRPF:53 to:[0:0]]

[0] [@0]: dpo-drop ip4

240.0.0.0/4

unicast-ip4-chain

[@0]: dpo-load-balance: [proto:ip4 index:51 buckets:1 uRPF:52 to:[0:0]]

[0] [@0]: dpo-drop ip4

255.255.255.255/32

unicast-ip4-chain

[@0]: dpo-load-balance: [proto:ip4 index:53 buckets:1 uRPF:54 to:[0:0]]

[0] [@0]: dpo-drop ip4

vpp#

Thanks,

Ryan


Re: [vpp-dev] VPP support for VRRP

2020-01-13 Thread Jerome Tollet via Lists.Fd.Io
Hello Ahmed,
The presentation you are referring to is about networking-vpp (OpenStack 
driver). It’s not about VPP in itself.

  *   Networking-vpp supports HA mode with VRRP for VPP using keepalived
  *   We currently have no plan to add support for VRRP

Of course, contributions are more than welcome in case you’d like to work on 
VRRP for VPP.
Jerome

From:  on behalf of Ahmed Bashandy 
Date: Monday 13 January 2020 at 10:30
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP support for VRRP

Hi

Slide 34 in the presentation
https://www.cisco.com/c/dam/m/en_us/service-provider/ciscoknowledgenetwork/files/0531-techad-ckn.pptx
says “support for HA (VRRP based)“

But when I searched the mailing list I found
https://lists.fd.io/g/vpp-dev/message/12862?p=,,,20,0,0,0::relevance,,vrrp,20,2,0,31351846
where Ole says that VRRP is not supported

Are there any plans to support VRRP anytime soon?

Ahmed



Re: [vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Ole Troan
Hi Aleksander,

> 3) vpp_papi: correct unformat ip address for ip_address_dump, ip_route_dump, 
> etc (unformat-api-prefix.patch)

would you mind elaborating why you want the Python representation of an IP 
prefix to be a dictionary of address and length as opposed to an 
IPv6Network/IPv4Network object?

--- a/src/vpp-api/python/vpp_papi/vpp_format.py 2019-10-30 11:50:40.676813774 
+0300
+++ b/src/vpp-api/python/vpp_papi/vpp_format.py 2019-12-26 16:10:54.014344478 
+0300
@@ -182,17 +182,11 @@
 
 def unformat_api_prefix_t(o):
 if o.address.af == 1:
-return ipaddress.IPv6Network((o.address.un.ip6, o.len), False)
+return {'address': ipaddress.IPv6Address(o.address.un.ip6), 'len': 
o.len}
 if o.address.af == 0:
-return ipaddress.IPv4Network((o.address.un.ip4, o.len), False)
+return {'address': ipaddress.IPv4Address(o.address.un.ip4), 'len': 
o.len}
 raise ValueError('Unknown address family {}'.format(o))
 
-if isinstance(o.address, ipaddress.IPv4Address):
-return ipaddress.IPv4Network((o.address, o.len), False)
-if isinstance(o.address, ipaddress.IPv6Address):
-return ipaddress.IPv6Network((o.address, o.len), False)
-raise ValueError('Unknown instance {}', format(o))
-
 def unformat_api_address_with_prefix_t(o):
 if o.address.af == 1:
 return ipaddress.IPv6Interface((o.address.un.ip6, o.len))


Best regards,
Ole


[vpp-dev] Set of the small bug-fixes for #vpp

2020-01-13 Thread Aleksander Djuric
Hello Everyone,

I have a small set of fixes for VPP, but unfortunately I have no time right now 
to push the code through git review.
I hope that these fixes are useful for the project, and I would be very 
happy if someone could do it instead of me.
In any case, I hope that these fixes will be useful for someone )

All of these fixes should apply to v20.01-rc0-1017-g1c6486f7b

Thanks to all in advance, and please take a look at the code in the attachments. 
Any suggestions would be helpful.

1) vppinfra: need to close clib file (close-clib-file.patch)
2) unix: too many warnings on deleted epool file (epool-file-delete.patch)
3) vpp_papi: correct unformat ip address for ip_address_dump, ip_route_dump, 
etc (unformat-api-prefix.patch)
4) dpdk: fix crash on device detach (detach-support.patch)
5) statseg: fix create socket error (statseg_create_socket.patch)

Best wishes!
---
Aleksander
diff --git a/src/vppinfra/elog.h b/src/vppinfra/elog.h
index 3cd067ce7070ad88b8424b6c43e8b6c152bde821..7bed87d672225123f8cf8bab8eaa909c693bca27 100644
--- a/src/vppinfra/elog.h
+++ b/src/vppinfra/elog.h
@@ -540,6 +540,8 @@ elog_write_file (elog_main_t * em, char *clib_file, int flush_ring)
   error = serialize (&m, serialize_elog_main, em, flush_ring);
   if (!error)
     serialize_close (&m);
+
+  serialize_close_clib_file (&m);
   return error;
 }
 
@@ -555,6 +557,8 @@ elog_read_file (elog_main_t * em, char *clib_file)
   error = unserialize (&m, unserialize_elog_main, em);
   if (!error)
     unserialize_close (&m);
+
+  serialize_close_clib_file (&m);
   return error;
 }
 
diff --git a/src/vppinfra/serialize.c b/src/vppinfra/serialize.c
index 93e44f94e078ae6cb607ac9aac1e6e96ccd91ed7..fc2a1065999381514268e1d4ebd4f1c82c5afbdf 100644
--- a/src/vppinfra/serialize.c
+++ b/src/vppinfra/serialize.c
@@ -1244,6 +1244,14 @@ unserialize_open_clib_file (serialize_main_t * m, char *file)
   return serialize_open_clib_file_helper (m, file, /* is_read */ 1);
 }
 
+void
+serialize_close_clib_file (serialize_main_t * m)
+{
+  if (m->stream.data_function_opaque > 0)
+  	close(m->stream.data_function_opaque);
+  m->stream.data_function_opaque = 0;
+}
+
 #endif /* CLIB_UNIX */
 
 /*
diff --git a/src/vppinfra/serialize.h b/src/vppinfra/serialize.h
index 90d615f60a484bb530e8533271b6dac754335d02..953dacf623a455882d29cbfcd5b7d0f327ddfc10 100644
--- a/src/vppinfra/serialize.h
+++ b/src/vppinfra/serialize.h
@@ -418,6 +418,7 @@ void unserialize_open_vector (serialize_main_t * m, u8 * vector);
 #ifdef CLIB_UNIX
 clib_error_t *serialize_open_clib_file (serialize_main_t * m, char *file);
 clib_error_t *unserialize_open_clib_file (serialize_main_t * m, char *file);
+void serialize_close_clib_file (serialize_main_t * m);
 
 void serialize_open_clib_file_descriptor (serialize_main_t * m, int fd);
 void unserialize_open_clib_file_descriptor (serialize_main_t * m, int fd);
--- a/src/vlib/unix/input.c	2019-10-30 11:50:40.616813871 +0300
+++ b/src/vlib/unix/input.c	2019-11-14 11:45:03.133999426 +0300
@@ -117,6 +117,8 @@
 
   if (epoll_ctl (em->epoll_fd, op, f->file_descriptor, &e) < 0)
 {
+  if (update_type == UNIX_FILE_UPDATE_DELETE)
+	f->file_descriptor = ~0;
   clib_unix_warning ("epoll_ctl");
   return;
 }
--- a/src/vpp-api/python/vpp_papi/vpp_format.py	2019-10-30 11:50:40.676813774 +0300
+++ b/src/vpp-api/python/vpp_papi/vpp_format.py	2019-12-26 16:10:54.014344478 +0300
@@ -182,17 +182,11 @@
 
 def unformat_api_prefix_t(o):
 if o.address.af == 1:
-return ipaddress.IPv6Network((o.address.un.ip6, o.len), False)
+return {'address': ipaddress.IPv6Address(o.address.un.ip6), 'len': o.len}
 if o.address.af == 0:
-return ipaddress.IPv4Network((o.address.un.ip4, o.len), False)
+return {'address': ipaddress.IPv4Address(o.address.un.ip4), 'len': o.len}
 raise ValueError('Unknown address family {}'.format(o))
 
-if isinstance(o.address, ipaddress.IPv4Address):
-return ipaddress.IPv4Network((o.address, o.len), False)
-if isinstance(o.address, ipaddress.IPv6Address):
-return ipaddress.IPv6Network((o.address, o.len), False)
-raise ValueError('Unknown instance {}', format(o))
-
 def unformat_api_address_with_prefix_t(o):
 if o.address.af == 1:
 return ipaddress.IPv6Interface((o.address.un.ip6, o.len))
diff --git a/src/plugins/dpdk/api/dpdk_api.c b/src/plugins/dpdk/api/dpdk_api.c
index 5ff8d5f..09b8c52 100755
--- a/src/plugins/dpdk/api/dpdk_api.c
+++ b/src/plugins/dpdk/api/dpdk_api.c
@@ -320,7 +320,8 @@ dpdk_api_init (vlib_main_t * vm)
 VLIB_INIT_FUNCTION (dpdk_api_init) =
 {
   .runs_after = VLIB_INITS ("dpdk_init"),
-/* *INDENT-OFF* */
+};
+/* *INDENT-ON* */
 
 /*
  * fd.io coding-style-patch-verification: ON
diff --git a/src/plugins/dpdk/device/format.c b/src/plugins/dpdk/device/format.c
index 942def6..20fcadd 100644
--- a/src/plugins/dpdk/device/format.c
+++ b/src/plugins/dpdk/device/format.c
@@ -142,7 +142,7 @@ format_dpdk_device_name (u8 * s, va_list * args)
   u32 i 

[vpp-dev] VPP support for VRRP

2020-01-13 Thread Ahmed Bashandy
Hi

Slide 34 in the presentation
https://www.cisco.com/c/dam/m/en_us/service-provider/ciscoknowledgenetwork/files/0531-techad-ckn.pptx
says “support for HA (VRRP based)“

But when I searched the mailing list I found
https://lists.fd.io/g/vpp-dev/message/12862?p=,,,20,0,0,0::relevance,,vrrp,20,2,0,31351846
where Ole says that VRRP is not supported

Are there any plans to support VRRP anytime soon?

Ahmed
