Re: [vpp-dev] ip routing by mpls

2017-12-19 Thread Holoo Gulakh
Hi,
It works by adding next_hop_weight.
thanks
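For the archive, the working call can be sketched as follows. The argument names are copied from the quoted message below; next_hop_weight=1 follows Neale's suggestion and is an assumption, not verified here against a live VPP instance:

```python
# Hypothetical sketch of the corrected ip_add_del_route arguments.
# Field names come from the original message; next_hop_weight=1 is the
# addition that made the MPLS out-label take effect per this thread.
args = dict(
    is_add=1, is_ipv6=0, is_multipath=1,
    dst_address=b"\x01\x02\x03\x04", dst_address_length=32,
    next_hop_sw_if_index=2,
    next_hop_weight=1,               # previously missing
    next_hop_n_out_labels=1,
    next_hop_out_label_stack=[78],
)
# r = vpp.ip_add_del_route(**args)   # requires a connected vpp_papi client
print(args["next_hop_weight"])  # -> 1
```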

On Tue, Dec 19, 2017 at 6:15 PM, Neale Ranns (nranns) 
wrote:

> Hi Holoo,
>
>
>
> I think you need to add weight=1 to the arg list.
>
> If that doesn’t work, can you show me;
>
> sh ip fib 1.2.3.4/32
>
>
>
> Thanks,
>
> neale
>
>
>
> *From: * on behalf of Holoo Gulakh <
> holoogul...@gmail.com>
> *Date: *Tuesday, 19 December 2017 at 13:39
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] ip routing by mpls
>
>
>
> Hi,
>
> I used the following command to enter an entry into IP FIB so that I can
> route incoming packets by their mpls label:
>
>
>
>" vppctl ip route add 5.6.7.8/32 via 10.10.10.10
> GigabitEthernet0/9/0 out-label 46 "
>
>
>
> Now IP FIB has a new entry like this:
>
>
>
> 5.6.7.8/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:26 to:[0:0]]
> [0] [@10]: mpls-label:[0]:[46:255:0:eos]
> [@1]: arp-mpls: via 10.10.10.10 GigabitEthernet0/9/0
>
>
>
> ===
>
> My Question:
>
> ===
>
> I am trying to add an entry to IP FIB using API:
>
>
>
> " r = vpp.ip_add_del_route(is_add=1, is_ipv6=0, is_multipath=1,
> dst_address="\x01\x02\x03\x04", dst_address_length=32,
> next_hop_sw_if_index=2, next_hop_n_out_labels=1,
> next_hop_out_label_stack=[78]) "
>
> I expect to get an entry like the one I got by CLI containing mpls label,
> but what I get is:
> 1.2.3.4/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:25 to:[0:0]]
> [0] [@3]: arp-ipv4: via 1.2.3.4 GigabitEthernet0/9/0
>
> This entry does not have any mpls label. What is wrong with my API?
>
> thanks in advance
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] kube-proxy test fail

2017-12-19 Thread Ni, Hongjun

Hi Gabriel,

I used below command and it works well:
make test TEST=test_kubeproxy
Could you give it a try?

Thanks,
Hongjun

From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Tuesday, December 19, 2017 8:54 PM
To: vpp-dev@lists.fd.io; Ni, Hongjun 
Subject: kube-proxy test fail


Hi Hongjun,



I just ran the kube-proxy tests and I end up stuck in kp_vip_find_index() while 
processing a cli command.

Below is the command I used and the backtrace I get.



make test-all V=2 TEST=*.TestKP.*

...
0x7eaa9c04 in kp_vip_find_index (prefix=prefix@entry=0x7ec3dbe8, 
plen=104 'h',
vip_index=vip_index@entry=0x7ec3dbbc)
at /home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp.c:457
457   kp_get_writer_lock();
(gdb) bt
#0  0x7eaa9c04 in kp_vip_find_index 
(prefix=prefix@entry=0x7ec3dbe8, plen=104 'h',
vip_index=vip_index@entry=0x7ec3dbbc)
at /home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp.c:457
#1  0x7eaaeacc in kp_pod_command_fn (vm=, 
input=,
cmd=) at 
/home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp_cli.c:132
#2  0xbf62cfd0 in vlib_cli_dispatch_sub_commands (
vm=vm@entry=0xbf68cfd0 , cm=cm@entry=0xbf68d220 
,
input=input@entry=0x7ec3ddf8, parent_command_index=)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:588
#3  0xbf62d618 in vlib_cli_dispatch_sub_commands (
vm=vm@entry=0xbf68cfd0 , cm=cm@entry=0xbf68d220 
,
input=input@entry=0x7ec3ddf8, 
parent_command_index=parent_command_index@entry=0)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:566
#4  0xbf62d764 in vlib_cli_input (vm=vm@entry=0xbf68cfd0 
,
input=input@entry=0x7ec3ddf8, function=function@entry=0x411b78 
,
function_arg=function_arg@entry=281472808508960)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:662
#5  0x00411e78 in vl_api_cli_inband_t_handler (mp=0x3809c4ec)
at /home/gannega/vpp/build-data/../src/vpp/api/api.c:219
#6  0xbf6941ec in vl_msg_api_handler_with_vm_node (am=0xbf6be430 
,
the_msg=0x3809c4ec, vm=0xbf68cfd0 , 
node=0x7ec35000)
at /home/gannega/vpp/build-data/../src/vlibapi/api_shared.c:508
#7  0xbf69d544 in memclnt_process (vm=, 
node=0x139013a, f=)
at /home/gannega/vpp/build-data/../src/vlibmemory/memory_vlib.c:970
#8  0xbf635410 in vlib_process_bootstrap (_a=)
at /home/gannega/vpp/build-data/../src/vlib/main.c:1231
#9  0xbef127a8 in clib_calljmp ()
at /home/gannega/vpp/build-data/../src/vppinfra/longjmp.S:676


Best regards,



--

Gabriel Ganne

Re: [vpp-dev] Unable to make GTPU Tunnel work

2017-12-19 Thread Ryota Yushina

Hi Patel,

I faced the same issue; please see the following mails.
I hope they help you.
https://www.mail-archive.com/vpp-dev@lists.fd.io/msg04284.html

At least, v17.10 seems to require us to configure an ARP entry for the gtpu tunnel.
If you would rather not configure the ARP entry, you need to apply this patch:
https://gerrit.fd.io/r/#/c/9207/

Thanks
---
Best Regards,

Ryota Yushina,
NEC



> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of pravir.pa...@phazr.net
> Sent: Wednesday, December 20, 2017 5:17 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Unable to make GTPU Tunnel work
> 
> 
> Setup:
> +- ubuntu-test -
> |
> | [eth2: 10.1.13.197]
> | route add 11.9.0.0 netmask 255.255.0.0 gw 10.1.13.199
> +-- | 
> |
> +-VPP#7 | 
> | [GigabitEthernet0/6/0: 10.1.13.199]
> |
> | [gtpu_tunnel0: 11.9.0.1]
> |   |
> | [GigabitEthernet0/7/0: 10.1.14.199] --> vrf:152
> +-- || ---
> ||
> +-VPP#4 || ---
> | [TenGigabitEthernet82/0/0: GigabitEthernet0/c/0] --> vrf:152
> |
> | [loop0: 11.9.0.4]
> +- VPP#4 -
> 
> 
> Commands used to configure
> 
>  sudo vppctl set interface ip table GigabitEthernet0/7/0 152
>  sudo vppctl set interface ip address GigabitEthernet0/7/0 10.1.14.199/24
>  sudo vppctl create gtpu tunnel src 10.1.14.199 dst 10.1.14.200 teid  encap-vrf-id 152 decap-next node ip4-lookup
> 
>  sudo vppctl set interface ip address gtpu_tunnel0 11.9.0.1/16
> 
>  sudo vppctl ip route 11.9.0.0/16 via gtpu_tunnel0
>  sudo vppctl ip route 10.1.13.0/16 via GigabitEthernet0/6/0
>  sudo vppctl set interface state GigabitEthernet0/6/0 up
>  sudo vppctl set interface state GigabitEthernet0/7/0 up
> 
> 
> 
>  sudo vppctl set interface ip table GigabitEthernet0/c/0 152
>  sudo vppctl set interface ip address GigabitEthernet0/c/0 10.1.14.200/24
>  sudo vppctl create gtpu tunnel src 10.1.14.200 dst 10.1.14.199 teid  encap-vrf-id 152 decap-next node ip4-lookup
> 
>  sudo vppctl show gtpu tunnel
>  src 10.1.14.200 dst 10.1.14.199 teid  sw_if_index 4 encap_fib_index 1 fib_entry_index 16 decap_next index 4
> 
>  sudo vppctl create loopback interface
> 
>  sudo vppctl set interface ip address loop0 11.9.0.4/16
>  sudo vppctl ip route 10.1.13.0/24 via gtpu_tunnel1
>  sudo vppctl set interface state GigabitEthernet0/c/0 up
>  sudo vppctl set interface state loop0 up
> 
> 
> When trying to ping from ubuntu-test machine "ping 11.9.0.4" the VPP#7 just 
> restarts.
> 
> VPP Version: vpp v17.10-release built by jenkins on
> ubuntu1604-basebuild-8c-32g-2873 at Thu Oct 26 02:05:09 UTC 2017
> 
> cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
> 
> dpkg -l | grep vpp
> ii  vpp  17.10-release amd64 Vector Packet Processing--executables
> ii  vpp-dev 17.10-release amd64 Vector Packet Processing--development support
> ii  vpp-lib   17.10-release amd64 Vector Packet Processing--runtime libraries
> ii  vpp-plugins 17.10-release amd64 Vector Packet Processing--runtime plugins
> 
> 
> Any help appreciated.
> 
> Pravir Patel
> 
> 




Re: [vpp-dev] Need Help on an ipsec Problem

2017-12-19 Thread Bin Zhang (binzhang)
Hi Neale,

Yes, 172.28.128.4 is the tunnel endpoint.  After making a few more changes to 
the way I set up the ipsec tunnel, I was able to ping through now.  Really 
appreciate your help.

Regards,

Bin

From: "Neale Ranns (nranns)" 
Date: Tuesday, December 19, 2017 at 7:18 AM
To: "Bin Zhang (binzhang)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Need Help on an ipsec Problem

Hi Bin,

That looks like a FIB entry caused by there being an ARP entry for 172.28.128.4 
on GigE0/8/0. Is that true?

and it looks like from your trace that 172.28.128.4 is the tunnel endpoint. You 
don’t want to route packets to the tunnel’s destination via the tunnel…

thanks,
neale

From: "Bin Zhang (binzhang)" 
Date: Tuesday, 19 December 2017 at 02:28
To: "Neale Ranns (nranns)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Need Help on an ipsec Problem

Hi Neale,

Many thanks for the help.  You are right that I did not enable/configure the 
ipsec tunnel.  I encountered a new problem after I fixed that.  I think I am still 
missing some config to make this ipsec tunnel work in both directions. I 
received the echo reply from the destination, but vpp did not push the packet 
into the tunnel. After the ip4 lookup, it went to the rewrite and tx.  How do 
I configure the tunnel interface (or routing table) to make the packet go 
into the tunnel?

DBGvpp# show ip fib
..
172.28.128.4/32 – all packets to this address should be pushed into the tunnel
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:20 to:[9:1224] 
via:[1:84]]
[0] [@5]: ipv4 via 172.28.128.4 GigabitEthernet0/8/0: 
08002773718f08002794519e0800
..


DBGvpp# show trace
--- Start of thread 0 vpp_main ---
Packet 1

06:54:53:849574: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x5a386454 nsec 0x34d90965 vlan 0 vlan_tpid 0
06:54:53:849585: ethernet-input
  IP4: 4e:9a:96:eb:16:33 -> 02:fe:f1:95:12:6c
06:54:53:849593: ip4-input
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 64, length 84, checksum 0x468d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849596: ip4-lookup
  fib 0 dpo-idx 3 flow hash: 0x
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 64, length 84, checksum 0x468d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849600: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 3 : ipv4 via 172.28.128.4 GigabitEthernet0/8/0: 
08002773718f08002794519e0800 flow hash: 0x
  : 08002773718f08002794519e080045546ff83f01478d97010102ac1c
  0020: 800491d60d815464385a08170d001011
06:54:53:849604: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  IP4: 08:00:27:94:51:9e -> 08:00:27:73:71:8f
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 63, length 84, checksum 0x478d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849609: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0xd10f: current data 0, length 98, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  IP4: 08:00:27:94:51:9e -> 08:00:27:73:71:8f
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 63, length 84, checksum 0x478d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6

Thanks in advance,

Bin


From: "Neale Ranns (nranns)" 
Date: Sunday, December 17, 2017 at 8:07 AM
To: "Bin Zhang (binzhang)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Need Help on an ipsec Problem

Hi Bin,

I expect your IPsec tunnel is not enabled/configured to accept IPv4 packets.
Do:
  sh int feature 

and if you see:
ip4-unicast:
  ip4-drop

then the tunnel is configured to drop all IPv4 packets.
In order to enable any interface to receive IP it must either have an IP 
address applied;
  set int ip addr  p.q.r.s/t
Or be unnumbered to another interface that has one;
  set int ip addr  p.q.r.s/t
  set int unnumbered  use 

/neale

From:  on behalf of "Bin Zhang (binzhang)" 

Date: Friday, 15 December 2017 at 23:04
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Need Help on an ipsec Problem

Hi Team,

I am new to VPP and would appreciate your comments and debugging tips on the 
following problem.

I have set up an ipsec tunnel between two VMs (as shown in the diagram).  The 
client side is Strongswan and the server side is VPP.  But I can not ping 
through the tunnel from the client VM (on the left) to the subnet in the VPP VM 
(on the right).  In other words, “ping 151.1.1.2” failed.  The packet is 
dropped by VPP.

I used gdb to track the code execution.  The “next index” was changed from 2 
(IP4_INPUT_NEXT_LOOKUP) to 5 (IP4_INPUT_N_NEXT) in function 
vnet_get_config_data.



  /* Last 32 bits 

[vpp-dev] Unable to make GTPU Tunnel work

2017-12-19 Thread pravir.patel

Setup:
+- ubuntu-test -
|
| [eth2: 10.1.13.197]
| route add 11.9.0.0 netmask 255.255.0.0 gw 10.1.13.199
+-- | 
|
+-VPP#7 | 
| [GigabitEthernet0/6/0: 10.1.13.199]
|
| [gtpu_tunnel0: 11.9.0.1]
|   |
| [GigabitEthernet0/7/0: 10.1.14.199] --> vrf:152
+-- || ---
||
+-VPP#4 || ---
| [TenGigabitEthernet82/0/0: GigabitEthernet0/c/0] --> vrf:152
|
| [loop0: 11.9.0.4]
+- VPP#4 -


Commands used to configure

 sudo vppctl set interface ip table GigabitEthernet0/7/0 152
 sudo vppctl set interface ip address GigabitEthernet0/7/0 10.1.14.199/24
 sudo vppctl create gtpu tunnel src 10.1.14.199 dst 10.1.14.200 teid 
encap-vrf-id 152 decap-next node ip4-lookup

 sudo vppctl set interface ip address gtpu_tunnel0 11.9.0.1/16

 sudo vppctl ip route 11.9.0.0/16 via gtpu_tunnel0
 sudo vppctl ip route 10.1.13.0/16 via GigabitEthernet0/6/0
 sudo vppctl set interface state GigabitEthernet0/6/0 up
 sudo vppctl set interface state GigabitEthernet0/7/0 up

 
 
 sudo vppctl set interface ip table GigabitEthernet0/c/0 152
 sudo vppctl set interface ip address GigabitEthernet0/c/0 10.1.14.200/24
 sudo vppctl create gtpu tunnel src 10.1.14.200 dst 10.1.14.199 teid 
encap-vrf-id 152 decap-next node ip4-lookup

 sudo vppctl show gtpu tunnel
 src 10.1.14.200 dst 10.1.14.199 teid  sw_if_index 4 encap_fib_index 1
fib_entry_index 16 decap_next index 4

 sudo vppctl create loopback interface

 sudo vppctl set interface ip address loop0 11.9.0.4/16
 sudo vppctl ip route 10.1.13.0/24 via gtpu_tunnel1
 sudo vppctl set interface state GigabitEthernet0/c/0 up
 sudo vppctl set interface state loop0 up


When trying to ping from ubuntu-test machine "ping 11.9.0.4" the VPP#7 just
restarts.

VPP Version: vpp v17.10-release built by jenkins on
ubuntu1604-basebuild-8c-32g-2873 at Thu Oct 26 02:05:09 UTC 2017

cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"

dpkg -l | grep vpp
ii  vpp  17.10-release amd64 Vector Packet Processing--executables
ii  vpp-dev 17.10-release amd64 Vector Packet Processing--development
support
ii  vpp-lib   17.10-release amd64 Vector Packet Processing--runtime
libraries
ii  vpp-plugins 17.10-release amd64 Vector Packet Processing--runtime
plugins


Any help appreciated.

Pravir Patel




Re: [vpp-dev] IPsec with AES-NI MB cryptodev

2017-12-19 Thread Matthew Smith

Hi Sergio,

Thanks!

I just submitted https://gerrit.fd.io/r/#/c/9878/. It’s slightly different from the 
patch I pasted last time; it just turns off the offload flags instead of resetting 
all flags.

I’m testing with the ENA PMD (drivers/net/ena). I think the PMD sets the 
offload flags when it receives the packets originally (ena_ethdev.c - 
ena_rx_mbuf_prepare()). I haven’t looked at offloads with any other PMDs so I 
don’t know whether that’s a common practice. I am presuming that it’s not 
common or this problem probably would have occurred before.

-Matt


> On Dec 19, 2017, at 3:45 AM, Gonzalez Monroy, Sergio 
>  wrote:
> 
> Hey Matt,
> 
> Good stuff.
> 
> I think the change makes sense, remove any L4 checksum offload when doing 
> IPsec.
> I wonder why we have those offloads on in the first place, but that is a 
> different issue.
> 
> Regards,
> Sergio
> 
> On 18/12/2017 16:41, Matthew Smith wrote:
>> Hi Sergio,
>> 
>> I think I identified the problem.
>> 
>> When the UDP packet arrives, there are packet checksum offload flags set on 
>> the buffer (PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM). Those can be seen in the 
>> previous trace I sent. I’m not sure why a UDP packet is resulting in those 2 
>> particular flags being set. Maybe the trace function is showing the wrong 
>> thing, or maybe the PMD is doing something weird. I think after 
>> encapsulation & encryption the buffer still carries those original offload 
>> flags when it gets transmitted so the PMD tells the hardware to calculate 
>> the checksum. The UDP checksum is at the same offset in an IP packet as the 
>> lower 16 bits of an ESP sequence number is so those 2 bytes are being 
>> overwritten by the hardware.
>> 
>> The ICMP packet buffers don’t have any offload flags set, which is why those 
>> packets are being correctly delivered.
>> 
>> I only saw this behavior with the DPDK cryptodev and not the default openssl 
>> encryption, because the openssl encrypt function esp_encrypt_node_fn() 
>> allocates a new buffer and initializes the flags to 
>> VLIB_BUFFER_TOTAL_LENGTH_VALID. When I applied the following patch to reset 
>> the flags in the same way the default crypto encrypt function does, I 
>> stopped seeing the ‘auth failed’ on UDP packets and saw those packets arrive 
>> at the host on the other side of the tunnel.
>> 
>> diff --git a/src/plugins/dpdk/ipsec/esp_encrypt.c 
>> b/src/plugins/dpdk/ipsec/esp_encrypt.c
>> index b4873d4..7bf24a4 100644
>> --- a/src/plugins/dpdk/ipsec/esp_encrypt.c
>> +++ b/src/plugins/dpdk/ipsec/esp_encrypt.c
>> @@ -265,6 +265,8 @@ dpdk_esp_encrypt_node_fn (vlib_main_t * vm,
>>    iv_size = cipher_alg->iv_len;
>>    trunc_size = auth_alg->trunc_size;
>> 
>> +  b0->flags = VLIB_BUFFER_TOTAL_LENGTH_VALID;
>> +
>>    if (sa0->is_tunnel)
>>      {
>>        if (!is_ipv6 && !sa0->is_tunnel_ip6)  /* ip4inip4 */
>> 
>> 
>> I’m not sure if that’s all that needs to change. VPP also crashed a short 
>> while after I tested with that patch and I haven’t looked into the cause of 
>> the crash yet, so there may be more to properly fixing the issue than adding 
>> that statement.
>> 
>> -Matt
>> 
>> 
>> 
>>> On Dec 18, 2017, at 4:01 AM, Gonzalez Monroy, Sergio 
>>>  wrote:
>>> 
>>> Hi Matt,
>>> 
>>> Could you add verbose to the trace? ie. 'trace add dpdk-input 10 verbose'
>>> 
>>> Thanks,
>>> Sergio
>>> 
>>> On 15/12/2017 15:11, Matthew Smith wrote:
 Hi Sergio,
 
 Here is the sending side trace:
 
 Packet 1
 
 10:54:40:291456: dpdk-input
   VirtualFunctionEthernet0/6/0 rx queue 0
   buffer 0x6e4f: current data 14, length 84, free-list 0, clone-count 0, 
 totlen-nifb 0, trace 0x0
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
 l3-hdr-offset 14
   PKT MBUF: port 0, nb_segs 1, pkt_len 98
 buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 
 0x2c5b9440
 packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
 Packet Offload Flags
   IP4: 0e:c6:15:62:1f:9e -> 0e:7e:f2:5d:0d:b6
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
 10:54:40:291473: ip4-input
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
 10:54:40:291485: ip4-lookup
   fib 0 dpo-idx 2 flow hash: 0x
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
 10:54:40:291491: ip4-rewrite
   tx_sw_if_index 3 dpo-idx 2 : ipv4 via 0.0.0.0 ipsec0:  flow hash: 
 0x
   : 

Re: [vpp-dev] Need Help on an ipsec Problem

2017-12-19 Thread Neale Ranns (nranns)
Hi Bin,

That looks like a FIB entry caused by there being an ARP entry for 172.28.128.4 
on GigE0/8/0. Is that true?

and it looks like from your trace that 172.28.128.4 is the tunnel endpoint. You 
don’t want to route packets to the tunnel’s destination via the tunnel…

thanks,
neale

From: "Bin Zhang (binzhang)" 
Date: Tuesday, 19 December 2017 at 02:28
To: "Neale Ranns (nranns)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Need Help on an ipsec Problem

Hi Neale,

Many thanks for the help.  You are right that I did not enable/configure the 
ipsec tunnel.  I encountered a new problem after I fixed that.  I think I am still 
missing some config to make this ipsec tunnel work in both directions. I 
received the echo reply from the destination, but vpp did not push the packet 
into the tunnel. After the ip4 lookup, it went to the rewrite and tx.  How do 
I configure the tunnel interface (or routing table) to make the packet go 
into the tunnel?

DBGvpp# show ip fib
..
172.28.128.4/32 – all packets to this address should be pushed into the tunnel
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:20 to:[9:1224] 
via:[1:84]]
[0] [@5]: ipv4 via 172.28.128.4 GigabitEthernet0/8/0: 
08002773718f08002794519e0800
..


DBGvpp# show trace
--- Start of thread 0 vpp_main ---
Packet 1

06:54:53:849574: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x5a386454 nsec 0x34d90965 vlan 0 vlan_tpid 0
06:54:53:849585: ethernet-input
  IP4: 4e:9a:96:eb:16:33 -> 02:fe:f1:95:12:6c
06:54:53:849593: ip4-input
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 64, length 84, checksum 0x468d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849596: ip4-lookup
  fib 0 dpo-idx 3 flow hash: 0x
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 64, length 84, checksum 0x468d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849600: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 3 : ipv4 via 172.28.128.4 GigabitEthernet0/8/0: 
08002773718f08002794519e0800 flow hash: 0x
  : 08002773718f08002794519e080045546ff83f01478d97010102ac1c
  0020: 800491d60d815464385a08170d001011
06:54:53:849604: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  IP4: 08:00:27:94:51:9e -> 08:00:27:73:71:8f
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 63, length 84, checksum 0x478d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6
06:54:53:849609: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0xd10f: current data 0, length 98, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  IP4: 08:00:27:94:51:9e -> 08:00:27:73:71:8f
  ICMP: 151.1.1.2 -> 172.28.128.4
tos 0x00, ttl 63, length 84, checksum 0x478d
fragment id 0x6ff8
  ICMP echo_reply checksum 0x91d6

Thanks in advance,

Bin


From: "Neale Ranns (nranns)" 
Date: Sunday, December 17, 2017 at 8:07 AM
To: "Bin Zhang (binzhang)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Need Help on an ipsec Problem

Hi Bin,

I expect your IPsec tunnel is not enabled/configured to accept IPv4 packets.
Do:
  sh int feature 

and if you see:
ip4-unicast:
  ip4-drop

then the tunnel is configured to drop all IPv4 packets.
In order to enable any interface to receive IP it must either have an IP 
address applied;
  set int ip addr  p.q.r.s/t
Or be unnumbered to another interface that has one;
  set int ip addr  p.q.r.s/t
  set int unnumbered  use 

/neale

From:  on behalf of "Bin Zhang (binzhang)" 

Date: Friday, 15 December 2017 at 23:04
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Need Help on an ipsec Problem

Hi Team,

I am new to VPP and would appreciate your comments and debugging tips on the 
following problem.

I have set up an ipsec tunnel between two VMs (as shown in the diagram).  The 
client side is Strongswan and the server side is VPP.  But I can not ping 
through the tunnel from the client VM (on the left) to the subnet in the VPP VM 
(on the right).  In other words, “ping 151.1.1.2” failed.  The packet is 
dropped by VPP.

I used gdb to track the code execution.  The “next index” was changed from 2 
(IP4_INPUT_NEXT_LOOKUP) to 5 (IP4_INPUT_N_NEXT) in function 
vnet_get_config_data.



  /* Last 32 bits are next index. */

  *next_index = d[n];


How do I move forward with my investigation?

More info:
[1] Packet trace of the ping packet.  As we can see, the de-tunneling worked 
and the (inner) ICMP packet was moved to ip4-input.  But it was then dropped.
00:12:16:877859: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0x494a: current data 14, length 152, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 

[vpp-dev] FRR3 and VPP, zebra FIB push to FPM

2017-12-19 Thread Joe Botha
Hi!

I'd like to use VPP as a router.

I'm planning to use Free Range Routing 3.0.2's bgpd and
zebra as the control plane, and using zebra's FIB push
feature to manage the VPP 17.10 control plane.

I'm busy building a python app which acts as a forwarding
plane manager (fpm) and listens for the netlink messages
which zebra sends. I'll then add and remove routes in VPP
with the VPP Python API.

That's the current plan.

I've looked into the Sandbox Router related links:

https://wiki.fd.io/view/VPP_Sandbox/router
https://github.com/FRRouting/frr/wiki/Alternate-forwarding-planes:-VPP
https://lists.fd.io/pipermail/vpp-dev/2017-July/005828.html

Some questions:

Does the FPM plan above make sense, or
is there an obvious easier way to do this?

Should I be building the FPM to use Protobuf messages,
rather than Netlink?

Know of any other FPM projects for VPP I can look? Seems
ONOS does something similar.

Would I need the tap devices in the OS, as the Sandbox
router plugin uses? or, do I just add other "dummy"
interfaces in the OS which match the IPs of the real DPDK
interfaces?

Does anybody know of a project that's connecting FRR3 and
VPP17.10, or similar new'ish versions to end up with a
bgp border router?

-- 
Swimmingly,
 Joe 
 
 www.swimgeek.com/blog  +27 82 562 6167  instagram.com/joe.swimgeek
"...all progress depends on the unreasonable man."



[vpp-dev] VPP Design Documents

2017-12-19 Thread Holoo Gulakh
Hi,
I want to analyze and understand the VPP code, but as you know, getting insight
into how things are implemented from the source code alone is difficult.

Can I have the VPP design documents (software engineering) to understand:
+ how the project is structured and divided into multiple parts?
+ how the different parts communicate with each other?
+ how to make a change in some part of the code?
and so on.

I need more detail than what is provided in the wiki and readme files.

thanks in advance

[vpp-dev] kube-proxy test fail

2017-12-19 Thread Gabriel Ganne
Hi Hongjun,


I just ran the kube-proxy tests and I end up stuck in kp_vip_find_index() while 
processing a cli command.

Below is the command I used and the backtrace I get.


make test-all V=2 TEST=*.TestKP.*

...

0x7eaa9c04 in kp_vip_find_index (prefix=prefix@entry=0x7ec3dbe8, 
plen=104 'h',
vip_index=vip_index@entry=0x7ec3dbbc)
at /home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp.c:457
457   kp_get_writer_lock();
(gdb) bt
#0  0x7eaa9c04 in kp_vip_find_index 
(prefix=prefix@entry=0x7ec3dbe8, plen=104 'h',
vip_index=vip_index@entry=0x7ec3dbbc)
at /home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp.c:457
#1  0x7eaaeacc in kp_pod_command_fn (vm=, 
input=,
cmd=) at 
/home/gannega/vpp/build-data/../src/plugins/kubeproxy/kp_cli.c:132
#2  0xbf62cfd0 in vlib_cli_dispatch_sub_commands (
vm=vm@entry=0xbf68cfd0 , cm=cm@entry=0xbf68d220 
,
input=input@entry=0x7ec3ddf8, parent_command_index=)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:588
#3  0xbf62d618 in vlib_cli_dispatch_sub_commands (
vm=vm@entry=0xbf68cfd0 , cm=cm@entry=0xbf68d220 
,
input=input@entry=0x7ec3ddf8, 
parent_command_index=parent_command_index@entry=0)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:566
#4  0xbf62d764 in vlib_cli_input (vm=vm@entry=0xbf68cfd0 
,
input=input@entry=0x7ec3ddf8, function=function@entry=0x411b78 
,
function_arg=function_arg@entry=281472808508960)
at /home/gannega/vpp/build-data/../src/vlib/cli.c:662
#5  0x00411e78 in vl_api_cli_inband_t_handler (mp=0x3809c4ec)
at /home/gannega/vpp/build-data/../src/vpp/api/api.c:219
#6  0xbf6941ec in vl_msg_api_handler_with_vm_node (am=0xbf6be430 
,
the_msg=0x3809c4ec, vm=0xbf68cfd0 , 
node=0x7ec35000)
at /home/gannega/vpp/build-data/../src/vlibapi/api_shared.c:508
#7  0xbf69d544 in memclnt_process (vm=, 
node=0x139013a, f=)
at /home/gannega/vpp/build-data/../src/vlibmemory/memory_vlib.c:970
#8  0xbf635410 in vlib_process_bootstrap (_a=)
at /home/gannega/vpp/build-data/../src/vlib/main.c:1231
#9  0xbef127a8 in clib_calljmp ()
at /home/gannega/vpp/build-data/../src/vppinfra/longjmp.S:676


Best regards,


--

Gabriel Ganne

[vpp-dev] Can ip4-inacl work with nat44-in2out?

2017-12-19 Thread st.linux.ily via vpp-dev
Hi,
We are trying to use the input ACL together with NAT, but found that they don't work together.
Version: 17.10
Topology: lan -> bd -> loop0(inside) -> nat44 -> wan(outside)

Enable inacl on loop0 using:

vppctl classify session hit-next 4294967295 table-index 0 match l3 ip4 dst 192.168.20.22 action set-ip4-fib-id 210
vppctl set interface input acl intfc loop0 ip4-table 0
vpp#  show interface features loop0
Driver feature paths configured on loop0...

ip4-unicast:
  nat44-in2out
  ip4-inacl


But we can not trace the packet go into the ip4-inacl feature as below.


Packet 1

11:28:26:100657: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x2001 len 98 snaplen 98 mac 66 net 80
  sec 0x5a3859b3 nsec 0x89d3c8d vlan 0 vlan_tpid 0
11:28:26:100692: ethernet-input
  IP4: 52:99:aa:f0:c2:13 -> de:ad:00:00:00:00
11:28:26:100734: l2-input
  l2-input: sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13
11:28:26:100755: l2-learn
  l2-learn: sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13 bd_index 1
11:28:26:100784: l2-fwd
  l2-fwd:   sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13 bd_index 1
11:28:26:100793: ip4-input
  ICMP: 192.168.1.2 -> 192.168.20.22
tos 0x00, ttl 64, length 84, checksum 0x17dc
fragment id 0x8c64, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8c1d
11:28:26:100817: nat44-in2out
  NAT44_IN2OUT_FAST_PATH: sw_if_index 5, next index 3, session -1
11:28:26:100829: nat44-in2out-slowpath
  NAT44_IN2OUT_SLOW_PATH: sw_if_index 5, next index 0, session 5
11:28:26:100897: ip4-lookup
  fib 0 dpo-idx 15 flow hash: 0x
  ICMP: 192.168.20.20 -> 192.168.20.22
tos 0x00, ttl 64, length 84, checksum 0x04ca
fragment id 0x8c64, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x3892
11:28:26:100921: ip4-rewrite
  tx_sw_if_index 3 dpo-idx 15 : ipv4 via 192.168.20.22 host-wan1: 
4e258981d94102fea6c9441f0800 flow hash: 0x
  : 4e258981d94102fea6c9441f080045548c6440003f0105cac0a81414c0a8
  0020: 141608003892a0b10001b359385a723402001011
11:28:26:100929: host-wan1-output
  host-wan1
  IP4: 02:fe:a6:c9:44:1f -> 4e:25:89:81:d9:41
  ICMP: 192.168.20.20 -> 192.168.20.22
tos 0x00, ttl 63, length 84, checksum 0x05ca
fragment id 0x8c64, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x3892

If disable the nat44-in2out feature on loop0, we can see the trace go into the 
ip4-inacl as below:

Packet 1

12:05:51:764284: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x2001 len 98 snaplen 98 mac 66 net 80
  sec 0x5a386277 nsec 0x1d5ad53b vlan 0 vlan_tpid 0
12:05:51:764324: ethernet-input
  IP4: 52:99:aa:f0:c2:13 -> de:ad:00:00:00:00
12:05:51:764364: l2-input
  l2-input: sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13
12:05:51:764391: l2-learn
  l2-learn: sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13 bd_index 1
12:05:51:764424: l2-fwd
  l2-fwd:   sw_if_index 1 dst de:ad:00:00:00:00 src 52:99:aa:f0:c2:13 bd_index 1
12:05:51:764457: ip4-input
  ICMP: 192.168.1.2 -> 192.168.20.22
tos 0x00, ttl 64, length 84, checksum 0x77f2
fragment id 0x2c4e, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8246
12:05:51:764498: ip4-inacl
  INACL: sw_if_index 5, next_index 1, table 0, offset 192
12:05:51:764513: ip4-lookup
  fib 3 dpo-idx 12 flow hash: 0x
  ICMP: 192.168.1.2 -> 192.168.20.22
tos 0x00, ttl 64, length 84, checksum 0x77f2
fragment id 0x2c4e, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8246
12:05:51:764541: ip4-arp
ICMP: 192.168.1.2 -> 192.168.20.22
  tos 0x00, ttl 64, length 84, checksum 0x77f2
  fragment id 0x2c4e, flags DONT_FRAGMENT
ICMP echo_request checksum 0x8246
12:05:51:764559: host-wan1-output
  host-wan1
  ARP: 02:fe:a6:c9:44:1f -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  02:fe:a6:c9:44:1f/192.168.20.20 -> 00:00:00:00:00:00/192.168.20.20
12:05:51:764584: error-drop
  ip4-arp: ARP requests sent

So the question is: how can we apply the classifier on an inside (L3) interface
during NAT?

BR,
xliao
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] ip routing by mpls

2017-12-19 Thread Holoo Gulakh
Hi,
I used the following command to enter an entry into IP FIB so that I can
route incoming packets by their mpls label:

   " vppctl ip route add 5.6.7.8/32 via 10.10.10.10
GigabitEthernet0/9/0 out-label 46 "

Now IP FIB has a new entry like this:

5.6.7.8/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:26 to:[0:0]]
[0] [@10]: mpls-label:[0]:[46:255:0:eos]
[@1]: arp-mpls: via 10.10.10.10 GigabitEthernet0/9/0

===
My Question:
===
I am trying to add an entry to IP FIB using API:

" r = vpp.ip_add_del_route(is_add=1, is_ipv6=0, is_multipath=1,
dst_address="\x01\x02\x03\x04", dst_address_length=32,
next_hop_sw_if_index=2, next_hop_n_out_labels=1,
next_hop_out_label_stack=[78] "

I expect to get an entry like the one I got by CLI containing mpls label,
but what I get is:
1.2.3.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:25 to:[0:0]]
[0] [@3]: arp-ipv4: via 1.2.3.4 GigabitEthernet0/9/0

This entry does not have any mpls label. What is wrong with my API?

thanks in advance
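Per the follow-up in this thread, the label is programmed once next_hop_weight
is supplied as well. A sketch of the adjusted argument set (only the arguments
are collected here; the vpp API object from the snippet above is assumed):

```python
# Argument set for ip_add_del_route per the thread's resolution: the
# call works once next_hop_weight is added. The vpp connection object
# itself is not constructed here; this only gathers the arguments.
route_args = dict(
    is_add=1, is_ipv6=0, is_multipath=1,
    dst_address=b"\x01\x02\x03\x04",   # 1.2.3.4
    dst_address_length=32,
    next_hop_sw_if_index=2,
    next_hop_weight=1,                 # the argument missing above
    next_hop_n_out_labels=1,
    next_hop_out_label_stack=[78],
)
# r = vpp.ip_add_del_route(**route_args)
```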

[vpp-dev] 180 iterative routing configured ,Segmentation fault appear

2017-12-19 Thread ??????

Hi guys,

I'm testing static routing. When I configure static routes in an iterative
loop, there is a SIGSEGV.
My configuration and more info are shown below:

configuration (for B in 2..180 and C = B + 1, i.e. C in 3..181):
ip route add 1.1.(C).1/24 via ip4-address 1.1.(B).1

VPP# ip route add 1.1.179.1/24 via ip4-address 1.1.178.1
VPP# ip route add 1.1.180.1/24 via ip4-address 1.1.179.1
VPP# ip route add 1.1.181.1/24 via ip4-address 1.1.180.1
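The iteration above can be sketched as a shell loop that emits the same 179
recursive-route commands (the file name routes.txt is illustrative):

```shell
# Generate the recursive route commands from the report:
# B runs 2..180 and each prefix 1.1.(B+1).1/24 resolves via 1.1.B.1.
for B in $(seq 2 180); do
  C=$((B + 1))
  echo "ip route add 1.1.${C}.1/24 via ip4-address 1.1.${B}.1"
done > routes.txt
```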


Program received signal SIGSEGV, Segmentation fault.
0x76bf5ed4 in round_pow2 (x=1808181231615, pow2=18446744073709551615) 
at /home/vpp/build-data/../src/vppinfra/clib.h:279
279 {
(gdb) bt
#0  0x76bf5ed4 in round_pow2 (x=1808181231615, 
pow2=18446744073709551615) at /home/vpp/build-data/../src/vppinfra/clib.h:279
#1  0x76bf6020 in vec_aligned_header_bytes (header_bytes=1936, 
align=16) at /home/vpp/build-data/../src/vppinfra/vec_bootstrap.h:112
#2  0x76bf606c in vec_aligned_header (v=0x7fffb58ce000, 
header_bytes=1936, align=16) at 
/home/vpp/build-data/../src/vppinfra/vec_bootstrap.h:118
#3  0x76bf666b in mheap_header (v=0x7fffb58ce000 
"\377\377\377\177\002") at 
/home/vpp/build-data/../src/vppinfra/mheap_bootstrap.h:272
#4  0x76bf8d1c in mheap_get_search_free_list (v=0x7fffb58ce000, 
n_user_bytes_arg=0x7fffb5eef180, align=4, align_offset=0) at 
/home/vpp/build-data/../src/vppinfra/mheap.c:533
#5  0x76bf9317 in mheap_get_aligned (v=0x7fffb58ce000, 
n_user_data_bytes=720, align=4, align_offset=0, offset_return=0x7fffb5eef228)
at /home/vpp/build-data/../src/vppinfra/mheap.c:696
#6  0x76c2e3fa in clib_mem_alloc_aligned_at_offset (size=720, align=4, 
align_offset=4, os_out_of_memory_on_failure=1) at 
/home/vpp/build-data/../src/vppinfra/mem.h:92
#7  0x76c2e7ba in vec_resize_allocate_memory (v=0x0, 
length_increment=179, data_bytes=720, header_bytes=4, data_align=4) at 
/home/vpp/build-data/../src/vppinfra/vec.c:59
#8  0x7742383a in _vec_resize (v=0x0, length_increment=179, 
data_bytes=716, header_bytes=0, data_align=0) at 
/home/vpp/build-data/../src/vppinfra/vec.h:142
#9  0x77426bb7 in fib_path_list_recursive_loop_detect 
(path_list_index=0, entry_indicies=0x7fffb5eef488) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:1198
#10 0x77418e61 in fib_entry_recursive_loop_detect (entry_index=8, 
entry_indicies=0x7fffb5eef538) at 
/home/vpp/build-data/../src/vnet/fib/fib_entry.c:1455
#11 0x7742b91a in fib_path_recursive_loop_detect (path_index=11, 
entry_indicies=0x7fffb5eef538) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1803
#12 0x77426c01 in fib_path_list_recursive_loop_detect 
(path_list_index=11, entry_indicies=0x7fffb5eef5c8) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:1201
#13 0x77418e61 in fib_entry_recursive_loop_detect (entry_index=10, 
entry_indicies=0x7fffb5eef678) at 
/home/vpp/build-data/../src/vnet/fib/fib_entry.c:1455
#14 0x7742b91a in fib_path_recursive_loop_detect (path_index=12, 
entry_indicies=0x7fffb5eef678) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1803


...

#536 0x7742b91a in fib_path_recursive_loop_detect (path_index=194, 
entry_indicies=0x7fffb5efcff8) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1803
#537 0x77426c01 in fib_path_list_recursive_loop_detect 
(path_list_index=194, entry_indicies=0x7fffb5efd088) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:1201
#538 0x77418e61 in fib_entry_recursive_loop_detect (entry_index=368, 
entry_indicies=0x7fffb5efd138) at 
/home/vpp/build-data/../src/vnet/fib/fib_entry.c:1455
#539 0x7742b91a in fib_path_recursive_loop_detect (path_index=195, 
entry_indicies=0x7fffb5efd138) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1803
#540 0x77426c01 in fib_path_list_recursive_loop_detect 
(path_list_index=195, entry_indicies=0x7fffb5efd1c8) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:1201
#541 0x77418e61 in fib_entry_recursive_loop_detect (entry_index=370, 
entry_indicies=0x7fffb5efd278) at 
/home/vpp/build-data/../src/vnet/fib/fib_entry.c:1455
#542 0x7742b91a in fib_path_recursive_loop_detect (path_index=196, 
entry_indicies=0x7fffb5efd278) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1803
#543 0x77426c01 in fib_path_list_recursive_loop_detect 
(path_list_index=196, entry_indicies=0x7fffb5efd308) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:1201
#544 0x7741dce3 in fib_entry_src_rr_use_covers_pl (src=0x7fffb70cc50c, 
fib_entry=0x7fffb6d9e2c0, cover=0x7fffb6d9e1d0)
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src_rr.c:107
#545 0x7741dea8 in fib_entry_src_rr_activate (src=0x7fffb70cc50c, 
fib_entry=0x7fffb6d9e2c0) at 
/home/vpp/build-data/../src/vnet/fib/fib_entry_src_rr.c:164
#546 0x7741ba2d in fib_entry_src_action_activate 
(fib_entry=0x7fffb6d9e2c0, source=FIB_SOURCE_RR) at 

Re: [vpp-dev] ACL Plugin: check for null session

2017-12-19 Thread khers
Dear Andrew

Unfortunately I can't reproduce this case. It's really a rare situation.

Regards

On Tue, Dec 12, 2017 at 5:43 PM, khers  wrote:

> Dear Andrew
>
> This is a good explanation of how session add and delete works.
> I think this is not a benign operation; I could reproduce the rare scenario
> you explained. I will send a backtrace and other details tomorrow.
>
> On Tue, Dec 12, 2017 at 2:46 PM, Andrew Yourtchenko wrote:
>
>> Dear Khers,
>>
>> I think you are right. Normally the entry in the session hash table is
>> deleted before any operations with the per-worker pool, so we should
>> not end up on that line. Also, the deletion itself usually happens as
>> a result of the idle timeout - meaning, no packets hit the session for
>> a comparatively very long time. Also, the deletion happens in the
>> interrupt handler on the worker, so that worker will not be able to
>> use that session anyway. The only theoretically possible scenario I
>> can see this happening is if the interrupt handler on the worker
>> owning the session starts the process of its deletion, and then the
>> worker handling the other leg of the connection receives the packet
>> just in time to do the lookup of the session in the global table just
>> before it gets deleted *AND* then the per-worker session deletion
>> happens earlier than we get the pointer for the session. Initially I
>> did not have the check when getting the pointer to the session, so in
>> this rare case we would reset the timeout or update the flags on the
>> free session. So this would be a benign operation. But as some of the
>> places used the ~0 as index, that was a bit problematic so i have
>> added that check. Coincidentally this is also the exact place which
>> appears to have triggered the other issue that you saw.
>>
>> So I think I might just simplify the check to ensure the index is
>> within the bounds of the allowed indices for the pool.
>>
>> Hope this clarifies the logic...
>>
>> --a
>>
>> On 12/11/17, khers  wrote:
>> > Dear Andrew
>> >
>> > I'm working on d594711a5d79859a7d0bde83a516f7ab52051d9b commit on
>> > stable/1710 branch. sorry for less info.
>> > I can't reproduce last issue I have reported, forgot the commit I were
>> > working on.
>> >
>> > Regards,
>> > Khers
>> >
>> > On Mon, Dec 11, 2017 at 12:24 PM, Andrew Yourtchenko <
>> ayour...@gmail.com>
>> > wrote:
>> >
>> >> Dear Khers,
>> >>
>> >> At least the exact commit# you are working with to get more context
>> would
>> >> be useful - line 1029 on master points to a call acl_fill_5tuple to
>> me...
>> >>
>> >> Also, I have not heard - were you able to reproduce the issue you
>> >> contacted about a while ago ?
>> >>
>> >> --a
>> >>
>> >> > On 11 Dec 2017, at 08:46, khers  wrote:
>> >> >
>> >> > Dear VPP folks,
>> >> >
>> >> > The get_session_ptr function may return null pointer, while we do not
>> >> check this situation in code, for example fa_node.c line 1029, if the
>> >> sess
>> >> equals null, we get segmentation fault in next usage of sess.
>> >> > Please share your thought about this.
>> >> >
>> >> > Regards,
>> >> > Khers
>> >> > ___
>> >> > vpp-dev mailing list
>> >> > vpp-dev@lists.fd.io
>> >> > https://lists.fd.io/mailman/listinfo/vpp-dev
>> >>
>> >
>>
>
>
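The simplified check Andrew proposes above (treating any index outside the
pool's bounds as "no session") could look roughly like this sketch; session_t,
the flat pool layout, and the function shape here are illustrative
assumptions, not the actual VPP code:

```c
#include <stddef.h>
#include <assert.h>

/* Illustrative only: a defensive session lookup that returns NULL for
 * any index outside the allocated range. A single >= bound also covers
 * the ~0 "no session" sentinel and any corrupted index value. */
typedef struct { int in_use; } session_t;

static session_t *
get_session_ptr (session_t * pool, size_t pool_len, size_t index)
{
  if (index >= pool_len)
    return NULL;
  return &pool[index];
}
```

Callers then only need one NULL check instead of comparing against ~0 in
several places.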

Re: [vpp-dev] SEGSEGV in acl using 2 core configuration

2017-12-19 Thread khers
Hi

Finally I could reproduce the situation.
commit: d594711a5d79859a7d0bde83a516f7ab52051d9b
branch:stable/1710

git diff 
startup.conf 

vpp commands:
vppctl set interface l2 bridge TenGigabitEthernet6/0/0 1
vppctl set interface l2 bridge TenGigabitEthernet6/0/1 1

vppctl set int state TenGigabitEthernet6/0/0 up
vppctl set int state TenGigabitEthernet6/0/1 up

vppctl set acl-plugin session timeout udp idle 5
vppctl set acl-plugin session timeout tcp idle 10
vppctl set acl-plugin session timeout tcp transient 5

vat>
acl_add_replace permit+reflect

acl_interface_add_del sw_if_index 1 add input acl 0
acl_interface_add_del sw_if_index 1 add output acl 0
acl_interface_add_del sw_if_index 2 add input acl 0
acl_interface_add_del sw_if_index 2 add output acl 0


bizarre output of 'sh acl-plugin sessions' : output

look at thread #1, sw_if_index 2!

gdb backtrace 

T-rex run as I said earlier.

Hardware Info 


Regards,
Khers

On Wed, Nov 8, 2017 at 8:48 PM, Andrew Yourtchenko 
wrote:

> Dear Khers,
>
> That is without applying the one liner change that I have proposed, right ?
>
> I would suggest to retry the reproduction on the same commit where you
> were previously able to reproduce it, and if it is reliably reproducible
> there - to apply that change and see if it addresses the issue. Then we can
> track if the latest commit fixed or merely masked it...
>
> --a
>
> On 8 Nov 2017, at 08:40, khers  wrote:
>
> Dear Andrew
>
> Sorry for my delay, I get last revision of master  (commit :
> e695cb4dbdb6f9424ac5a567799e67f791fad328 ), and
> segfault did not occur with the same environment and test scenario. I will
> try to reproduce the potential bug
> with running test with longer duration and more aggressive scenario.
>
> Regards,
> Khers
>
> On Wed, Oct 25, 2017 at 1:45 PM, Andrew Yourtchenko wrote:
>
>> Dear Khers,
>>
>> okay, cool! When testing the debug image, you could save the full dump
>> and the .debs for all the artefacts so just in case I could grab the
>> entire set of info and was able to look at it in my environment.
>>
>> Meantime, I had an idea for another potential failure mode, whereby
>> the session would get checked while there is a session being freed,
>> potentially resulting in a reallocation of the free bitmap in the
>> pool.
>>
>> So before the reproduction in the debug build, give a shot to this
>> one-line change
>>  in the release build and see if you still can reproduce the crash with
>> it:
>>
>> --- a/src/plugins/acl/fa_node.c
>> +++ b/src/plugins/acl/fa_node.c
>> @@ -609,6 +609,8 @@ acl_fa_verify_init_sessions (acl_main_t * am)
>>  for (wk = 0; wk < vec_len (am->per_worker_data); wk++) {
>>    acl_fa_per_worker_data_t *pw = &am->per_worker_data[wk];
>>    pool_alloc_aligned(pw->fa_sessions_pool,
>> am->fa_conn_table_max_entries, CLIB_CACHE_LINE_BYTES);
>> +  /* preallocate the free bitmap */
>> +  clib_bitmap_validate(pool_header(pw->fa_sessions_pool)->free_bitmap,
>> am->fa_conn_table_max_entries);
>>  }
>>
>> --a
>>
>> On 10/24/17, khers  wrote:
>> > Dear Andrew
>> >
>> > I used latest version of master branch, I will replay the test with
>> debug
>> > build to make more debug info ASAP.
>> > Vpp is running on Xeon E5-2600  series.
>> > I did the other tests with two rx-queues and two workers, and also with 4
>> > rx-queues and 4 workers; I got a segmentation fault in the same function.
>> >
>> > I will send more info in few days.
>> >
>> > Regards,
>> > Khers
>> >
>> > On Oct 24, 2017 6:43 PM, "Andrew  Yourtchenko" 
>> > wrote:
>> >
>> >> Dear Khers,
>> >>
>> >> Thanks for the info!
>> >>
>> >> I tried with these configs in my local setup (I tried even to increase
>> >> the multi-cpu contention by specifying 4 rx-queues instead of 2), but
>> >> it works ok for me on the master. What is the version you are testing
>> >> with ? I presume it is also the master, but just wanted to verify.
>> >>
>> >> To try to get more info about this happening: could you give a shot at
>> >> reproducing this on the debug build ? There are a few asserts that
>> >> would be handy to verify that they do hold true during your tests -
>> >> the location of the crash points to either the pool header being
>> >> corrupted by something (the asserts should catch that) or the pool
>> >> itself reallocated and memory used by something else (which should not
>> >> happen because the memory is preallocated during the initialisation
>> >> time - unless you change the max number of sessions after
>> >> initialisation).
>> >>
>> >> Also, could you tell a bit more about the hardware you are testing
>> >> with ? (cat /proc/cpuinfo)
>> >>
>> >> --a
>> >>
>> >> On 10/24/17, khers  wrote:

Re: [vpp-dev] IPsec with AES-NI MB cryptodev

2017-12-19 Thread Gonzalez Monroy, Sergio

Hey Matt,

Good stuff.

I think the change makes sense, remove any L4 checksum offload when 
doing IPsec.
I wonder why we have those offloads on in the first place, but that is a 
different issue.


Regards,
Sergio

On 18/12/2017 16:41, Matthew Smith wrote:

Hi Sergio,

I think I identified the problem.

When the UDP packet arrives, there are packet checksum offload flags set on the 
buffer (PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM). Those can be seen in the previous 
trace I sent. I’m not sure why a UDP packet is resulting in those 2 particular 
flags being set. Maybe the trace function is showing the wrong thing, or maybe the 
PMD is doing something weird. I think after encapsulation & encryption the 
buffer still carries those original offload flags when it gets transmitted so the 
PMD tells the hardware to calculate the checksum. The UDP checksum is at the same 
offset in an IP packet as the lower 16 bits of an ESP sequence number is so those 2 
bytes are being overwritten by the hardware.
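The offset coincidence described above can be checked arithmetically, assuming
a 20-byte IPv4 header with no options (offsets are from the start of the IP
header):

```python
# UDP checksum field vs. low 16 bits of the ESP sequence number:
# both land at byte offset 26 within the IP packet.
IP_HDR_LEN = 20
UDP_CKSUM_OFF = IP_HDR_LEN + 6       # UDP: sport(2) dport(2) len(2) cksum(2)
ESP_SPI_OFF = IP_HDR_LEN             # ESP starts right after the IP header
ESP_SEQ_OFF = ESP_SPI_OFF + 4        # 4-byte SPI precedes the sequence number
ESP_SEQ_LOW16_OFF = ESP_SEQ_OFF + 2  # low half of the 32-bit big-endian seq
```

So a hardware UDP-checksum fixup at offset 26 clobbers the low 16 bits of the
sequence number, which breaks ESP authentication on the receiver.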

The ICMP packet buffers don’t have any offload flags set, which is why those 
packets are being correctly delivered.

I only saw this behavior with the DPDK cryptodev and not the default openssl 
encryption, because the openssl encrypt function esp_encrypt_node_fn() 
allocates a new buffer and initializes the flags to 
VLIB_BUFFER_TOTAL_LENGTH_VALID. When I applied the following patch to reset the 
flags in the same way the default crypto encrypt function does, I stopped 
seeing the ‘auth failed’ on UDP packets and saw those packets arrive at the 
host on the other side of the tunnel.

diff --git a/src/plugins/dpdk/ipsec/esp_encrypt.c 
b/src/plugins/dpdk/ipsec/esp_encrypt.c
index b4873d4..7bf24a4 100644
--- a/src/plugins/dpdk/ipsec/esp_encrypt.c
+++ b/src/plugins/dpdk/ipsec/esp_encrypt.c
@@ -265,6 +265,8 @@ dpdk_esp_encrypt_node_fn (vlib_main_t * vm,
  iv_size = cipher_alg->iv_len;
  trunc_size = auth_alg->trunc_size;
  
+	  b0->flags = VLIB_BUFFER_TOTAL_LENGTH_VALID;
+
  if (sa0->is_tunnel)
{
  if (!is_ipv6 && !sa0->is_tunnel_ip6)   /* ip4inip4 */


I’m not sure if that’s all that needs to change. VPP also crashed a short while 
after I tested with that patch and I haven’t looked into the cause of the crash 
yet, so there may be more to properly fixing the issue than adding that 
statement.

-Matt




On Dec 18, 2017, at 4:01 AM, Gonzalez Monroy, Sergio 
 wrote:

Hi Matt,

Could you add verbose to the trace? ie. 'trace add dpdk-input 10 verbose'

Thanks,
Sergio

On 15/12/2017 15:11, Matthew Smith wrote:

Hi Sergio,

Here is the sending side trace:

Packet 1

10:54:40:291456: dpdk-input
   VirtualFunctionEthernet0/6/0 rx queue 0
   buffer 0x6e4f: current data 14, length 84, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14
   PKT MBUF: port 0, nb_segs 1, pkt_len 98
 buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x2c5b9440
 packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
 Packet Offload Flags
   IP4: 0e:c6:15:62:1f:9e -> 0e:7e:f2:5d:0d:b6
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
10:54:40:291473: ip4-input
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
10:54:40:291485: ip4-lookup
   fib 0 dpo-idx 2 flow hash: 0x
   ICMP: 10.1.3.235 -> 10.0.1.253
 tos 0x00, ttl 64, length 84, checksum 0x78df
 fragment id 0xa7e1, flags DONT_FRAGMENT
   ICMP echo_request checksum 0x5047
10:54:40:291491: ip4-rewrite
   tx_sw_if_index 3 dpo-idx 2 : ipv4 via 0.0.0.0 ipsec0:  flow hash: 0x
   : 4554a7e140003f0179df0a0103eb0a0001fd08005047549c00013de0335a
   0020: 210e0200101112131415161718191a1b1c1d1e1f
10:54:40:291495: ipsec-if-output
   IPSec: spi 3181023528 seq 28
10:54:40:291500: dpdk-esp-encrypt
   cipher aes-cbc-128 auth sha1-96
   IPSEC_ESP: 10.1.2.79 -> 10.0.0.213
 tos 0x00, ttl 254, length 152, checksum 0xc0d3
 fragment id 0x
   ESP: spi 3181023528, seq 29
10:54:40:291514: dpdk-crypto-input
   status: success

Packet 2

10:54:45:312374: dpdk-input
   VirtualFunctionEthernet0/6/0 rx queue 0
   buffer 0x6e76: current data 14, length 112, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x1
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14
   PKT MBUF: port 0, nb_segs 1, pkt_len 126
 buf_len 2176, data_len 126, ol_flags 0x0, data_off 128, phys_addr 
0x2c5b9e00
 packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
 Packet Offload Flags
   PKT_TX_TCP_CKSUM (0x) TCP cksum of TX pkt. computed by NIC
   PKT_TX_SCTP_CKSUM (0x) SCTP cksum of TX