[vpp-dev] Events from VPP worker thread over shared memory

2019-03-06 Thread siddarth rai
Hi,

I want to send some events from VPP to a client over shared memory. The
triggers for these events are detected on my worker threads.

Can I send them directly from the worker threads, or do I need to send them
to the main thread first, from where they will be forwarded over shared
memory?

Regards,
Siddarth
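
One commonly used pattern is to hand the event off to the main thread with
vl_api_rpc_call_main_thread() and let the main thread put the message on the
client's shared-memory queue. A minimal sketch, assuming that helper
(declared in vlibmemory) is available in your build; the callback and payload
names are hypothetical:

#include <vlibmemory/api.h>

/* Runs on the main thread: here it is safe to allocate the API event
 * message and send it to the registered client over shared memory. */
static void
my_event_rpc_cb (void *arg)
{
  u32 session_index = *(u32 *) arg;
  /* build the vl_api_..._event_t for session_index and send it to the
   * client's queue here */
  (void) session_index;
}

/* Called from a worker thread: instead of writing to the API queues
 * directly, bounce the work to the main thread. The payload is copied,
 * so a stack variable is fine. */
static void
notify_client_from_worker (u32 session_index)
{
  vl_api_rpc_call_main_thread (my_event_rpc_cb,
                               (u8 *) &session_index,
                               sizeof (session_index));
}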


[vpp-dev] Getting invalid address while doing pool_alloc_aligned with release build

2019-03-06 Thread chetan bhasin
Hello everyone,

I am getting an invalid address while doing "pool_alloc_aligned" of 1M
sessions for 10 workers on a release build.

When I do the same with a debug build, the process crashes at the ASSERT
below:

always_inline mheap_elt_t *
mheap_elt_at_uoffset (void *v, uword uo)
{
  ASSERT (mheap_offset_is_valid (v, uo));   /* <-- crashes here */

  return (mheap_elt_t *) (v + uo - STRUCT_OFFSET_OF (mheap_elt_t, user_data));
}

Query 1: With the release build the invalid address makes the application
crash at random places because of heap corruption. What's the best way
to fix this?
Query 2: I have increased the heapsize from 10G to 40G and am still facing the
same issue. Is it because of low memory, or is the issue somewhere else?

Thanks,
Chetan Bhasin
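
For reference, a minimal sketch of the kind of pre-allocation described above
(the session type and counts are made up for illustration; note that 10
workers x 1M aligned elements must fit in the main heap sized by heapsize in
startup.conf, or in a separately created heap):

#include <vppinfra/cache.h>
#include <vppinfra/pool.h>

/* Hypothetical session element, sized for illustration only. */
typedef struct
{
  u64 key;
  u64 value;
} my_session_t;

#define N_WORKERS  10
#define N_SESSIONS (1 << 20)    /* ~1M entries per worker */

my_session_t *sessions[N_WORKERS];   /* one pool per worker thread */

static void
preallocate_session_pools (void)
{
  int i;
  for (i = 0; i < N_WORKERS; i++)
    /* Reserve N_SESSIONS cache-line-aligned slots up front so the pool
     * never grows (and never reallocates) on the datapath. */
    pool_alloc_aligned (sessions[i], N_SESSIONS, CLIB_CACHE_LINE_BYTES);
}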


[vpp-dev] Regarding interface-output arc and node

2019-03-06 Thread Prashant Upadhyaya
Hi,

I see that there is a node called 'interface-output',
and there is also a feature arc called 'interface-output'.

My understanding is that if I send a packet to the node
interface-output, it will further send the packet to the device-specific
node to accomplish the actual output.

If I make a new node and make it sit on the arc interface-output,
will my new node get packets if someone sends packets to
the node interface-output?
If yes, can my node then do the normal selection of the next node with
vnet_feature_next and send the packets down the line, effectively
reaching the node interface-output to complete the pipeline?

The objective of my new node sitting on the arc interface-output is to
snoop on all outgoing packets without breaking the output
pipeline.

Regards
-Prashant
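
For reference, registering a node on that arc usually looks like the sketch
below (the "tx-snoop" node name is hypothetical, the feature still has to be
enabled per interface, and the exact vnet_feature_next() signature differs
slightly between VPP releases):

#include <vnet/vnet.h>
#include <vnet/feature/feature.h>

/* Put a (hypothetical) tx-snoop node on the interface-output arc so it sees
 * every packet on its way to the device. Inside the node function,
 * vnet_feature_next() picks the next node on the arc, so the normal output
 * pipeline is preserved. */
VNET_FEATURE_INIT (tx_snoop_feature, static) = {
  .arc_name = "interface-output",
  .node_name = "tx-snoop",
};

/* The feature only takes effect once it is enabled on an interface,
 * e.g. from a CLI or API handler: */
static void
tx_snoop_enable_disable (u32 sw_if_index, int enable)
{
  vnet_feature_enable_disable ("interface-output", "tx-snoop",
                               sw_if_index, enable, 0, 0);
}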


Re: [vpp-dev] subif interface api handle function bug

2019-03-06 Thread John Lo (loj) via Lists.Fd.Io
Hi Joe,

Thank you very much for catching this bug.  I took a look at your patch, which 
looks to be the right fix for this problem.  Without this fix, I suppose the 
workaround is to always add the BVI interface to a BD last, after all other 
interfaces have been added to the BD.

Can you push your patch to fd.io gerrit directly for code review, please?  I'll 
be happy to review your patch and merge it to the master branch after it is built 
and passes regression properly.

The following URL describes how to push your patch to fd.io gerrit for review:
https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pushing

Regards,
John
From: vpp-dev@lists.fd.io  On Behalf Of Zhou You (Joe Zhou)
Sent: Wednesday, March 06, 2019 10:31 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] subif interface api handle function bug

Dear vpp devs:

  I met a bug while adding an 802.1q subif interface into a bridge domain via the 
python API. The BVI interface must be the first member of the bridge domain, but 
after I added a subif into the BD, the subif took the first position in the BD and 
L2 flooding didn't work well.

  Here is the information:
  vpp# show bridge-domain 10 detail
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term  BVI-Intf
   10       2      0      60        on        on      flood       on        off     loop10

          Interface         If-idx  ISN  SHG  BVI  TxFlood  VLAN-Tag-Rewrite
  GigabitEthernet2/0/0.10      8     7    0    -      *          pop-1
  loop10                       7     8    0    *      *          none
  GigabitEthernet1/0/0         1    11    0    -      *          none

  I read the code and found that the vl_api_create_vlan_subif_t_handler and 
vl_api_create_subif_t_handler functions lack the assignment 
template.flood_class = VNET_FLOOD_CLASS_NORMAL;
  I think it is a bug and have created a patch; looking forward to the bug being 
fixed :-)

  
  Best Regards
  Joe Zhou




[vpp-dev] subif interface api handle function bug

2019-03-06 Thread Zhou You (Joe Zhou)
Dear vpp devs:


  I met a bug while adding an 802.1q subif interface into a bridge domain via the 
python API. The BVI interface must be the first member of the bridge domain, but 
after I added a subif into the BD, the subif took the first position in the BD and 
L2 flooding didn't work well.

  Here is the information:

  vpp# show bridge-domain 10 detail
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term  BVI-Intf
   10       2      0      60        on        on      flood       on        off     loop10

          Interface         If-idx  ISN  SHG  BVI  TxFlood  VLAN-Tag-Rewrite
  GigabitEthernet2/0/0.10      8     7    0    -      *          pop-1
  loop10                       7     8    0    *      *          none
  GigabitEthernet1/0/0         1    11    0    -      *          none

  I read the code and found that the vl_api_create_vlan_subif_t_handler and 
vl_api_create_subif_t_handler functions lack the assignment 
template.flood_class = VNET_FLOOD_CLASS_NORMAL;

  I think it is a bug and have created a patch; looking forward to the bug being 
fixed :-)


  
  Best Regards

  Joe Zhou

interface_api.c.patch
Description: Binary data
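
For reference, the missing initialisation Joe describes would look roughly
like the sketch below inside the two API handlers (paraphrased context, not
the attached patch itself):

#include <vnet/vnet.h>
#include <string.h>

/* Sketch of the missing initialisation: in vl_api_create_vlan_subif_t_handler
 * and vl_api_create_subif_t_handler the sub-interface template is built and
 * then handed to vnet_create_sw_interface(); the fix is to set flood_class
 * on that template so the BVI keeps its position in the BD flood list. */
static void
init_subif_template (vnet_sw_interface_t * template, u32 sup_sw_if_index)
{
  memset (template, 0, sizeof (*template));
  template->type = VNET_SW_INTERFACE_TYPE_SUB;
  template->sup_sw_if_index = sup_sw_if_index;
  template->flood_class = VNET_FLOOD_CLASS_NORMAL;  /* the missing assignment */
}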


Re: [vpp-dev] VPP crash when deleting route related with GTPU tunnel endpoint

2019-03-06 Thread lollita
Hi, Neale.
   Sorry, I had done the test on my own branch yesterday.
I have retried the test on master at "* master  f940f8a 
[origin/master] session: use transport custom tx for app transports"
and tried the test with both VXLAN and GTPU; it is reproduced. Please check the backtrace.

   DBGvpp# ip route del 0.0.0.0/0 via 18.1.0.1

Thread 1 "lollita_main" received signal SIGSEGV, Segmentation fault.
0x775a4e1e in fib_entry_src_covered_inherit_add 
(fib_entry=0x7ffdf5cd352c, source=FIB_SOURCE_RR)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_src.c:951
/home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_src.c:951:27449:beg:0x775a4e1e
(gdb) bt
#0  0x775a4e1e in fib_entry_src_covered_inherit_add 
(fib_entry=0x7ffdf5cd352c, source=FIB_SOURCE_RR)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_src.c:951
#1  0x775a56b0 in fib_entry_src_action_reactivate 
(fib_entry=0x7ffdf5cd352c, source=FIB_SOURCE_RR)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_src.c:1168
#2  0x7759befd in fib_entry_cover_updated (fib_entry_index=16) at 
/home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry.c:1375
#3  0x775ac121 in fib_entry_cover_update_one (cover=0x7ffdf5cd30ac, 
covered=16, args=0x0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_cover.c:168
#4  0x775abf53 in fib_entry_cover_walk_node_ptr (depend=0x7ffdf5949e9c, 
args=0x7ffdf5c007c0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_cover.c:80
#5  0x77597782 in fib_node_list_walk (list=28, fn=0x775abf09 
, args=0x7ffdf5c007c0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_node_list.c:375
#6  0x775abfde in fib_entry_cover_walk (cover=0x7ffdf5cd30ac, 
walk=0x775ac0f5 , args=0x0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_cover.c:104
#7  0x775ac16f in fib_entry_cover_update_notify 
(fib_entry=0x7ffdf5cd30ac)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry_cover.c:177
#8  0x7759ac7b in fib_entry_post_update_actions 
(fib_entry=0x7ffdf5cd30ac, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=FIB_ENTRY_FLAG_NONE)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry.c:798
#9  0x7759b46b in fib_entry_source_removed (fib_entry=0x7ffdf5cd30ac, 
old_flags=FIB_ENTRY_FLAG_NONE)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry.c:989
#10 0x7759b6c0 in fib_entry_path_remove (fib_entry_index=0, 
source=FIB_SOURCE_CLI, rpath=0x7ffdf6c1a7bc)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x77585246 in fib_table_entry_path_remove2 (fib_index=0, 
prefix=0x7ffdf5c00ab0, source=FIB_SOURCE_CLI, rpath=0x7ffdf6c1a7bc)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/fib/fib_table.c:652
#12 0x76f59f8e in vnet_ip_route_cmd (vm=0x76801240 
, main_input=0x7ffdf5c00ee0, cmd=0x7ffdf5bafc7c)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vnet/ip/lookup.c:463
#13 0x76533abf in vlib_cli_dispatch_sub_commands (vm=0x76801240 
, cm=0x76801440 ,
input=0x7ffdf5c00ee0, parent_command_index=45) at 
/home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/cli.c:607
#14 0x7653396a in vlib_cli_dispatch_sub_commands (vm=0x76801240 
, cm=0x76801440 ,
input=0x7ffdf5c00ee0, parent_command_index=0) at 
/home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/cli.c:568
#15 0x76533eec in vlib_cli_input (vm=0x76801240 , 
input=0x7ffdf5c00ee0, function=0x765c22fc ,
function_arg=0) at /home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/cli.c:707
#16 0x765c7e7f in unix_cli_process_input (cm=0x76801ac0 
, cli_file_index=0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/unix/cli.c:2420
#17 0x765c8a40 in unix_cli_process (vm=0x76801240 
, rt=0x7ffdf5bf, f=0x0)
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/unix/cli.c:2536
#18 0x76572a9d in vlib_process_bootstrap (_a=140728715966800) at 
/home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/main.c:1463
#19 0x760277dc in clib_calljmp () from 
/home/vppshare/lollita/vpp-gerrit/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.04
#20 0x7ffdf51ff920 in ?? ()
#21 0x76572bc8 in vlib_process_startup (vm=0x8, p=0x7ffdf51ff960, 
f=0x765c02aa )
---Type <return> to continue, or q <return> to quit---
at /home/vppshare/lollita/vpp-gerrit/vpp/src/vlib/main.c:1485

   I also attach the configuration here for your reference.

set interface state eth0 up
set interface mtu 1500 eth0

create sub-interfaces eth0 181
set interface state eth0.181 up
set interface ip address eth0.181 18.1.0.31/24

create sub-interfaces eth0 182
set interface state eth0.182 up
set interface ip address eth0.182 18.2.0.31/24

ip route 0.0.0.0/0 via 18.1.0.1

The GTPU tunnel is created via:
create gtpu tunnel  src 1.1.1.1

Re: [vpp-dev] Submitting code for NAT PAP

2019-03-06 Thread Ole Troan
Hi John,

> I've added support to the NAT plugin for Paired-Address-Pooling (PAP) and 
> wanted to see if there is interest in my submitting it as a patch for review.
> 
> The changes modify the behaviour of user creation, address allocation, and 
> address management. Fundamentally it pairs a NAT user with an external IP 
> when the user is created. The plugin will then only hand out ports within 
> that external IP to that NAT user. The ceiling for max translations is 
> overridden by (ports per IP / max_users_per_IP), but one can manually set a 
> lower number of max translations. The max number of users per external IP is 
> also configurable.
> When a new user is seen, the system will pick the external IP with the lowest 
> number of paired addresses. This ensures that if we have a lot of external 
> addresses, we spread usage across them.
> 
> I've so far tested this in a lab with a few thousand simulated clients and it 
> has worked as intended. This fixes issues for services that require all user 
> connections to originate from the same source IP, such as banks, where 
> authentication otherwise breaks.

This is clearly better NAT behaviour. I would certainly like to see this 
upstreamed!

Cheers,
Ole


Re: [vpp-dev] VPP core dump

2019-03-06 Thread Andrew Yourtchenko


> On 6 Mar 2019, at 09:20, Raj  wrote:
> 
>> On Wed, Mar 6, 2019 at 2:52 PM Andrew 👽 Yourtchenko  
>> wrote:
>> 
>> Sounds like a memory corruption.
>> I am out of office for another week, so in the meantime if you might collect 
>> few postmortem dumps with reproductions, I will look at it when I return.
> 
> Sure, I will get some dumps with the problem reproduced. In the meantime,
> if you would like me to check anything specific, I can do that.

There are already unit tests covering the MACIP ACLs and they don't cause the 
problem, so it is some trigger that makes this happen. If you are able to 
isolate the trigger that is required to make this happen, it will help a lot.

Unfortunately I can't give you a step-by-step way to isolate that trigger, 
because it could be almost anything. Multicore/single core, interface 
actions (delete/create/etc.) coinciding with MACIP ACL modifications and similar 
things are what come to mind first, but this is by far not an 
exhaustive list...

—a 


> 
> Thanks and Regards,
> 
> Raj


[vpp-dev] Submitting code for NAT PAP

2019-03-06 Thread JB
Hi,

I've added support to the NAT plugin for Paired-Address-Pooling (PAP) and 
wanted to see if there is interest in my submitting it as a patch for review.

The changes modify the behaviour of user creation, address allocation, and 
address management. Fundamentally it pairs a NAT user with an external IP when 
the user is created. The plugin will then only hand out ports within that 
external IP to that NAT user. The ceiling for max translations is overridden by 
(ports per IP / max_users_per_IP), but one can manually set a lower number of 
max translations. The max number of users per external IP is also configurable.
When a new user is seen, the system will pick the external IP with the lowest 
number of paired addresses. This ensures that if we have a lot of external 
addresses, we spread usage across them.
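
For illustration, the "lowest number of paired addresses" selection could be
sketched like this (hypothetical types, not the actual NAT plugin data
structures):

#include <vppinfra/vec.h>

/* Hypothetical bookkeeping for one external address. */
typedef struct
{
  u32 external_ip;     /* external IPv4 address, host byte order */
  u32 paired_users;    /* users currently paired with this address */
} pap_address_t;

/* Pick the external address with the fewest paired users, so new users are
 * spread evenly across the pool; the chosen address then serves all of that
 * user's sessions. */
static pap_address_t *
pap_pick_external_ip (pap_address_t * addresses /* vector */ )
{
  pap_address_t *a, *best = 0;

  vec_foreach (a, addresses)
    if (best == 0 || a->paired_users < best->paired_users)
      best = a;

  return best;
}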

I've so far tested this in a lab with a few thousand simulated clients and it 
has worked as intended. This fixes issues for services that require all user 
connections to originate from the same source IP, such as banks, where 
authentication otherwise breaks.

Sincerely,
John


Re: [vpp-dev] VPP stats

2019-03-06 Thread Ole Troan
Hi Raj,

The section in startup.conf is statseg { … }, not stats { … }.

There is a C API in vpp-api/client/stat_client.h you can use.
Or a higher level Go, Python or C++ binding too.
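
For example, a minimal startup.conf stanza along these lines (assuming the
default socket path; the important part is that the section is statseg):

statseg {
  socket-name /run/vpp/stats.sock
}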

Cheers,
Ole

> On 6 Mar 2019, at 14:27, Raj  wrote:
> 
> Hi all,
> 
> I am trying to get the stats from VPP SHM. If I understand correctly,
> I need to open  VPP stats Unix domain socket and read from the
> corresponding memory mapped segment.
> 
> My problem is that the stats socket file is missing. I added the
> following line in startup.conf, but to no avail.
> 
> stats { socket-name /run/vpp/stats.sock }
> 
> Do I have to do anything else to get the socket file and possibly
> enable SHM logging?
> 
> Thanks and Regards,
> 
> Raj


[vpp-dev] VPP 19.01.1 Maintenance Release is happening today!

2019-03-06 Thread Dave Wallace

Folks,

I am creating the 19.01.1 Maintenance Release today.  Last call for 
patches for 19.01.1 expires at 1800 UTC.


VPP committers, please do not merge any patches until the maintenance 
release is complete.


Thanks,
-daw-


[vpp-dev] VPP stats

2019-03-06 Thread Raj
Hi all,

I am trying to get the stats from VPP SHM. If I understand correctly,
I need to open the VPP stats Unix domain socket and read from the
corresponding memory-mapped segment.

My problem is that the stats socket file is missing. I added the
following line in startup.conf, but to no avail.

stats { socket-name /run/vpp/stats.sock }

Do I have to do anything else to get the socket file and possibly
enable SHM logging?

Thanks and Regards,

Raj


Re: [vpp-dev] VPP core dump

2019-03-06 Thread Raj
On Wed, Mar 6, 2019 at 2:52 PM Andrew 👽 Yourtchenko  wrote:
>
> Sounds like a memory corruption.
> I am out of office for another week, so in the meantime if you might collect 
> few postmortem dumps with reproductions, I will look at it when I return.

Sure, I will get some dumps with the problem reproduced. In the meantime,
if you would like me to check anything specific, I can do that.

Thanks and Regards,

Raj


Re: [vpp-dev] VPP crash when deleting route related with GTPU tunnel endpoint

2019-03-06 Thread Neale Ranns via Lists.Fd.Io

Hi lollita,

What GTPU code are you running? Your test case does not work for me on master:

DBGvpp# create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4

create gtpu tunnel: parse error: 'teid-in 3 teid-out 4'

To answer your questions:

1)  You should be able to delete the default route even though it is used 
to reach the tunnel destination. You don't provide a backtrace, but I suspect 
the GTPU code does not restack the tunnel correctly.

2)  It establishes the child dependency (i.e. builds edges in a graph). In 
this case the tunnel depends on the route it uses to reach the destination.
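
For illustration, the parent/child pattern in (2) typically looks like the
sketch below in a tunnel's add/delete path (a simplified paraphrase of what
the gtpu/vxlan plugins do, with a hypothetical tunnel struct; not the plugin
code verbatim):

#include <vnet/fib/fib_table.h>
#include <vnet/fib/fib_entry.h>

/* Hypothetical tunnel bookkeeping, following the gtpu plugin's shape. */
typedef struct
{
  fib_node_index_t fib_entry_index;  /* tracked route covering the tunnel dst */
  u32 sibling_index;                 /* our slot in that entry's child list */
} my_tunnel_t;

static void
tunnel_track_dst (my_tunnel_t * t, u32 fib_index,
                  const fib_prefix_t * dst_pfx,
                  fib_node_type_t tunnel_node_type, u32 tunnel_index)
{
  /* Source a covering entry for the tunnel destination... */
  t->fib_entry_index = fib_table_entry_special_add (fib_index, dst_pfx,
                                                    FIB_SOURCE_RR,
                                                    FIB_ENTRY_FLAG_NONE);
  /* ...and register the tunnel as its child. When that entry's forwarding
   * changes (e.g. the default route it resolves through is deleted), a
   * back-walk notifies the tunnel so it can restack its output path. */
  t->sibling_index = fib_entry_child_add (t->fib_entry_index,
                                          tunnel_node_type, tunnel_index);
}

static void
tunnel_untrack_dst (my_tunnel_t * t)
{
  /* On tunnel delete the edge and the tracked entry must be removed again. */
  fib_entry_child_remove (t->fib_entry_index, t->sibling_index);
  fib_table_entry_delete_index (t->fib_entry_index, FIB_SOURCE_RR);
}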

/neale

From:  on behalf of lollita 
Date: Wednesday, 6 March 2019 at 11:50
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP crash when deleting route related with GTPU tunnel 
endpoint

Hi,
   I'm checking the implementation of the GTPU performance enhancement that 
bypasses ip-lookup after gtpu_encap.
I started up a VPP with hardware interface configuration only, and did the 
following configuration:
ip route add 0.0.0.0/0 via 18.1.0.1
create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4
ip route add 1.2.3.4/32 via gtpu_tunnel0

A host route is created:
17@1.1.1.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:20 to:[2:72]]
    [0] [@12]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:8 to:[0:0] via:[270:17056]]
      [0] [@5]: ipv4 via 18.1.0.1 eth0.181: mtu:9000 0004969829b40209c0d2bbc381b50800
and the packet to 1.2.3.4 will be encapsulated via GTP and sent to 
ip4-load-balance directly.

Then I did another test:
ip route add 0.0.0.0/0 via 18.1.0.1
create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4
ip route del 0.0.0.0/0 via 18.1.0.1

VPP crashed when deleting the default route, on both debug and release builds. 
I think it is a bug.
I did the same test with VXLAN, and it also crashed.

I assume the relation between the default route and the route to 1.1.1.4 is 
established via the code below in vnet_gtpu_add_del_tunnel.

t->sibling_index = fib_entry_child_add
   (t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);

I uncommented the two lines, but vpp still crashed.

In summary, my questions are:

1. Is the crash when deleting the default route in that scenario a bug?

2. What is the purpose of the fib_entry_child_add related code?

BR/Lollita Liu


[vpp-dev] VPP crash when deleting route related with GTPU tunnel endpoint

2019-03-06 Thread lollita
Hi,
   I'm checking the implementation of the GTPU performance enhancement that 
bypasses ip-lookup after gtpu_encap.
I started up a VPP with hardware interface configuration only, and did the 
following configuration:
ip route add 0.0.0.0/0 via 18.1.0.1
create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4
ip route add 1.2.3.4/32 via gtpu_tunnel0

A host route is created:
17@1.1.1.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:20 to:[2:72]]
    [0] [@12]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:8 to:[0:0] via:[270:17056]]
      [0] [@5]: ipv4 via 18.1.0.1 eth0.181: mtu:9000 0004969829b40209c0d2bbc381b50800
and the packet to 1.2.3.4 will be encapsulated via GTP and sent to 
ip4-load-balance directly.

Then I did another test:
ip route add 0.0.0.0/0 via 18.1.0.1
create gtpu tunnel src 1.1.1.1 dst 1.1.1.4 teid-in 3 teid-out 4
ip route del 0.0.0.0/0 via 18.1.0.1

VPP crashed when deleting the default route, on both debug and release builds. 
I think it is a bug.
I did the same test with VXLAN, and it also crashed.

I assume the relation between the default route and the route to 1.1.1.4 is 
established via the code below in vnet_gtpu_add_del_tunnel.

t->sibling_index = fib_entry_child_add
   (t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);

I uncommented the two lines, but vpp still crashed.

In summary, my questions are:

1. Is the crash when deleting the default route in that scenario a bug?

2. What is the purpose of the fib_entry_child_add related code?

BR/Lollita Liu


Re: [vpp-dev] VPP core dump

2019-03-06 Thread Andrew Yourtchenko
Sounds like a memory corruption.

I am out of the office for another week, so in the meantime, if you could collect 
a few postmortem dumps with reproductions, I will look at them when I return.

(I aim to try to reproduce by adding the necessary api calls to 
https://github.com/vpp-dev/apidump2py - because this way I can later enhance 
the test suite to cover that case)

--a
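
For collecting those postmortem dumps, a startup.conf sketch like the
following is commonly used (assuming the unix-section options below are
present in your build; the kernel core pattern and ulimits still need to
permit core files):

unix {
  full-coredump
  coredump-size unlimited
}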

> On 6 Mar 2019, at 05:02, Raj  wrote:
> 
> Hello all,
> 
> I am getting a core dump when adding MACIP ACLs using the API (using
> honeycomb). My observation is that I can reproduce this core dump
> reliably if I add about 300 MACIP ACLs. I am on v18.10-27~ga0005702c
> 
> I did some debugging and my observations are:
> 
> In the function:
> 
> void
> vl_msg_api_handler_with_vm_node (api_main_t * am,
> void *the_msg, vlib_main_t * vm,
> vlib_node_runtime_t * node)
> {
> ...
> ...
>  /*
>   * Special-case, so we can e.g. bounce messages off the vnet
>   * main thread without copying them...
>   */
>  if (!(am->message_bounce[id]))
>vl_msg_api_free (the_msg);
> ...
> }
> 
> Control is reaching the special-case, and core dump is happening in
> vl_msg_api_free function.
> 
> Code flow is:
> void_mem_api_handle_msg_i()
>   ->vl_msg_api_free (the_msg);
>   ->clib_mem_free (rv);
>   ->mspace_put (heap, p);
>   ->mspace_free (msp, object_header);
>  ->ok_magic(fm)
>  ->return (m->magic == mparams.magic);  /* here it dumps 
> */
> 
> 
> 
> Following is my gdb session transcript:
> 
> (gdb) bt
> #0  0x75fd9f98 in ok_magic (m=0x13131313cdbec9ad) at
> /home/raj/vpp/src/vppinfra/dlmalloc.c:1618
> #1  0x75fe271a in mspace_free (msp=0x130044010,
> mem=0x1301c4ca0) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4456
> #2  0x75fe1b9d in mspace_put (msp=0x130044010,
> p_arg=0x1301c4ca4) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4291
> #3  0x77b916a4 in clib_mem_free (p=0x1301c4ca4) at
> /home/raj/vpp/src/vppinfra/mem.h:215
> #4  0x77b922f6 in vl_msg_api_free (a=0x1301c4cb4) at
> /home/raj/vpp/src/vlibmemory/memory_shared.c:291
> #5  0x77bc325c in vl_msg_api_handler_with_vm_node
> (am=0x77dd3d20 , the_msg=0x1301c4cb4, vm=0x76952240
> node=0x7fffb5264000) at /home/raj/vpp/src/vlibapi/api_shared.c:516
> #6  0x77b8feb4 in void_mem_api_handle_msg_i (am=0x77dd3d20
> , vm=0x76952240 , node=0x7fffb
>at /home/raj/vpp/src/vlibmemory/memory_api.c:692
> #7  0x77b8ff23 in vl_mem_api_handle_msg_main
> (vm=0x76952240 , node=0x7fffb5264000) at
> /home/raj/vpp/
> #8  0x77baded4 in vl_api_clnt_process (vm=0x76952240
> , node=0x7fffb5264000, f=0x0) at /home/raj/vpp/
> #9  0x766ce32a in vlib_process_bootstrap (_a=140736236354592)
> at /home/raj/vpp/src/vlib/main.c:1232
> #10 0x75f5784c in clib_calljmp () from
> /home/raj/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.18.10
> #11 0x7fffb55ffbf0 in ?? ()
> #12 0x766ce455 in vlib_process_startup (vm=0xd52f22e80133b900,
> p=0x, f=0x7fffb5264000) at /home/raj/vpp/sr
> #13 0x0086 in ?? ()
> #14 0x76952350 in vlib_global_main () from
> /home/raj/vpp/build-root/install-vpp_debug-native/vpp/lib/libvlib.so.18.10
> #15 0x0003612097f3543e in ?? ()
> #16 0x7fffb5264000 in ?? ()
> n ?? ()
> #18 0x7fffb5ccf56c in ?? ()
> #19 0x0011 in ?? ()
> #20 0x7fffb5ccf668 in ?? ()
> #21 0x7fffb5264000 in ?? ()
> #22 0x7fffb79d8294 in ?? ()
> #23 0x in ?? ()
> 
> (gdb) f 2
> #2  0x75fe1b9d in mspace_put (msp=0x130044010,
> p_arg=0x1301c4ca4) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4291
> 4291  mspace_free (msp, object_header);
> 
> (gdb) p msp
> $1 = (mspace) 0x130044010
> 
> (gdb) p *msp
> Attempt to dereference a generic pointer.
> 
> (gdb) p *(mstate)msp
> $2 = {smallmap = 4096, treemap = 32768, dvsize = 0, topsize =
> 15069712, least_addr = 0x130044000 "", dv = 0x0, top = 0x1301e4da0,
> tri
>  release_checks = 4086, magic = 3735935678, smallbins = {0x0, 0x0,
> 0x130044058, 0x130044058, 0x130044068, 0x130044068, 0x130044078,
>0x130044088, 0x130044098, 0x130044098, 0x1300440a8, 0x1300440a8,
> 0x1300440b8, 0x1300440b8, 0x1300440c8, 0x1300440c8, 0x13005c5b0,
>0x1300440e8, 0x1300440f8, 0x1300440f8, 0x130044108, 0x130044108,
> 0x1300652c0, 0x1300652c0, 0x130044128, 0x130044128, 0x130044138,
>0x130044148, 0x130044158, 0x130044158, 0x130044168, 0x130044168,
> 0x130044178, 0x130044178, 0x130044188, 0x130044188, 0x1301c4ce0,
>0x1300441a8, 0x1300441b8, 0x1300441b8, 0x1300441c8, 0x1300441c8,
> 0x1300441d8, 0x1300441d8, 0x1300441e8, 0x1300441e8, 0x1300441f8,
>0x130044208, 0x130044218, 0x130044218, 0x130044228, 0x130044228,
> 0x130044238, 0x130044238, 0x130044248, 0x130044248}, treebins =
>0x1301c5cc0, 0x0 }, footprint = 16777216,
> max_footprint = 16777216, footprint_limit = 0, mflags = 15,

[vpp-dev] VPP core dump

2019-03-06 Thread Raj
Hello all,

I am getting a core dump when adding MACIP ACLs using the API (using
honeycomb). My observation is that I can reproduce this core dump
reliably if I add about 300 MACIP ACLs. I am on v18.10-27~ga0005702c

I did some debugging and my observations are:

In the function:

void
vl_msg_api_handler_with_vm_node (api_main_t * am,
 void *the_msg, vlib_main_t * vm,
 vlib_node_runtime_t * node)
{
...
...
  /*
   * Special-case, so we can e.g. bounce messages off the vnet
   * main thread without copying them...
   */
  if (!(am->message_bounce[id]))
vl_msg_api_free (the_msg);
...
}

Control is reaching the special-case, and core dump is happening in
vl_msg_api_free function.

Code flow is:
 void_mem_api_handle_msg_i()
   ->vl_msg_api_free (the_msg);
   ->clib_mem_free (rv);
   ->mspace_put (heap, p);
   ->mspace_free (msp, object_header);
  ->ok_magic(fm)
  ->return (m->magic == mparams.magic);  /* here it dumps */



Following is my gdb session transcript:

(gdb) bt
#0  0x75fd9f98 in ok_magic (m=0x13131313cdbec9ad) at
/home/raj/vpp/src/vppinfra/dlmalloc.c:1618
#1  0x75fe271a in mspace_free (msp=0x130044010,
mem=0x1301c4ca0) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4456
#2  0x75fe1b9d in mspace_put (msp=0x130044010,
p_arg=0x1301c4ca4) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4291
#3  0x77b916a4 in clib_mem_free (p=0x1301c4ca4) at
/home/raj/vpp/src/vppinfra/mem.h:215
#4  0x77b922f6 in vl_msg_api_free (a=0x1301c4cb4) at
/home/raj/vpp/src/vlibmemory/memory_shared.c:291
#5  0x77bc325c in vl_msg_api_handler_with_vm_node
(am=0x77dd3d20 , the_msg=0x1301c4cb4, vm=0x76952240
, vm=0x76952240 , node=0x7fffb
at /home/raj/vpp/src/vlibmemory/memory_api.c:692
#7  0x77b8ff23 in vl_mem_api_handle_msg_main
(vm=0x76952240 , node=0x7fffb5264000) at
/home/raj/vpp/
#8  0x77baded4 in vl_api_clnt_process (vm=0x76952240
, node=0x7fffb5264000, f=0x0) at /home/raj/vpp/
#9  0x766ce32a in vlib_process_bootstrap (_a=140736236354592)
at /home/raj/vpp/src/vlib/main.c:1232
#10 0x75f5784c in clib_calljmp () from
/home/raj/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.18.10
#11 0x7fffb55ffbf0 in ?? ()
#12 0x766ce455 in vlib_process_startup (vm=0xd52f22e80133b900,
p=0x, f=0x7fffb5264000) at /home/raj/vpp/sr
#13 0x0086 in ?? ()
#14 0x76952350 in vlib_global_main () from
/home/raj/vpp/build-root/install-vpp_debug-native/vpp/lib/libvlib.so.18.10
#15 0x0003612097f3543e in ?? ()
#16 0x7fffb5264000 in ?? ()
n ?? ()
#18 0x7fffb5ccf56c in ?? ()
#19 0x0011 in ?? ()
#20 0x7fffb5ccf668 in ?? ()
#21 0x7fffb5264000 in ?? ()
#22 0x7fffb79d8294 in ?? ()
#23 0x in ?? ()

(gdb) f 2
#2  0x75fe1b9d in mspace_put (msp=0x130044010,
p_arg=0x1301c4ca4) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4291
4291  mspace_free (msp, object_header);

(gdb) p msp
$1 = (mspace) 0x130044010

(gdb) p *msp
Attempt to dereference a generic pointer.

(gdb) p *(mstate)msp
$2 = {smallmap = 4096, treemap = 32768, dvsize = 0, topsize =
15069712, least_addr = 0x130044000 "", dv = 0x0, top = 0x1301e4da0,
tri
  release_checks = 4086, magic = 3735935678, smallbins = {0x0, 0x0,
0x130044058, 0x130044058, 0x130044068, 0x130044068, 0x130044078,
0x130044088, 0x130044098, 0x130044098, 0x1300440a8, 0x1300440a8,
0x1300440b8, 0x1300440b8, 0x1300440c8, 0x1300440c8, 0x13005c5b0,
0x1300440e8, 0x1300440f8, 0x1300440f8, 0x130044108, 0x130044108,
0x1300652c0, 0x1300652c0, 0x130044128, 0x130044128, 0x130044138,
0x130044148, 0x130044158, 0x130044158, 0x130044168, 0x130044168,
0x130044178, 0x130044178, 0x130044188, 0x130044188, 0x1301c4ce0,
0x1300441a8, 0x1300441b8, 0x1300441b8, 0x1300441c8, 0x1300441c8,
0x1300441d8, 0x1300441d8, 0x1300441e8, 0x1300441e8, 0x1300441f8,
0x130044208, 0x130044218, 0x130044218, 0x130044228, 0x130044228,
0x130044238, 0x130044238, 0x130044248, 0x130044248}, treebins =
0x1301c5cc0, 0x0 }, footprint = 16777216,
max_footprint = 16777216, footprint_limit = 0, mflags = 15, mutex = 0
size = 16777216, next = 0x0, sflags = 8}, extp = 0x0, exts = 0}

(gdb) f 5
#5  0x77bc325c in vl_msg_api_handler_with_vm_node
(am=0x77dd3d20 , the_msg=0x1301c4cb4, vm=0x76952240
...}
(gdb) f 0
#0  0x75fd9f98 in ok_magic (m=0x13131313cdbec9ad) at
/home/raj/vpp/src/vppinfra/dlmalloc.c:1618
1618return (m->magic == mparams.magic);

(gdb) p m->magic
Cannot access memory at address 0x13131313cdbec9ed

(gdb) f 1
#1  0x75fe271a in mspace_free (msp=0x130044010,
mem=0x1301c4ca0) at /home/raj/vpp/src/vppinfra/dlmalloc.c:4456
4456if (!ok_magic(fm)) {

(gdb) p *(mstate)msp
$24 = {smallmap = 4096, treemap = 32768, dvsize = 0, topsize =
15069712, least_addr = 0x130044000 "", dv = 0x0, top = 0x1301e4da0, t