Hi Dave, John,

I've tried building the latest 17.01 vpp (using "make V=0 PLATFORM=vpp 
TAG=vpp_debug install-rpm" - I understand that's what TAG=vpp_debug refers 
to) and the issue is no longer present there, but there is something else - 
now vpp crashes when I delete a vhost-user port.

I've looked at patches submitted for master that could solve this and found 
https://gerrit.fd.io/r/#/c/4619/, but that didn't help. I've attached the 
post-mortem api traces and a backtrace. Pierre, could you please take a look?

I also have two other questions:

*        what's the difference between a regular image and a TAG=vpp_debug 
image?

*        I've tried configuring core files in a number of different ways, but 
nothing seems to be working - the core files are just not being created (a 
sketch of what I've tried is below). Is there a guide on how to set this up 
for CentOS7? For reference, here's one of the 
guides<https://www.unixmen.com/how-to-enable-core-dumps-in-rhel6/> that I 
used.
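
To be concrete, this is roughly what I've tried so far - a sketch only; vpp 
here runs under systemd as vpp.service, and the paths are just examples:

# per-shell limit (likely irrelevant for a daemon started by systemd):
ulimit -c unlimited

# point cores at a writable location:
echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

# lift the limit for the service itself, under [Service] in
# /usr/lib/systemd/system/vpp.service:
#     LimitCORE=infinity
systemctl daemon-reload
systemctl restart vpp

# note: abrtd on CentOS can rewrite core_pattern behind your back, so
# check what it actually contains after boot:
cat /proc/sys/kernel/core_pattern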

And one last thing - Honeycomb should now work with vpp 17.04, so I'm going 
to try that one as well.

Thanks,
Juraj

From: Dave Barach (dbarach)
Sent: Wednesday, 11 January, 2017 23:43
To: John Lo (loj) <l...@cisco.com>; Juraj Linkes -X (jlinkes - PANTHEON 
TECHNOLOGIES at Cisco) <jlin...@cisco.com>; vpp-dev@lists.fd.io
Subject: RE: VPP-556 - vpp crashing in an openstack odl stack

+1... Hey John, thanks a lot for the detailed analysis...

Dave

From: John Lo (loj)
Sent: Wednesday, January 11, 2017 5:40 PM
To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>; Juraj 
Linkes -X (jlinkes - PANTHEON TECHNOLOGIES at Cisco) 
<jlin...@cisco.com<mailto:jlin...@cisco.com>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: RE: VPP-556 - vpp crashing in an openstack odl stack

Hi Juraj,

I looked at the custom-dump of the API trace and noticed this "interesting" 
sequence:
SCRIPT: vxlan_add_del_tunnel src 192.168.11.22 dst 192.168.11.20 decap-next -1 
vni 1
SCRIPT: sw_interface_set_flags sw_if_index 4 admin-up link-up
SCRIPT: sw_interface_set_l2_bridge sw_if_index 4 bd_id 1 shg 1  enable
SCRIPT: sw_interface_set_l2_bridge sw_if_index 2 disable
SCRIPT: bridge_domain_add_del bd_id 1 del

Any idea why BD 1 is deleted while the VXLAN tunnel with sw_if_index 4 is 
still in the BD? Maybe this is what is causing the crash. From your vppctl 
output capture in "compute_that_crashed.txt", I do see BD 1 present with 
vxlan_tunnel0 on it:
[root@overcloud-novacompute-1 ~]# vppctl show bridge-domain
  ID   Index   Learning   U-Forwrd   UU-Flood   Flooding   ARP-Term     BVI-Intf
  0      0        off        off        off        off        off        local0
  1      1        on         on         on         on         off          N/A
[root@overcloud-novacompute-1 ~]# vppctl show bridge-domain 1 detail
  ID   Index   Learning   U-Forwrd   UU-Flood   Flooding   ARP-Term     BVI-Intf
  1      1        on         on         on         on         off          N/A

           Interface           Index  SHG  BVI  TxFlood        VLAN-Tag-Rewrite
         vxlan_tunnel0           3     1    -      *                 none

I installed a vpp 1701 image on my server and performed an api trace replay 
of your api_post_mortem. Afterwards, BD 1 is no longer present while 
vxlan_tunnel1 is still configured in BD 1:
DBGvpp# show bridge
  ID   Index   Learning   U-Forwrd   UU-Flood   Flooding   ARP-Term     BVI-Intf
  0      0        off        off        off        off        off        local0
DBGvpp# sho vxlan tunnel
[1] src 192.168.11.22 dst 192.168.11.20 vni 1 sw_if_index 4 encap_fib_index 0 
fib_entry_index 12 decap_next l2
DBGvpp# sho int addr
GigabitEthernet2/3/0 (dn):
VirtualEthernet0/0/0 (up):
local0 (dn):
vxlan_tunnel0 (dn):
vxlan_tunnel1 (up):
  l2 bridge bd_id 1 shg 1
DBGvpp# show int
              Name               Idx       State          Counter          Count
GigabitEthernet2/3/0              1        down
VirtualEthernet0/0/0              2         up
local0                            0        down
vxlan_tunnel0                     3        down
vxlan_tunnel1                     4         up
DBGvpp#
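
For reference, the replay itself is driven from the debug CLI - a sketch, 
assuming the trace file name from your capture:

DBGvpp# api trace replay /tmp/api_post_mortem.11279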

With the system in this state, I can easily imagine a packet received by 
vxlan_tunnel1 and forwarded in a non-existent BD causing a VPP crash. I will 
look into the VPP code from this angle. In general, however, there is really 
no need to create and delete BDs on VPP. Adding an interface/tunnel to a BD 
will cause the BD to be created. Deleting a BD without removing all the ports 
in it can cause problems, which may well be the cause here. If a BD is not to 
be used any more, all the ports on it should be removed. If a BD is to be 
reused, just add ports to it.
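
To illustrate with the calls from your own trace, a safe teardown for the 
sequence above would look more like this (a sketch; the final bd delete is 
optional and only needed if the BD really will not be used again):

SCRIPT: sw_interface_set_l2_bridge sw_if_index 4 disable
SCRIPT: sw_interface_set_l2_bridge sw_if_index 3 disable
SCRIPT: sw_interface_set_l2_bridge sw_if_index 2 disable
SCRIPT: bridge_domain_add_del bd_id 1 del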

As mentioned by Dave, please test using a known good image such as 1701, 
preferably built with debug enabled (with TAG=vpp_debug), so it is easier to 
find any issues.

Regards,
John

From: Dave Barach (dbarach)
Sent: Wednesday, January 11, 2017 9:01 AM
To: Juraj Linkes -X (jlinkes - PANTHEON TECHNOLOGIES at Cisco) 
<jlin...@cisco.com<mailto:jlin...@cisco.com>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Lo (loj) 
<l...@cisco.com<mailto:l...@cisco.com>>
Subject: RE: VPP-556 - vpp crashing in an openstack odl stack

Dear Juraj,

I took a look. It appears that the last operation in the post-mortem API trace 
was to kill a vxlan tunnel. Is there a reasonable chance that other interfaces 
in the bridge group containing the tunnel were still admin-up? Was the tunnel 
interface removed from the bridge group prior to killing it?

The image involved is not stable/1701/LATEST. It's missing at least 20 fixes 
considered critical enough to justify merging them into the release throttle:

[root@overcloud-novacompute-1 ~]# vppctl show version verbose
Version:                  v17.01-rc0~242-gabd98b2~b1576
Compiled by:              jenkins
Compile host:             centos-7-a8b
Compile date:             Mon Dec 12 18:55:56 UTC 2016

Please re-test with stable/1701/LATEST. Please use a TAG=vpp_debug image. If 
the problem is reproducible, we'll need a core file to make further progress.
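
Once you have a core, a backtrace is the first thing we'll need - a sketch 
with example paths (adjust to wherever your image and core actually live):

gdb /usr/bin/vpp /tmp/core.vpp.<pid>
(gdb) bt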

Copying John Lo ("Dr. Vxlan") for any further thoughts he might have...

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Juraj Linkes -X (jlinkes - 
PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, January 11, 2017 3:47 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP-556 - vpp crashing in an openstack odl stack

Hi vpp-dev,

I just wanted to ask whether anyone has taken a look at 
VPP-556<https://jira.fd.io/browse/VPP-556>? There might not be enough logs - 
I collected just the backtrace from gdb - so if we need anything more, please 
give me a bit of guidance on what would help/how to get it.

This is one of the last few issues we're facing with the openstack odl 
scenario, where we use vpp just for l2, and it's been there for a while.

Thanks,
Juraj

Attachment: api_post_mortem_vhost_crash

[root@overcloud-novacompute-1 ~]# vppctl api trace custom-dump 
/tmp/api_post_mortem.11279
SCRIPT: memclnt_create name vpp_api_test 
SCRIPT: sw_interface_dump name_filter Ether 
SCRIPT: sw_interface_dump name_filter lo 
SCRIPT: sw_interface_dump name_filter pg 
SCRIPT: sw_interface_dump name_filter vxlan_gpe 
SCRIPT: sw_interface_dump name_filter vxlan 
SCRIPT: sw_interface_dump name_filter host 
SCRIPT: sw_interface_dump name_filter l2tpv3_tunnel 
SCRIPT: sw_interface_dump name_filter gre 
SCRIPT: sw_interface_dump name_filter lisp_gpe 
SCRIPT: sw_interface_dump name_filter ipsec 
SCRIPT: control_ping 
SCRIPT: get_first_msg_id vxlan_gpe_ioam_export_91d10435 
SCRIPT: get_first_msg_id ioam_export_eb694f98 
SCRIPT: get_first_msg_id ioam_pot_3024a1d1 
SCRIPT: get_first_msg_id ioam_vxlan_gpe_d6f1ab7b 
SCRIPT: get_first_msg_id acl_3cd02d84 
SCRIPT: get_first_msg_id ioam_trace_715b0386 
SCRIPT: get_first_msg_id snat_a3e50aef 
SCRIPT: get_first_msg_id lb_16c904aa 
SCRIPT: get_first_msg_id flowperpkt_789ffa7b 
SCRIPT: cli_request 
vl_api_memclnt_delete_t:
index: 0
handle: 0x305de0c0
SCRIPT: memclnt_create name vpp_api_test 
SCRIPT: sw_interface_dump name_filter Ether 
SCRIPT: sw_interface_dump name_filter lo 
SCRIPT: sw_interface_dump name_filter pg 
SCRIPT: sw_interface_dump name_filter vxlan_gpe 
SCRIPT: sw_interface_dump name_filter vxlan 
SCRIPT: sw_interface_dump name_filter host 
SCRIPT: sw_interface_dump name_filter l2tpv3_tunnel 
SCRIPT: sw_interface_dump name_filter gre 
SCRIPT: sw_interface_dump name_filter lisp_gpe 
SCRIPT: sw_interface_dump name_filter ipsec 
SCRIPT: control_ping 
SCRIPT: get_first_msg_id vxlan_gpe_ioam_export_91d10435 
SCRIPT: get_first_msg_id ioam_export_eb694f98 
SCRIPT: get_first_msg_id ioam_pot_3024a1d1 
SCRIPT: get_first_msg_id ioam_vxlan_gpe_d6f1ab7b 
SCRIPT: get_first_msg_id acl_3cd02d84 
SCRIPT: get_first_msg_id ioam_trace_715b0386 
SCRIPT: get_first_msg_id snat_a3e50aef 
SCRIPT: get_first_msg_id lb_16c904aa 
SCRIPT: get_first_msg_id flowperpkt_789ffa7b 
SCRIPT: cli_request 
vl_api_memclnt_delete_t:
index: 0
handle: 0x305e2048
SCRIPT: memclnt_create name honeycomb 
SCRIPT: control_ping 
SCRIPT: get_first_msg_id acl_3cd02d84 
vl_api_acl_plugin_get_version_t:
_vl_msg_id: 439
client_index: 0
context: 16777216
SCRIPT: get_first_msg_id snat_a3e50aef 
SCRIPT: want_interface_events pid 1 enable 1 
SCRIPT: ip_fib_dump 
SCRIPT: control_ping 
SCRIPT: ip6_fib_dump 
SCRIPT: control_ping 
SCRIPT: lisp_eid_table_vni_dump
SCRIPT: control_ping 
SCRIPT: lisp_locator_set_dump local
SCRIPT: control_ping 
SCRIPT: snat_static_mapping_dump 
SCRIPT: control_ping 
SCRIPT: snat_address_dump 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 318767104
sw_if_index: 0
SCRIPT: control_ping 
vl_api_macip_acl_interface_get_t:
_vl_msg_id: 461
client_index: 0
context: 352321536
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 436207616
sw_if_index: 0
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 452984832
sw_if_index: 0
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: ip6_address_dump sw_if_index 0 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 603979776
sw_if_index: 16777216
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 637534208
sw_if_index: 16777216
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 654311424
sw_if_index: 16777216
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: ip6_address_dump sw_if_index 1 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_ids 
SCRIPT: show_version 
SCRIPT: control_ping 
SCRIPT: show_version 
SCRIPT: control_ping 
SCRIPT: show_version 
SCRIPT: control_ping 
SCRIPT: show_version 
SCRIPT: control_ping 
SCRIPT: create_vhost_user_if socket 
/tmp/socket_f6bcb09b-a131-4e23-b73d-a273d043f067 
SCRIPT: sw_interface_set_flags sw_if_index 2 admin-up link-up
SCRIPT: bridge_domain_add_del bd_id 1 flood 1 uu-flood 1 forward 1 learn 1 
arp-term 0
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 1040187392
sw_if_index: 0
SCRIPT: control_ping 
vl_api_macip_acl_interface_get_t:
_vl_msg_id: 461
client_index: 0
context: 1073741824
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1157627904
sw_if_index: 0
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1174405120
sw_if_index: 0
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 0 
SCRIPT: sw_interface_span_dump 
SCRIPT: control_ping 
SCRIPT: ip6_address_dump sw_if_index 0 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 1342177280
sw_if_index: 16777216
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1375731712
sw_if_index: 16777216
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1392508928
sw_if_index: 16777216
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 1 
SCRIPT: ip6_address_dump sw_if_index 1 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 1526726656
sw_if_index: 33554432
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1560281088
sw_if_index: 33554432
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1577058304
sw_if_index: 33554432
is_ipv6: 1
SCRIPT: sw_interface_vhost_user_dump 
SCRIPT: control_ping 
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 2 
SCRIPT: ip6_address_dump sw_if_index 2 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
SCRIPT: sw_interface_set_l2_bridge sw_if_index 2 bd_id 1 shg 0  enable 
SCRIPT: vxlan_add_del_tunnel src 192.168.11.22 dst 192.168.11.21 decap-next 1 
vni 1 
SCRIPT: sw_interface_set_flags sw_if_index 3 admin-up link-up
SCRIPT: sw_interface_set_l2_bridge sw_if_index 3 bd_id 1 shg 1  enable 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 1845493760
sw_if_index: 0
SCRIPT: control_ping 
vl_api_macip_acl_interface_get_t:
_vl_msg_id: 461
client_index: 0
context: 1879048192
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
SCRIPT: snat_interface_dump 
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1962934272
sw_if_index: 0
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 1979711488
sw_if_index: 0
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 0 
SCRIPT: sw_interface_span_dump 
SCRIPT: control_ping 
SCRIPT: ip6_address_dump sw_if_index 0 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 2147483648
sw_if_index: 16777216
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2181038080
sw_if_index: 16777216
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2197815296
sw_if_index: 16777216
is_ipv6: 1
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 1 
SCRIPT: ip6_address_dump sw_if_index 1 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 2332033024
sw_if_index: 33554432
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2365587456
sw_if_index: 33554432
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2382364672
sw_if_index: 33554432
is_ipv6: 1
SCRIPT: sw_interface_vhost_user_dump 
SCRIPT: control_ping 
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 2 
SCRIPT: ip6_address_dump sw_if_index 2 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
vl_api_acl_interface_list_dump_t:
_vl_msg_id: 451
client_index: 0
context: 2550136832
sw_if_index: 50331648
SCRIPT: control_ping 
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2583691264
sw_if_index: 50331648
is_ipv6: 0
vl_api_sw_interface_get_table_t:
_vl_msg_id: 22
client_index: 0
context: 2600468480
sw_if_index: 50331648
is_ipv6: 1
SCRIPT: vxlan_tunnel_dump sw_if_index 3 
SCRIPT: control_ping 
SCRIPT: bridge_domain_dump 
SCRIPT: control_ping 
SCRIPT: classify_table_by_interface sw_if_index 3 
SCRIPT: ip6_address_dump sw_if_index 3 is_ipv6 0 
SCRIPT: control_ping 
SCRIPT: sw_interface_dump all 
SCRIPT: control_ping 
SCRIPT: vxlan_add_del_tunnel src 192.168.11.22 dst 192.168.11.20 decap-next 1 
vni 1 
SCRIPT: sw_interface_set_flags sw_if_index 4 admin-up link-up
SCRIPT: sw_interface_set_l2_bridge sw_if_index 4 bd_id 1 shg 1  enable 
SCRIPT: sw_interface_set_l2_bridge sw_if_index 2 disable 
SCRIPT: delete_vhost_user_if sw_if_index 2
(gdb) bt
#0  0x00007ffff512b1d7 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff512c8c8 in __GI_abort () at abort.c:90
#2  0x0000000000c2b561 in os_panic () at /root/vpp/vpp/vnet/main.c:330
#3  0x00007ffff5ff2984 in debugger () at 
/root/vpp/build-data/../vppinfra/vppinfra/error.c:84
#4  0x00007ffff5ff2d8b in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7ffff71b98c0 "%s:%d (%s) assertion `%s' fails") at 
/root/vpp/build-data/../vppinfra/vppinfra/error.c:143
#5  0x00007ffff703cf7a in vhost_user_interface_admin_up_down (vnm=0x1236600 
<vnet_main>, hw_if_index=2, flags=0) at 
/root/vpp/build-data/../vnet/vnet/devices/virtio/vhost-user.c:2263
#6  0x00007ffff6b30e42 in vnet_sw_interface_set_flags_helper (vnm=0x1236600 
<vnet_main>, sw_if_index=2, flags=0, helper_flags=0) at 
/root/vpp/build-data/../vnet/vnet/interface.c:493
#7  0x00007ffff6b30fcc in vnet_sw_interface_set_flags (vnm=0x1236600 
<vnet_main>, sw_if_index=2, flags=0) at 
/root/vpp/build-data/../vnet/vnet/interface.c:540
#8  0x00007ffff6b31a52 in vnet_delete_sw_interface (vnm=0x1236600 <vnet_main>, 
sw_if_index=2) at /root/vpp/build-data/../vnet/vnet/interface.c:646
#9  0x00007ffff6b32c55 in vnet_delete_hw_interface (vnm=0x1236600 <vnet_main>, 
hw_if_index=2) at /root/vpp/build-data/../vnet/vnet/interface.c:888
#10 0x00007ffff6ba8408 in ethernet_delete_interface (vnm=0x1236600 <vnet_main>, 
hw_if_index=2) at /root/vpp/build-data/../vnet/vnet/ethernet/interface.c:314
#11 0x00007ffff703dd20 in vhost_user_delete_if (vnm=0x1236600 <vnet_main>, 
vm=0x7ffff77678c0 <vlib_global_main>, sw_if_index=2) at 
/root/vpp/build-data/../vnet/vnet/devices/virtio/vhost-user.c:2434
#12 0x0000000000c6a175 in vl_api_delete_vhost_user_if_t_handler (mp=0x304cb4ec) 
at /root/vpp/vpp/vpp-api/api.c:2654
#13 0x00007ffff7bcca9b in vl_msg_api_handler_with_vm_node (am=0x7ffff7ddc3a0 
<api_main>, the_msg=0x304cb4ec, vm=0x7ffff77678c0 <vlib_global_main>, 
node=0x7fffb4f99000) at 
/root/vpp/build-data/../vlib-api/vlibapi/api_shared.c:510
#14 0x00007ffff79b22a9 in memclnt_process (vm=0x7ffff77678c0 
<vlib_global_main>, node=0x7fffb4f99000, f=0x0) at 
/root/vpp/build-data/../vlib-api/vlibmemory/memory_vlib.c:487
#15 0x00007ffff74f1424 in vlib_process_bootstrap (_a=140736230984912) at 
/root/vpp/build-data/../vlib/vlib/main.c:1218
#16 0x00007ffff6016c30 in clib_calljmp () at 
/root/vpp/build-data/../vppinfra/vppinfra/longjmp.S:110
#17 0x00007fffb50e0ca0 in ?? ()
#18 0x00007ffff74f1559 in vlib_process_startup (vm=0x7fffb4f99000, p=0x5, 
f=0x7ffff74f15eb <vlib_process_resume+111>) at 
/root/vpp/build-data/../vlib/vlib/main.c:1240
#19 0x00007ffff77678c0 in ?? () from 
/root/vpp/build-root/install-vpp_debug-native/vlib/lib64/libvlib.so.0
#20 0x00000000175f625f in ?? ()
#21 0x00007ffff77679e0 in vlib_global_main () from 
/root/vpp/build-root/install-vpp_debug-native/vlib/lib64/libvlib.so.0
#22 0x002ebec4becfd255 in ?? ()
#23 0x00007fffb4f99000 in ?? ()
#24 0x00007fffb6092ed0 in ?? ()
#25 0x00007fffb6092e7c in ?? ()
#26 0x0000000000000005 in ?? ()
#27 0x00007fffb6092ed0 in ?? ()
#28 0x00007fffb4f99000 in ?? ()
#29 0x00007fffb4fac86c in ?? ()
#30 0x0000000000000000 in ?? ()