Re: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

2017-11-27 Thread Dave Barach (dbarach)
Laying aside the out-of-memory issue for a minute: can you explain the vpp 
deployment you have in mind?

Given where vpp would fit in a normal network design, I’m not seeing why you’d
want to go with a full VLAN / VRF mesh.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Balaji Kn
Sent: Monday, November 27, 2017 4:32 AM
To: vpp-dev 
Subject: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

Hello,

I am using VPP 17.07 and initialized heap memory as 3G in the startup configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN, with each
sub-interface associated with a unique VRF, eventually using 4k FIBs.

However, I am observing that VPP crashes from a memory crunch while adding an
IP route.

backtrace
#0  0x7fae4c981cc9 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fae4c9850d8 in __GI_abort () at abort.c:89
#2  0x004070b3 in os_panic ()
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7fae4d19007a in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1,
align_offset=, align=64, size=1454172096)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=v@entry=0x7fade2c44880, 
length_increment=length_increment@entry=1,
data_bytes=, header_bytes=, 
header_bytes@entry=24,
data_align=data_align@entry=64)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.c:84
#5  0x7fae4db9210c in _vec_resize (data_align=, 
header_bytes=,
data_bytes=, length_increment=, v=)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.h:142

I initially suspected the FIB was consuming too much heap space, but I do not
see much memory consumed by the FIB tables either, and felt 3 GB of heap
should be sufficient:

vpp# show fib memory
FIB memory
 Name                  Size  in-use /allocated      totals
 Entry                   72   60010 /  60010   4320720/4320720
 Entry Source            32   68011 /  68011   2176352/2176352
 Entry Path-Extensions   60       0 /      0         0/0
 multicast-Entry        192    4006 /   4006    769152/769152
 Path-list               48   60016 /  60016   2880768/2880768
 uRPF-list               16   76014 /  76015   1216224/1216240
 Path                    80   60016 /  60016   4801280/4801280
 Node-list elements      20   76017 /  76019   1520340/1520380
 Node-list heads          8   68020 /  68020    544160/544160
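
A quick sanity check (my own arithmetic, not from the thread): the allocated
totals above sum to under 20 MB, while the failing allocation in frame #3 of
the backtrace asks for about 1.35 GiB in a single vector resize, so the FIB
pools are indeed not the consumer:

```python
# Allocated-bytes totals from the `show fib memory` output above
fib_totals = [4320720, 2176352, 0, 769152, 2880768,
              1216240, 4801280, 1520380, 544160]
fib_bytes = sum(fib_totals)
print(fib_bytes)                       # total FIB pool memory, in bytes

failed_alloc = 1454172096              # size= argument in backtrace frame #3
print(round(failed_alloc / 2**30, 2))  # GiB requested by the failed vec resize
```

The interesting question is therefore which vector is being resized to 1.35 GiB
in one step.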

Is there any way to identify usage of heap memory in other modules?
Any pointers would be helpful.

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] vpp-verify-master-opensuse build failure triage

2017-11-27 Thread Dave Wallace

Marco,

Can you please take a look at the build failure encountered with 
https://gerrit.fd.io/r/#/c/9582/ on the vpp-verify-master-opensuse 
jenkins job:


- %< -
fd.io JJB  7:56 AM
Patch Set 2: Verified-1
Build Failed
https://jenkins.fd.io/job/vpp-verify-master-opensuse/459/ : FAILURE
No problems were identified. If you know why this problem occurred, 
please add a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-verify-master-opensuse/459/ )
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-opensuse/459

- %< --

From the logs, it appears that there is an issue related to building 
dpdk.  Have you seen this issue before?  If so, is this an 
infrastructure issue or something else?


Thanks,
-daw-

Re: [vpp-dev] Router plugin capability question (VRFs and MPLS)

2017-11-27 Thread Michael Borokhovich
I see... Thanks, Ray!
We will see what the best approach is to pass this information (MPLS,
VRFs) from the Linux control plane to VPP.

On Fri, Nov 24, 2017 at 1:01 PM, Kinsella, Ray  wrote:

>
> If you mean the router plugin from the sandbox,
> the short answer is yes, it doesn't support any of these.
>
> Better approach is to use Honeycomb, with or without ODL.
>
> Ray K
>
>
>
> On 22/11/2017 18:53, Michael Borokhovich wrote:
>
>> Hi,
>>
>> Does router plugin support the following features?
>>
>> 1) Multiple VRFs
>> 2) MPLS
>>
>> From our initial experiments, the above features are not supported.
>> Multiple VRFs do not work (I tried with namespaces).
>> The MPLS information is not passed from Linux to VPP's FIBs.
>>
>> Please let me know what you think.
>>
>> Thanks,
>> Michael.
>>
>>
>>

Re: [vpp-dev] vpp api test

2017-11-27 Thread Gabriel Ganne
Hi Holoo,


There are two great pages explaining how to use the vpp C and python APIs:

C: https://wiki.fd.io/view/VPP/How_To_Use_The_C_API

python: https://wiki.fd.io/view/VPP/Python_API

I believe you can also use java or lua if you wish.
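
To give a concrete starting point, here is a sketch of my own based on the
Python wiki page above (not from this thread): the class name and connection
details vary between VPP releases, and it only does anything useful against a
running VPP, so the import is guarded.

```python
# Hedged sketch: query the VPP version over the Python binary API.
# Assumes the vpp_papi package from src/vpp-api/python is installed
# and a VPP instance is running -- both are assumptions here.
try:
    from vpp_papi import VPP
except ImportError:
    print("vpp_papi not installed (see src/vpp-api/python)")
else:
    vpp = VPP()                # older releases need a list of *.api.json files
    vpp.connect("api-test")    # client name as it appears to VPP
    print(vpp.api.show_version().version)
    vpp.disconnect()
```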


Regards,


--

Gabriel Ganne


From: vpp-dev-boun...@lists.fd.io  on behalf of 
Holoo Gulakh 
Sent: Monday, November 27, 2017 8:55:43 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp api test

Hello
I am working on VPP and now need to use its API to communicate with it.
How can I use its API? Is there any example in the source code or 
anywhere else? If yes, how should I use it?
thanks in advance

[vpp-dev] VPP API

2017-11-27 Thread Holoo Gulakh
Here in this link (https://wiki.fd.io/view/VPP/Code_Walkthrough_VoDs),
there is a video (Code Walkthrough VoD: Chapter 4 | VPP API) in which the
lecturer is using a program named vpe_api_test.
Now I just need a program like that (preferably in C) so that I can
understand how it works, and then later go deeper using those wiki pages
about the APIs.
Could you please help me with such a program?

thanks in advance

Re: [vpp-dev] vpp api test

2017-11-27 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Hi,

Here you can find the jvpp (Java API) documentation:
https://git.fd.io/vpp/tree/src/vpp-api/java/Readme.txt

and some examples:
https://git.fd.io/vpp/tree/src/vpp-api/java/jvpp-core/io/fd/vpp/jvpp/core/examples

For more, take a look at the hc2vpp project, which uses jvpp:
https://git.fd.io/hc2vpp/tree

Regards,
Marek


Re: [vpp-dev] VM instantiation failed when I got a tutorial through

2017-11-27 Thread 박민철
Hi Ray,
Thanks for your response.
Could you kindly explain how to use the console stdout option?
I don't know how to pass the option to the 'qemu' command.
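
For reference, a hedged sketch of how Ray's options would slot into the qemu
command from the original mail (untested; note that -append only takes effect
when booting a kernel directly with -kernel, so with the OVMF/disk-image boot
used here, console=ttyS0 has to be set on the guest's own kernel command line
instead):

```
qemu-system-x86_64 \
 -enable-kvm -m 1024 \
 -bios OVMF.fd \
 -smp 4 -cpu host \
 -vga none \
 -serial stdio \
 ... (remaining options as in the original command) ...
```

Also worth knowing: the original command already passes -nographic, which
multiplexes the guest's first serial port onto stdio, so if nothing appears,
the missing piece is usually the console=ttyS0 setting inside the guest.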

Regards,

Daniel.




2017-11-25 2:59 GMT+09:00 Kinsella, Ray :

> Try sending the VM's console to stdout and see what you learn.
>
> -append "console=ttyS0" -serial stdio
>
>
> Ray K
>
>
> On 24/11/2017 06:48, 박민철 wrote:
>
>> Hi,
>> I'm new to VPP and trying to benchmark it.
>> I followed the tutorial at
>> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface.
>> When I try to instantiate the VM with the command below, it seems to
>> hang and there is no response from the terminal.
>>
>> qemu-system-x86_64 \
>>  -enable-kvm -m 1024 \
>>  -bios OVMF.fd \
>>  -smp 4 -cpu host \
>>  -vga none -nographic \
>>  -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
>>  -chardev socket,id=char1,path=/tmp/sock1.sock \
>>  -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
>>  -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
>>  -object 
>> memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on
>> \
>>  -numa node,memdev=mem -mem-prealloc \
>>  -debugcon file:debug.log -global isa-debugcon.iobase=0x402
>>
>> Has anyone faced the same problem?
>> My environment is like this,
>>  - Dell R730
>>  - Intel X720 NIC
>>  - 16 Core, 2 Socket
>>  - 64GB Ram
>>  - Ubuntu 16.04
>>
>> Regards,
>>
>>
>>
>>

[vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

2017-11-27 Thread Balaji Kn
Hello,

I am using VPP 17.07 and initialized heap memory as 3G in the startup
configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN, with each
sub-interface associated with a unique VRF, eventually using 4k FIBs.

However, I am observing that VPP crashes from a memory crunch while adding an
IP route.

backtrace
#0  0x7fae4c981cc9 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fae4c9850d8 in __GI_abort () at abort.c:89
#2  0x004070b3 in os_panic ()
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7fae4d19007a in clib_mem_alloc_aligned_at_offset
(os_out_of_memory_on_failure=1,
align_offset=, align=64, size=1454172096)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=v@entry=0x7fade2c44880,
length_increment=length_increment@entry=1,
data_bytes=, header_bytes=,
header_bytes@entry=24,
data_align=data_align@entry=64)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.c:84
#5  0x7fae4db9210c in _vec_resize (data_align=,
header_bytes=,
data_bytes=, length_increment=,
v=)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.h:142

I initially suspected the FIB was consuming too much heap space, but I do not
see much memory consumed by the FIB tables either, and felt 3 GB of heap
should be sufficient:

vpp# show fib memory
FIB memory
 Name                  Size  in-use /allocated      totals
 Entry                   72   60010 /  60010   4320720/4320720
 Entry Source            32   68011 /  68011   2176352/2176352
 Entry Path-Extensions   60       0 /      0         0/0
 multicast-Entry        192    4006 /   4006    769152/769152
 Path-list               48   60016 /  60016   2880768/2880768
 uRPF-list               16   76014 /  76015   1216224/1216240
 Path                    80   60016 /  60016   4801280/4801280
 Node-list elements      20   76017 /  76019   1520340/1520380
 Node-list heads          8   68020 /  68020    544160/544160

Is there any way to identify usage of heap memory in other modules?
Any pointers would be helpful.

Regards,
Balaji

Re: [vpp-dev] sr mpls fault

2017-11-27 Thread Neale Ranns (nranns)
Hi Xyxue,

I’ll look into the crash.
In the meantime, your config is somewhat curious. What is your intention with 
3.1.1.0/24? It has been added as an extranet route (i.e. it’s in table 0 and 
table 1), but in table 0 you have an override of that route via the SR 
steering. If you remove the ip route for 3.1.1.0/24 in table 0, I expect 
there will be no crash.

The SR policy says ‘forward in the same way as for label 1000’ but there is no 
local-label/route for 1000.
You don’t need MPLS table 1.

/neale

From:  on behalf of 薛欣颖 
Date: Monday, 27 November 2017 at 06:55
To: vpp-dev 
Subject: [vpp-dev] sr mpls fault


Hi guys,

Does VPP support SR-MPLS now?
After applying the configuration below, I ran 'sr mpls steer l3 
3.1.1.0/24 via sr policy bsid 33 del'.
Then there was a SIGABRT. Is there any invalid command in my configuration?

configuration:
create host-interface name eth2 mac 00:0c:29:6d:b0:82
create host-interface name eth1 mac 00:0c:29:6d:b0:78
create host-interface name eth3 mac 00:0c:29:6d:b0:8c
set interface state host-eth2 up
set interface state host-eth1 up
set interface state host-eth3 up
set interface ip table host-eth2 1
set interface ip address host-eth1 2.1.1.1/24
set interface ip address host-eth2 1.1.1.1/24
set interface ip address host-eth3 4.1.1.1/24
create mpls tunnel out-label 33 out-label 53 via 2.1.1.2 host-eth1
create mpls tunnel out-label 133 out-label 153 via 4.1.1.2 host-eth3
set interface state mpls-tunnel0 up
set interface state mpls-tunnel1 up
mpls table add 0
set interface mpls host-eth1 enable
set interface mpls host-eth3 enable
ip route add 3.1.1.0/24 via interface mpls-tunnel0 table 0
ip route add 3.1.1.0/24 via interface mpls-tunnel1 table 0
mpls local-label add eos 1053 ip4-lookup-in-table 1
mpls local-label add non-eos 1023 mpls-lookup-in-table 0
mpls local-label add eos 1153 ip4-lookup-in-table 1
mpls local-label add non-eos 1123 mpls-lookup-in-table 0
mpls table add 1
ip route add 3.1.1.0/24 via interface mpls-tunnel0 table 1
sr mpls policy add bsid 33 next 1000
sr mpls steer l3 3.1.1.0/24 via sr policy bsid 33


Program received signal SIGABRT, Aborted.
0x2b3ba59a2c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c
(gdb) bt
#0  0x2b3ba59a2c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x2b3ba59a6028 in __GI_abort () at abort.c:89
#2  0x00406e51 in os_panic () at 
/home/fos/vpp18.01/build-data/../src/vpp/vnet/main.c:272
#3  0x2b3ba52c0ac8 in debugger () at 
/home/fos/vpp18.01/build-data/../src/vppinfra/error.c:84
#4  0x2b3ba52c0ecf in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0,
fmt=0x2b3ba4f61100 "%s:%d (%s) assertion `%s' fails") at 
/home/fos/vpp18.01/build-data/../src/vppinfra/error.c:143
#5  0x2b3ba4e45aaf in fib_attached_export_purge (fib_entry=0x2b3ba6be7838)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_attached_export.c:373
#6  0x2b3ba4e469f1 in fib_attached_export_cover_modified_i 
(fib_entry=0x2b3ba6be7838)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_attached_export.c:491
#7  0x2b3ba4e46a7c in fib_attached_export_cover_update 
(fib_entry=0x2b3ba6be7838)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_attached_export.c:513
#8  0x2b3ba4e2c037 in fib_entry_cover_updated (fib_entry_index=43)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry.c:1390
#9  0x2b3ba4e34ed6 in fib_entry_cover_update_one (cover=0x2b3ba6be7428, 
covered=43, args=0x0)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry_cover.c:168
#10 0x2b3ba4e34d01 in fib_entry_cover_walk_node_ptr (depend=0x2b3ba6bec68c, 
args=0x2b3ba6f0f670)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry_cover.c:80
#11 0x2b3ba4e27aa3 in fib_node_list_walk (list=60, fn=0x2b3ba4e34cb6 
,
args=0x2b3ba6f0f670) at 
/home/fos/vpp18.01/build-data/../src/vnet/fib/fib_node_list.c:375
#12 0x2b3ba4e34d91 in fib_entry_cover_walk (cover=0x2b3ba6be7428, 
walk=0x2b3ba4e34eaa ,
args=0x0) at 
/home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry_cover.c:104
#13 0x2b3ba4e34f24 in fib_entry_cover_update_notify 
(fib_entry=0x2b3ba6be7428)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry_cover.c:177
#14 0x2b3ba4e2b4c9 in fib_entry_post_update_actions 
(fib_entry=0x2b3ba6be7428, source=FIB_SOURCE_CLI,
old_flags=FIB_ENTRY_FLAG_LOOSE_URPF_EXEMPT) at 
/home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry.c:885
#15 0x2b3ba4e2bc85 in fib_entry_special_remove (fib_entry_index=30, 
source=FIB_SOURCE_CLI)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry.c:1209
#16 0x2b3ba4e2bcd4 in fib_entry_delete (fib_entry_index=30, 
source=FIB_SOURCE_SR)
at /home/fos/vpp18.01/build-data/../src/vnet/fib/fib_entry.c:1226
#17 

[vpp-dev] vpp start fails

2017-11-27 Thread Holoo Gulakh
Hi
I am trying to run the example here (https://wiki.fd.io/view/VPP/Python_API)
to test VPP API usage.
After entering the command 'make run', VPP fails to start and reports an
error saying "min heap allocation failure!"
Running "sudo service vpp status" shows that the vpp service is dead.
How can I fix this error?
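
For reference (my own guess at the usual cause, not confirmed in this thread):
this failure at startup means VPP could not allocate its main heap, e.g.
because the configured heap size exceeds the memory actually available to the
process. The heap size is a top-level stanza in the startup configuration:

```
heapsize 1G
```

Lowering it, or freeing up memory/hugepages on the host, is the first thing
to try.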

[vpp-dev] L2VPN not effective on 3 VPP

2017-11-27 Thread ??????


Hi guys,

I have tested L2VPN with two VPPs, and it works well.
But I cannot get a working configuration with three VPPs. The following is my 
configuration, which does not work.
Is there any error in my configuration? How can I get a working 
configuration?


VPP1 - VPP2 - VPP3
VPP1:
  create host-interface name eth4 mac 00:0c:29:4d:af:b5
  create host-interface name eth2 mac 00:0c:29:4d:af:a1
  set interface state host-eth2 up
  set interface state host-eth4 up
  set interface ip address host-eth2 14.1.1.1/24
  mpls table add 0
  set interface mpls host-eth2 enable
  create mpls tunnel out-label 33 out-label 53 via 14.1.1.2 host-eth2 l2-only
  set interface state mpls-tunnel0 up
  create bridge-domain 1
  set interface l2 bridge mpls-tunnel0 1
  set interface l2 bridge host-eth4 1
  mpls local-label add non-eos 1023 mpls-lookup-in-table 0
  mpls local-label add eos 1053 l2-input-on mpls-tunnel0
  set interface mpls host-eth4 enable

VPP3: (packets did not arrive)
  create host-interface name eth3 mac 00:0c:29:19:8e:76
  create host-interface name eth5 mac 00:0c:29:19:8e:8a
  set interface state host-eth3 up
  set interface state host-eth5 up
  set interface ip address host-eth3 12.1.1.2/24 
  
VPP2:
(only modify the configuration on VPP2)
1.
  create host-interface name eth2 mac 00:0c:29:0f:e2:a8
  create host-interface name eth3 mac 00:0c:29:0f:e2:b2
  set interface state host-eth2 up
  set interface state host-eth3 up
  set interface ip address host-eth3 12.1.1.1/24
  set interface mpls host-eth2 enable
  set interface mpls host-eth3 enable
  set interface ip address host-eth2 14.1.1.2/24
  mpls table add 0
  mpls local-label add non-eos 33 mpls-lookup-in-table 0   <== decap the out-label
  create mpls tunnel out-label 60 via 12.1.1.2 host-eth3   <== swap the out-label to 60
  set interface state mpls-tunnel0 up
  set interface mpls mpls-tunnel0 enable
  
  The trace info on VPP2 (there is no ICMP traffic from VPP2 to VPP3):
  
  00:21:58:149609: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x2001 len 100 snaplen 100 mac 66 net 80
  sec 0x5a1d04e8 nsec 0xac3650b vlan 0
00:21:58:149643: ethernet-input
  MPLS: 00:0c:29:4d:af:a1 -> 00:0c:29:0f:e2:a8
00:21:58:149743: mpls-input
  MPLS: next mpls-lookup[1]  label 33 ttl 255   


00:21:58:149753: mpls-lookup
  MPLS: next [6], lookup fib index 0, LB index 25 hash 0 label 33 eos 0
00:21:58:149761: lookup-mpls-dst
 fib-index:0 hdr:[53:255:0:eos] load-balance:18
00:21:58:149778: error-drop
  mpls-input: MPLS DROP DPO
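 
 For what it's worth (my own reading, not from the thread): the trace shows
 label 33 being popped and the inner label 53 then looked up in table 0, where
 nothing is programmed, hence the drop DPO. For a pure transit swap of only the
 outer label on VPP2, a hedged alternative (untested; exact keyword spelling
 may differ by release) would be to bind label 33 directly to a labelled path
 instead of a table lookup:

```
mpls local-label add non-eos 33 via 12.1.1.2 host-eth3 out-label 60
```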
 
 2. Modified configuration:
  create host-interface name eth2 mac 00:0c:29:0f:e2:a8
  create host-interface name eth3 mac 00:0c:29:0f:e2:b2
  set interface state host-eth2 up
  set interface state host-eth3 up
  set interface ip address host-eth3 12.1.1.1/24
  set interface mpls host-eth2 enable
  set interface mpls host-eth3 enable
  set interface ip address host-eth2 14.1.1.2/24
  mpls table add 0
  mpls local-label add non-eos 33 mpls-lookup-in-table 0
  create mpls tunnel out-label 60 via 12.1.1.2 host-eth3 l2-only
  set interface state mpls-tunnel0 up
  set interface mpls mpls-tunnel0 enable
  
  The trace info on VPP2 (there is no ICMP traffic from VPP2 to VPP3):
  
00:03:47:172693: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x2001 len 100 snaplen 100 mac 66 net 80
  sec 0x5a1d068e nsec 0x2e14a970 vlan 0
00:03:47:172722: ethernet-input
  MPLS: 00:0c:29:4d:af:a1 -> 00:0c:29:0f:e2:a8
00:03:47:172741: mpls-input
  MPLS: next mpls-lookup[1]  label 33 ttl 255
00:03:47:172748: mpls-lookup
  MPLS: next [6], lookup fib index 0, LB index 24 hash 0 label 33 eos 0
00:03:47:172755: lookup-mpls-dst
 fib-index:0 hdr:[53:255:0:eos] load-balance:17
00:03:47:172763: error-drop
  mpls-input: MPLS DROP DPO
  
3. Decap two label layers, and encap two label layers:
  create host-interface name eth2 mac 00:0c:29:0f:e2:a8
  create host-interface name eth3 mac 00:0c:29:0f:e2:b2
  set interface state host-eth2 up
  set interface state host-eth3 up
  set interface ip address host-eth3 12.1.1.1/24
  set interface mpls host-eth2 enable
  set interface mpls host-eth3 enable
  set interface ip address host-eth2 14.1.1.2/24
  mpls table add 0
  mpls local-label add non-eos 33 mpls-lookup-in-table 0
  create mpls tunnel out-label 60 out-label 53 via 12.1.1.2
  set interface state mpls-tunnel0 up
  set interface mpls mpls-tunnel0 enable
  mpls local-label add eos 53 l2-input-on mpls-tunnel0
  
  The trace info on VPP2 (there is no ICMP traffic from VPP2 to VPP3):
  00:01:52:770255: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x2001 len 100 snaplen 100 mac 66 net 80
  sec 0x5a1d0814 nsec 0x9cf10d8 vlan 0
00:01:52:770282: ethernet-input
  MPLS: