Re: [vpp-dev] Vpp apps available

2018-01-22 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Hi,

vbd allows you to mount multiple VPP instances and configure bridge domains, and it also provides a GUI.
I also recall gbp, but I suggest asking on the ODL mailing list as well.

If you are interested in VPP + ODL integration, or want to configure VPP using RESTCONF/NETCONF, you can use Honeycomb; you don't need any app on top, as described in the other thread you started on the hc2vpp list.

Here is a list of the VPP features we support and their mapping to YANG models:
https://docs.fd.io/hc2vpp/1.17.10/hc2vpp-parent/release-notes-aggregator/api_docs_index.html

Regards,
Marek
 
-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Sarkar, Kawshik
Sent: 22 January 2018 14:05
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Vpp apps available

I am finding only one VPP app out there, called vpp vbd, for ODL. It only lets you create bridge domains but lacks other major features like tap and veth interface creation, and many more functions. Question 1: are there other apps available? Question 2: can users create their own YANG models and use them to enhance the platform?

Sent from my iPhone


This electronic message and any files transmitted with it contains information 
from iDirect, which may be privileged, proprietary and/or confidential. It is 
intended solely for the use of the individual or entity to whom they are 
addressed. If you are not the original recipient or the person responsible for 
delivering the email to the intended recipient, be advised that you have 
received this email in error, and that any use, dissemination, forwarding, 
printing, or copying of this email is strictly prohibited. If you received this 
email in error, please delete it and immediately notify the sender.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-22 Thread Lollita Liu
Hi, John,
The internal mechanism is very clear to me now.

And do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu ; vpp-dev@lists.fd.io
Cc: David Yu Z ; Kingwel Xie 
; Terry Zhang Z ; Jordy 
You 
Subject: RE: Question and bug found on GTP performance testing

Hi Lollita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output and tx nodes for each interface. As you correctly observed, these dedicated tx and output nodes are not used for most tunnel interfaces such as GTPU and VXLAN. All tunnel interfaces of the same tunnel type use an existing, tunnel-type-specific encap node as their output node.

I can see that for large-scale tunnel deployments, creating a large number of these unused output and tx nodes can be an issue, especially when multiple worker threads are used. The worker threads are blocked from forwarding packets while the main thread is busy creating these nodes and doing the setup for multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact occurs only during initial tunnel creation, and not during subsequent delete and create, is as expected. This is because on tunnel deletion the associated interfaces are not deleted but kept in a reuse pool for subsequent creation of tunnels of the same type. It may not be the best approach for interface usage flexibility, but it certainly helps the efficiency of the tunnel delete and create cases.

I will work on the interface creation improvement described above when I get a chance. I can let you know when a patch is available on vpp master for you to try. As for the 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lollita Liu
Sent: Monday, January 22, 2018 5:04 AM
To: vpp-dev@lists.fd.io
Cc: David Yu Z ; Kingwel Xie ; Terry Zhang Z ; Jordy You
Subject: [vpp-dev] Question and bug found on GTP performance testing

Hi,

We are doing performance testing on the GTP code in the VPP source tree, measuring the GTPU performance impact of creating/removing tunnels. We found some curious things and one bug.



Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels, all with data: the result is 4.7 Mpps @ 64B.

Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels all with data, while creating another 10K GTPU tunnels at the same time: the result is about 400 Kpps @ 64B.


The tunnel creation commands are "create gtpu tunnel src 1.4.1.1 dst 1.4.1.2 teid 1 decap-next ip4" and "ip route add 10.4.0.1/32 via gtpu_tunnel0".

You can see the throughput impact is huge. It looks like a lot of nodes named gtpu_tunnelxx-tx and gtpu_tunnelxx-output are being created, and all worker nodes wait for the node graph update. But in the output of show runtime, no such node is ever called. In the source code, GTP-U encapsulation is taken over by gtpu4-encap via the code "hi->output_node_index = encap_index;". What are those gtpu_tunnel nodes used for?

Since those nodes appear to be unused, we tried another case with the following procedure:
(1) Create 10K GTP tunnels
(2) Rx-Tx on the same NUMA node, using 1G hugepages and the 10K GTPU tunnels, with data on all 10K tunnels
(3) Create another 30K GTP tunnels
(4) Remove the last 30K GTP tunnels
The main thread falls into a deadlock: there is no response on the command line, but no impact on the worker threads.
In the GDB output, mheap_maybe_lock has been called twice.
Thread 1 (Thread 0x7f335bef5740 (LWP 27464)):
#0  0x7f335ab518d9 in mheap_maybe_lock (v=0x7f33199dd000) at 
/home/vpp/vpp/build-data/../src/vppinfra/mheap.c:66
#1  mheap_get_aligned (v=0x7f33199dd000, n_user_data_bytes=8, 
n_user_data_bytes@entry=5, align=, align@entry=4,
align_offset=0, align_offset@entry=4, 
offset_return=offset_return@entry=0x7f331a968618)
at 

Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-22 Thread John Lo (loj)
Hi Lollita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output and tx nodes for each interface. As you correctly observed, these dedicated tx and output nodes are not used for most tunnel interfaces such as GTPU and VXLAN. All tunnel interfaces of the same tunnel type use an existing, tunnel-type-specific encap node as their output node.
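As a toy illustration of that dispatch (standalone C, not VPP code; the only VPP detail borrowed here is the hi->output_node_index assignment quoted later in this thread): every interface starts out with its own output node, and tunnel creation simply re-points the tunnel interface's output at the one shared encap node, so the per-tunnel nodes are never visited.

/* Standalone illustration, not VPP code. */
#include <stdio.h>

typedef void (*output_node_fn) (int if_index);

static void dedicated_output (int if_index)
{
  printf ("interface %d: dedicated output node\n", if_index);
}

static void shared_gtpu_encap (int if_index)
{
  printf ("interface %d: shared gtpu-encap node\n", if_index);
}

#define N_IF 4
static output_node_fn output_node[N_IF]; /* plays the role of hi->output_node_index */

int main (void)
{
  int i;
  /* Interface creation: every interface gets its own output node. */
  for (i = 0; i < N_IF; i++)
    output_node[i] = dedicated_output;

  /* Tunnel creation: re-point output at the shared encap node, which is
     what "hi->output_node_index = encap_index;" does in the gtpu plugin,
     so the dedicated nodes above are never used for these interfaces. */
  output_node[2] = shared_gtpu_encap;
  output_node[3] = shared_gtpu_encap;

  for (i = 0; i < N_IF; i++)
    output_node[i] (i);
  return 0;
}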

I can see that for large-scale tunnel deployments, creating a large number of these unused output and tx nodes can be an issue, especially when multiple worker threads are used. The worker threads are blocked from forwarding packets while the main thread is busy creating these nodes and doing the setup for multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact occurs only during initial tunnel creation, and not during subsequent delete and create, is as expected. This is because on tunnel deletion the associated interfaces are not deleted but kept in a reuse pool for subsequent creation of tunnels of the same type. It may not be the best approach for interface usage flexibility, but it certainly helps the efficiency of the tunnel delete and create cases.

I will work on the interface creation improvement described above when I get a chance. I can let you know when a patch is available on vpp master for you to try. As for the 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Lollita Liu
Sent: Monday, January 22, 2018 5:04 AM
To: vpp-dev@lists.fd.io
Cc: David Yu Z ; Kingwel Xie 
; Terry Zhang Z ; Jordy 
You 
Subject: [vpp-dev] Question and bug found on GTP performance testing

Hi,

We are doing performance testing on the GTP code in the VPP source tree, measuring the GTPU performance impact of creating/removing tunnels. We found some curious things and one bug.



Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels, all with data: the result is 4.7 Mpps @ 64B.

Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels all with data, while creating another 10K GTPU tunnels at the same time: the result is about 400 Kpps @ 64B.


The tunnel creation commands are "create gtpu tunnel src 1.4.1.1 dst 1.4.1.2 teid 1 decap-next ip4" and "ip route add 10.4.0.1/32 via gtpu_tunnel0".

You can see the throughput impact is huge. It looks like a lot of nodes named gtpu_tunnelxx-tx and gtpu_tunnelxx-output are being created, and all worker nodes wait for the node graph update. But in the output of show runtime, no such node is ever called. In the source code, GTP-U encapsulation is taken over by gtpu4-encap via the code "hi->output_node_index = encap_index;". What are those gtpu_tunnel nodes used for?

Since those nodes appear to be unused, we tried another case with the following procedure:
(1) Create 10K GTP tunnels
(2) Rx-Tx on the same NUMA node, using 1G hugepages and the 10K GTPU tunnels, with data on all 10K tunnels
(3) Create another 30K GTP tunnels
(4) Remove the last 30K GTP tunnels
The main thread falls into a deadlock: there is no response on the command line, but no impact on the worker threads.
In the GDB output, mheap_maybe_lock has been called twice.
Thread 1 (Thread 0x7f335bef5740 (LWP 27464)):
#0  0x7f335ab518d9 in mheap_maybe_lock (v=0x7f33199dd000) at 
/home/vpp/vpp/build-data/../src/vppinfra/mheap.c:66
#1  mheap_get_aligned (v=0x7f33199dd000, n_user_data_bytes=8, 
n_user_data_bytes@entry=5, align=, align@entry=4,
align_offset=0, align_offset@entry=4, 
offset_return=offset_return@entry=0x7f331a968618)
at /home/vpp/vpp/build-data/../src/vppinfra/mheap.c:675
#2  0x7f335ab7b0f7 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=4, align=4, size=5)
at /home/vpp/vpp/build-data/../src/vppinfra/mem.h:91
#3  vec_resize_allocate_memory (v=, 
length_increment=length_increment@entry=1, data_bytes=5,
header_bytes=, header_bytes@entry=0, 
data_align=data_align@entry=4)
at /home/vpp/vpp/build-data/../src/vppinfra/vec.c:59
#4  0x7f335b8a10ba in _vec_resize (data_align=, 
header_bytes=, data_bytes=,
length_increment=, v=) at 
/home/vpp/vpp/build-data/../src/vppinfra/vec.h:142
#5  unix_cli_add_pending_output (uf=0x7f331ba606b4, buffer=0x7f335b8b774f "\r", 
buffer_bytes=1, cf=)
at 

[vpp-dev] 1G hugepage support in VPP //RE: "ftruncate: Invalid argument" in VPP startup

2018-01-22 Thread Ni, Hongjun
Hi all,

I also tested the master and stable/1710 releases using 1G hugepages.
Below are the test results:
For stable/1710, it works well with 1G hugepages.
For master, it fails to start up when using 1G hugepages, as shown below.

1G hugepage support is required for some performance tests and for virtio-user in DPDK-based containers, so please help take a look at this issue.

Thanks a lot,
Hongjun

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Samuel Eliáš
Sent: Monday, January 22, 2018 11:30 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] "ftruncate: Invalid argument" in VPP startup


Hello vpp-dev,



I've encountered an issue when trying to run VPP using default configuration on 
baremetal (512GB RAM, 36 phys cores, Ubuntu16.04). It appears to be related to 
dpdk memory allocation, and only occurs on VPP 18.01, not on 17.10.



Just wondering if anyone's seen this before, and/or whether I should go bother 
the dpdk folks instead. I understand dpdk version was bumped to 17.11 in this 
release, so that's one potential cause.


$ sudo /usr/bin/vpp unix interactive
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
... more plugins ...
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
clib_sysfs_read: open 
`/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages': No 
such file or directory
clib_sysfs_read: open 
`/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages': No 
such file or directory
dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at PCI 
address :08:00.0
vlib_pci_bind_to_uio: Skipping PCI device :0a:00.0 as host interface eth0 
is up
dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at PCI 
address :84:00.0
dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages 
--file-prefix vpp -b :0a:00.0 --master-lcore 0 --socket-mem 64,64
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x1, len:1073741824, virt:0x7efbc000, socket_id:0, 
hugepage_sz:1073741824, nchannel:0, nrank:0
Segment 1: IOVA:0x408000, len:1073741824, virt:0x7ee48000, socket_id:1, 
hugepage_sz:1073741824, nchannel:0, nrank:0
clib_mem_vm_ext_alloc: ftruncate: Invalid argument
dpdk_buffer_pool_create: failed to allocate mempool on socket 0


Memory info:
$ cat /proc/meminfo | tail -n 8
HugePages_Total: 128
HugePages_Free:  128
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:1048576 kB
DirectMap4k:  224680 kB
DirectMap2M: 3862528 kB
DirectMap1G:534773760 kB

Thanks,
- Sam
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] "ftruncate: Invalid argument" in VPP startup

2018-01-22 Thread Damjan Marion

Dear Samuel,

VPP needs 2M hugepages, and you don't have any free:

> clib_sysfs_read: open 
> `/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages': 
> No such file or directory
> clib_sysfs_read: open 
> `/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages': 
> No such file or directory
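As a standalone check (plain C, illustration only, not VPP code), you can read the same per-node counter that clib_sysfs_read is trying to open; when the file is missing, the open fails with exactly the "No such file or directory" seen above:

#include <stdio.h>

int main (void)
{
  const char *path =
    "/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages";
  FILE *f = fopen (path, "r");
  unsigned long free_2m = 0;

  if (!f)
    {
      perror (path);          /* e.g. "No such file or directory" */
      return 1;
    }
  if (fscanf (f, "%lu", &free_2m) == 1)
    printf ("node0 free 2M hugepages: %lu\n", free_2m);
  fclose (f);
  return 0;
}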

Regards,

Damjan


> On 22 Jan 2018, at 07:30, Samuel Eliáš  wrote:
> 
> Hello vpp-dev,
> 
> I've encountered an issue when trying to run VPP using default configuration 
> on baremetal (512GB RAM, 36 phys cores, Ubuntu16.04). It appears to be 
> related to dpdk memory allocation, and only occurs on VPP 18.01, not on 17.10.
> 
> Just wondering if anyone's seen this before, and/or whether I should go 
> bother the dpdk folks instead. I understand dpdk version was bumped to 17.11 
> in this release, so that's one potential cause.
> 
> $ sudo /usr/bin/vpp unix interactive
> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
> ... more plugins ...
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
> clib_sysfs_read: open 
> `/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages': 
> No such file or directory
> clib_sysfs_read: open 
> `/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages': 
> No such file or directory
> dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at 
> PCI address :08:00.0
> vlib_pci_bind_to_uio: Skipping PCI device :0a:00.0 as host interface eth0 
> is up
> dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at 
> PCI address :84:00.0
> dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages 
> --file-prefix vpp -b :0a:00.0 --master-lcore 0 --socket-mem 64,64 
> EAL: VFIO support initialized
> DPDK physical memory layout:
> Segment 0: IOVA:0x1, len:1073741824, virt:0x7efbc000, 
> socket_id:0, hugepage_sz:1073741824, nchannel:0, nrank:0
> Segment 1: IOVA:0x408000, len:1073741824, virt:0x7ee48000, 
> socket_id:1, hugepage_sz:1073741824, nchannel:0, nrank:0
> clib_mem_vm_ext_alloc: ftruncate: Invalid argument
> dpdk_buffer_pool_create: failed to allocate mempool on socket 0
> 
> 
> Memory info:
> $ cat /proc/meminfo | tail -n 8
> HugePages_Total: 128
> HugePages_Free:  128
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:1048576 kB
> DirectMap4k:  224680 kB
> DirectMap2M: 3862528 kB
> DirectMap1G:534773760 kB
> 
> Thanks,
> - Sam
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io 
> https://lists.fd.io/mailman/listinfo/vpp-dev 
> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-22 Thread Dave Barach (dbarach)
Dear Jon,

That makes sense to me. Hopefully Ole will comment with respect to adding 
statements of the form

error { FOO_NOT_AVAILABLE, "Resource 'foo' is not available" };

to the new Python PLY-based API generator.

The simple technique used to allocate plugin message-IDs seems to work OK to solve the analogous problem here.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Monday, January 22, 2018 12:13 PM
To: vpp-dev 
Subject: [vpp-dev] RFC: Error Codes

Hey VPP Aficionados,

I would like to make a proposal for a new way to introduce error codes
into the VPP code base.  The two main motivations for the proposal are

1) to improve the over-all error messages coupled to their API calls,
and
2) to clearly delineate the errors for VNET from those of various plugins.

Recently, it was pointed out to me that the errors for the various plugins
should not introduce new, plugin-specific errors into the main VNET list
of errors (src/vnet/api_errno.h) on the basis that plugins shouldn't clutter
VNET, should be more self-sustaining, and should stand alone.

Without a set of generic error codes that can be used by the various plugins,
there would then be no error codes as viable return values from the API calls
defined by plugins.

So here is my proposal:

- Extend the API definition files to allow the definition of error messages
  and codes specific to VNET, or to a plugin.

- Each plugin registers its error codes with a main registry upon being 
loaded.

- The global error table is maintained, perhaps much like API enums today.

- Each API call then has a guaranteed set of return values defined directly
  within its own API definition, thus coupling API calls and their possible
  returned error codes as well.

Other thoughts?

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP ACL plugin session info

2018-01-22 Thread Pradeep Patel (pradpate)
Team,
I am trying to dump the session table (show acl-plugin sessions) to view the 
session info but don’t see any sessions getting created. Any input will be 
helpful.

Plugin Version
vat# acl_plugin_get_version
vl_api_acl_plugin_get_version_reply_t_handler:133: ACL plugin version: 1.3

Following is the acl plugin configuration:
vat# acl_add_replace deny, ipv4 deny
vl_api_acl_add_replace_reply_t_handler:107: ACL index: 0
vat# acl_interface_set_acl_list sw_if_index 1  input 0  output 0
vat# acl_interface_set_acl_list sw_if_index 2 input 0  output 0
vat# acl_add_replace  0 permit src 192.168.1.10/32, permit
vl_api_acl_add_replace_reply_t_handler:107: ACL index: 0
vat# acl_dump
vl_api_acl_details_t_handler:193: acl_index: 0, count: 2
   tag {}
   ipv4 action 1 src 192.168.1.10/32 dst 0.0.0.0/0 proto 0 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0,
   ipv4 action 1 src 0.0.0.0/0 dst 0.0.0.0/0 proto 0 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0

Client IP : 192.168.1.10

root@localhost:/sandbox/tests/vpp# nc   5.1.1.10 11000
fdsdsf

Server IP :   5.1.1.10
root@localhost:~# nc -l 11000
fdsdsf

Trace Info

Packet X
00:08:21:983273: acl-plugin-out-ip4-fa
  acl-plugin: sw_if_index 2, next index 1, action: 1, match: acl 0 rule 0 
trace_bits 
  pkt info  0a01a8c0  0a010105 
000200062af8a798 05020002
   output sw_if_index 2 (lsb16 2) l3 ip4 192.168.1.10 -> 5.1.1.10 l4 proto 6 
l4_valid 1 port 42904 -> 11000 tcp flags (valid) 02 rsvd 0
00:08:21:983276: host-vpp_outside-output
  host-vpp_outside
  IP4: 02:fe:ec:db:35:b8 -> 92:93:a8:73:cd:7f
  TCP: 192.168.1.10 -> 5.1.1.10
tos 0x00, ttl 63, length 60, checksum 0xee09
fragment id 0x85f5, flags DONT_FRAGMENT
  TCP: 42904 -> 11000
seq. 0xd64e1be2 ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 29200, checksum 0x

packet Y
00:08:21:983327: acl-plugin-in-ip4-fa
  acl-plugin: sw_if_index 2, next index 1, action: 1, match: acl 0 rule 1 
trace_bits 
  pkt info  0a010105  0a01a8c0 
00020006a7982af8 07120002
   input sw_if_index 2 (lsb16 2) l3 ip4 5.1.1.10 -> 192.168.1.10 l4 proto 6 
l4_valid 1 port 11000 -> 42904 tcp flags (valid) 12 rsvd 0
00:08:21:983329: ip4-lookup
  fib 0 dpo-idx 2 flow hash: 0x
  TCP: 5.1.1.10 -> 192.168.1.10
tos 0x00, ttl 64, length 60, checksum 0x72ff

vpp# show acl-plugin sessions
Sessions total: add 0 - del 0 = 0


Per-thread data:
Thread #0:
  connection add/del stats:
sw_if_index 0: add 0 - del 0 = 0
sw_if_index 1: add 0 - del 0 = 0
sw_if_index 2: add 0 - del 0 = 0
  connection timeout type lists:
  fa_conn_list_head[0]: -1
  fa_conn_list_head[1]: -1
  fa_conn_list_head[2]: -1
  Next expiry time: 0
  Requeue until time: 0
  Current time wait interval: 0
  Count of deleted sessions: 0
  Delete already deleted: 0
  Session timers restarted: 0
  Swipe until this time: 0
  sw_if_index serviced bitmap: 0
  pending clear intfc bitmap : 0
  clear in progress: 0
  interrupt is pending: 0
  interrupt is needed: 0
  interrupt is unwanted: 0
  interrupt generation: 1898


Conn cleaner thread counters:
0: delete_by_sw_index events
0: delete_by_sw_index handled ok
0: unknown events received
0: session idle timers restarted
 1898: event wait with timeout called
1: event wait w/o timeout called
 1898: total event cycles
Interrupt generation: 1899
Sessions per interval: min 1 max 100 increment: 100 ms current: 500 ms

Session lookup hash table:
Hash table ACL plugin FA session bihash
0 active elements
0 free lists
0 linear search buckets
0 cache hits, 0 cache misses


vpp#
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] RFC: Error Codes

2018-01-22 Thread Jon Loeliger
Hey VPP Aficionados,

I would like to make a proposal for a new way to introduce error codes
into the VPP code base.  The two main motivations for the proposal are

1) to improve the over-all error messages coupled to their API calls, and
2) to clearly delineate the errors for VNET from those of various plugins.

Recently, it was pointed out to me that the errors for the various plugins
should not introduce new, plugin-specific errors into the main VNET list
of errors (src/vnet/api_errno.h) on the basis that plugins shouldn't clutter
VNET, should be more self-sustaining, and should stand alone.

Without a set of generic error codes that can be used by the various plugins,
there would then be no error codes as viable return values from the API calls
defined by plugins.

So here is my proposal (a rough sketch of the registration idea follows the list):

- Extend the API definition files to allow the definition of error messages
  and codes specific to VNET, or to a plugin.

- Each plugin registers its error codes with a main registry upon being loaded.

- The global error table is maintained, perhaps much like API enums today.

- Each API call then has a guaranteed set of return values defined directly
  within its own API definition, thus coupling API calls and their possible
  returned error codes as well.
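To make the shape of this concrete, here is a small, hypothetical sketch in plain C (all names invented for illustration; this is not proposed VPP code): each plugin hands its error strings to a central registry and receives a base offset, much like plugin message-IDs are allocated today.

#include <stdio.h>

#define MAX_ERRORS 256

static const char *error_strings[MAX_ERRORS];  /* the global error table */
static int n_errors;

/* Register a plugin's error strings; return the plugin's base error code. */
static int
register_error_table (const char *const *strings, int count)
{
  int base = n_errors;
  int i;
  for (i = 0; i < count && n_errors < MAX_ERRORS; i++)
    error_strings[n_errors++] = strings[i];
  return base;
}

static const char *
error_string (int code)
{
  return (code >= 0 && code < n_errors) ? error_strings[code] : "unknown error";
}

int main (void)
{
  static const char *const foo_plugin_errors[] = {
    "Resource 'foo' is not available",
    "Resource 'foo' is busy",
  };
  /* A plugin would do this when it is loaded. */
  int foo_base = register_error_table (foo_plugin_errors, 2);

  /* An API handler in the plugin can then return (foo_base + n), and any
     caller can decode it through the global table. */
  printf ("error %d: %s\n", foo_base + 0, error_string (foo_base + 0));
  printf ("error %d: %s\n", foo_base + 1, error_string (foo_base + 1));
  return 0;
}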

Other thoughts?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Upcoming memif API and CLI changes.

2018-01-22 Thread Jon Loeliger
Folks,

Just a heads-up that I have submitted a patch that alters both the API and the CLI for the memif functions. That patch is being reviewed and will (hopefully) be merged soon.

Prior to the patch, the memif CLI supported a create and a delete
command roughly like this:

vppctl# create memif id <id> filename <socket-filename> (master|slave)
...

After the patch, the management of the <socket-filename> will be through
a separate API call and a corresponding CLI command:

vppctl# create memif socket id <socket-id> filename <socket-filename>

Then in the memif interface command, one references the <socket-id> instead:

vppctl# create interface memif id <id> socket-id <socket-id> (master|slave) ...

Note that in addition to the <socket-id> replacing the <socket-filename>, the command itself has changed from "create memif" to "create interface memif".
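As a toy model of the new indirection (plain C with invented names; not the real memif code or API): socket-ids key a small filename table, and interfaces carry only the socket-id.

#include <stdio.h>

#define MAX_SOCKETS 8

static char socket_filename[MAX_SOCKETS][128];   /* socket-id -> filename */

/* "create memif socket id <socket-id> filename <socket-filename>" */
static int
socket_add (int socket_id, const char *filename)
{
  if (socket_id < 0 || socket_id >= MAX_SOCKETS)
    return -1;
  snprintf (socket_filename[socket_id], sizeof (socket_filename[socket_id]),
            "%s", filename);
  return 0;
}

typedef struct
{
  int if_id;
  int socket_id;       /* reference into the table, not a filename */
  int is_master;
} toy_memif_if_t;

/* "create interface memif id <id> socket-id <socket-id> (master|slave)" */
static toy_memif_if_t
if_create (int if_id, int socket_id, int is_master)
{
  toy_memif_if_t mif = { if_id, socket_id, is_master };
  return mif;
}

int main (void)
{
  toy_memif_if_t mif;
  socket_add (1, "/run/vpp/example-memif.sock");   /* example path only */
  mif = if_create (0, 1, 1 /* master */);
  printf ("memif %d uses socket-id %d -> %s\n",
          mif.if_id, mif.socket_id, socket_filename[mif.socket_id]);
  return 0;
}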

To flesh out the patch, dump/details API calls were added for the new socket-id/filename table, and VAT learned direct API call mechanisms for the new APIs as well.

HTH,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] "ftruncate: Invalid argument" in VPP startup

2018-01-22 Thread Samuel Eliáš
Hello vpp-dev,


I've encountered an issue when trying to run VPP using default configuration on 
baremetal (512GB RAM, 36 phys cores, Ubuntu16.04). It appears to be related to 
dpdk memory allocation, and only occurs on VPP 18.01, not on 17.10.


Just wondering if anyone's seen this before, and/or whether I should go bother 
the dpdk folks instead. I understand dpdk version was bumped to 17.11 in this 
release, so that's one potential cause.


$ sudo /usr/bin/vpp unix interactive
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
... more plugins ...
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
clib_sysfs_read: open 
`/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages': No 
such file or directory
clib_sysfs_read: open 
`/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages': No 
such file or directory
dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at PCI 
address :08:00.0
vlib_pci_bind_to_uio: Skipping PCI device :0a:00.0 as host interface eth0 
is up
dpdk_bind_devices_to_uio:758: Unsupported PCI device 0x8086:0x0435 found at PCI 
address :84:00.0
dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages 
--file-prefix vpp -b :0a:00.0 --master-lcore 0 --socket-mem 64,64
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x1, len:1073741824, virt:0x7efbc000, socket_id:0, 
hugepage_sz:1073741824, nchannel:0, nrank:0
Segment 1: IOVA:0x408000, len:1073741824, virt:0x7ee48000, socket_id:1, 
hugepage_sz:1073741824, nchannel:0, nrank:0
clib_mem_vm_ext_alloc: ftruncate: Invalid argument
dpdk_buffer_pool_create: failed to allocate mempool on socket 0


Memory info:
$ cat /proc/meminfo | tail -n 8
HugePages_Total: 128
HugePages_Free:  128
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:1048576 kB
DirectMap4k:  224680 kB
DirectMap2M: 3862528 kB
DirectMap1G:534773760 kB

Thanks,
- Sam
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Vpp apps available

2018-01-22 Thread Sarkar, Kawshik
I am finding only one VPP app out there, called vpp vbd, for ODL. It only lets you create bridge domains but lacks other major features like tap and veth interface creation, and many more functions. Question 1: are there other apps available? Question 2: can users create their own YANG models and use them to enhance the platform?

Sent from my iPhone


This electronic message and any files transmitted with it contains
information from iDirect, which may be privileged, proprietary
and/or confidential. It is intended solely for the use of the individual
or entity to whom they are addressed. If you are not the original
recipient or the person responsible for delivering the email to the
intended recipient, be advised that you have received this email
in error, and that any use, dissemination, forwarding, printing, or
copying of this email is strictly prohibited. If you received this email
in error, please delete it and immediately notify the sender.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Why is IP reassemble not supported in VPP?

2018-01-22 Thread Lollita Liu
Hi, Klement.
Thank you for your response. We will check the patch carefully. But I still have a question about why IP reassembly is not there, or is treated with lower priority. I think IP reassembly is a mandatory feature for a tunnel endpoint. Please share your thoughts. Thank you.


BR/Lollita Liu

-Original Message-
From: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
[mailto:ksek...@cisco.com] 
Sent: Friday, January 19, 2018 8:26 PM
To: Lollita Liu ; vpp-dev@lists.fd.io
Cc: Kingwel Xie ; Terry Zhang Z 
; Jordy You 
Subject: RE: Why is IP reassemble not supported in VPP?

Hi Lollita Liu,

There is a pending patch in gerrit which adds the support. So far it hasn't been reviewed or merged. I don't have an ETA on that...

https://gerrit.fd.io/r/#/c/9532/

Thanks,
Klement

> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Lollita Liu
> Sent: Friday, January 19, 2018 5:07 AM
> To: vpp-dev@lists.fd.io
> Cc: Kingwel Xie ; Terry Zhang Z 
> ; Jordy You 
> Subject: [vpp-dev] Why is IP reassemble not supported in VPP?
> 
> Hi,
> 
> We are do investigation in the VPP source code now. 
> After checking the source code and doing testing, looks VPP is not able 
> handle IP fragment.
> 
> 
> 
> In source code, in function ip4_local_inline, looks 
> fragment will be treat as error packet finally because of 
> IP4_ERROR_UNKNOWN_PROTOCOL.
> 
>   /* Treat IP frag packets as "experimental" protocol 
> for now
> 
>  until support of IP frag reassembly is 
> implemented */
> 
>   proto0 = ip4_is_fragment (ip0) ? 0xfe : 
> ip0->protocol;
> 
>   proto1 = ip4_is_fragment (ip1) ? 0xfe : 
> ip1->protocol;
> 
> ...
> 
> next0 = lm->local_next_by_ip_protocol[proto0];
> 
> next1 = lm->local_next_by_ip_protocol[proto1];
> 
> ...
> 
> next0 =
> 
> error0 != IP4_ERROR_UNKNOWN_PROTOCOL ?
> IP_LOCAL_NEXT_DROP : next0;
> 
>   next1 =
> 
> error1 != IP4_ERROR_UNKNOWN_PROTOCOL ?
> IP_LOCAL_NEXT_DROP : next1;
> 
> 
> 
> The version is:
> 
> DBGvpp# show version
> 
> vpp v18.04-rc0~46-gc5239ad built by root on k8s1-node1 at Mon Jan 15
> 06:05:03 UTC 2018
> 
> 
> 
> My question is why IP reassemble is not supported in VPP? It is 
> understandable that IP reassemble is not required for pure packet forwarding.
> But as a router platform, there are also plenty of control plane 
> packets should be handled, for example BGP packet, IKE packet, that's 
> the reason why there is local IP stack on VPP, and IP reassemble is a 
> basic requirement of local IP stack. How to handle the case if the BGP 
> peer send BGP message in several IP fragment to VPP? One BGP message 
> could be quite large depending on route number, and even BGP message 
> fragment can be avoid by MSS since it is based on TCP. How about the 
> case of IKE peer sending IKE message as IP fragments? The IKE message also 
> could be quite large with certificate...
> 
> 
> 
> BR/Lollita Liu

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] MPLS-TP support

2018-01-22 Thread Neale Ranns (nranns)
Hi,

I am not aware of any effort underway or planned for MPLS-TP. Contributions are welcome ☺

Regards,
neale

-Original Message-
From:  on behalf of Алексей Болдырев 

Date: Friday, 19 January 2018 at 22:40
To: vpp-dev , vpp-dev-request 
Subject: [vpp-dev] MPLS-TP support

Please tell me, is an MPLS-TP implementation in the pipeline?

RFC:
https://tools.ietf.org/html/rfc5654
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Question and bug found on GTP performance testing

2018-01-22 Thread Lollita Liu
Hi,

We are doing performance testing on the GTP code in the VPP source tree, measuring the GTPU performance impact of creating/removing tunnels. We found some curious things and one bug.



Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels, all with data: the result is 4.7 Mpps @ 64B.

Testing GTP encapsulation with one CPU, across different rx and tx ports on the same NUMA node, with 10K pre-created GTPU tunnels all with data, while creating another 10K GTPU tunnels at the same time: the result is about 400 Kpps @ 64B.


The tunnel creation commands are "create gtpu tunnel src 1.4.1.1 dst 1.4.1.2 teid 1 decap-next ip4" and "ip route add 10.4.0.1/32 via gtpu_tunnel0".

You can see the throughput impact is huge. It looks like a lot of nodes named gtpu_tunnelxx-tx and gtpu_tunnelxx-output are being created, and all worker nodes wait for the node graph update. But in the output of show runtime, no such node is ever called. In the source code, GTP-U encapsulation is taken over by gtpu4-encap via the code "hi->output_node_index = encap_index;". What are those gtpu_tunnel nodes used for?

Since those nodes appear to be unused, we tried another case with the following procedure:
(1) Create 10K GTP tunnels
(2) Rx-Tx on the same NUMA node, using 1G hugepages and the 10K GTPU tunnels, with data on all 10K tunnels
(3) Create another 30K GTP tunnels
(4) Remove the last 30K GTP tunnels
The main thread falls into a deadlock: there is no response on the command line, but no impact on the worker threads.
In the GDB output, mheap_maybe_lock has been called twice.
Thread 1 (Thread 0x7f335bef5740 (LWP 27464)):
#0  0x7f335ab518d9 in mheap_maybe_lock (v=0x7f33199dd000) at 
/home/vpp/vpp/build-data/../src/vppinfra/mheap.c:66
#1  mheap_get_aligned (v=0x7f33199dd000, n_user_data_bytes=8, 
n_user_data_bytes@entry=5, align=, align@entry=4,
align_offset=0, align_offset@entry=4, 
offset_return=offset_return@entry=0x7f331a968618)
at /home/vpp/vpp/build-data/../src/vppinfra/mheap.c:675
#2  0x7f335ab7b0f7 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=4, align=4, size=5)
at /home/vpp/vpp/build-data/../src/vppinfra/mem.h:91
#3  vec_resize_allocate_memory (v=, 
length_increment=length_increment@entry=1, data_bytes=5,
header_bytes=, header_bytes@entry=0, 
data_align=data_align@entry=4)
at /home/vpp/vpp/build-data/../src/vppinfra/vec.c:59
#4  0x7f335b8a10ba in _vec_resize (data_align=, 
header_bytes=, data_bytes=,
length_increment=, v=) at 
/home/vpp/vpp/build-data/../src/vppinfra/vec.h:142
#5  unix_cli_add_pending_output (uf=0x7f331ba606b4, buffer=0x7f335b8b774f "\r", 
buffer_bytes=1, cf=)
at /home/vpp/vpp/build-data/../src/vlib/unix/cli.c:528
#6  0x7f335b8a3fcd in unix_cli_file_welcome (cf=0x7f331adaf204, 
cm=)
at /home/vpp/vpp/build-data/../src/vlib/unix/cli.c:1137
#7  0x7f335ab85fd1 in timer_interrupt (signum=) at 
/home/vpp/vpp/build-data/../src/vppinfra/timer.c:125
#8  <signal handler called>
#9  0x7f335ab518d9 in mheap_maybe_lock (v=0x7f33199dd000) at 
/home/vpp/vpp/build-data/../src/vppinfra/mheap.c:66
#10 mheap_get_aligned (v=0x7f33199dd000, 
n_user_data_bytes=n_user_data_bytes@entry=12, align=, 
align@entry=4,
align_offset=0, align_offset@entry=4, 
offset_return=offset_return@entry=0x7f331a968e68)
at /home/vpp/vpp/build-data/../src/vppinfra/mheap.c:675
#11 0x7f335ab7b0f7 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=4, align=4, size=12)
at /home/vpp/vpp/build-data/../src/vppinfra/mem.h:91
#12 vec_resize_allocate_memory (v=v@entry=0x0, length_increment=1, 
data_bytes=12, header_bytes=, header_bytes@entry=0,
data_align=data_align@entry=4) at 
/home/vpp/vpp/build-data/../src/vppinfra/vec.c:59
#13 0x7f335b8a5eca in _vec_resize (data_align=0, header_bytes=0, 
data_bytes=, length_increment=,
v=) at /home/vpp/vpp/build-data/../src/vppinfra/vec.h:142
#14 vlib_process_get_events (data_vector=, vm=0x7f335bac42c0 
)
at /home/vpp/vpp/build-data/../src/vlib/node_funcs.h:562
#15 unix_cli_process (vm=0x7f335bac42c0 , rt=0x7f331a958000, 
f=)
at /home/vpp/vpp/build-data/../src/vlib/unix/cli.c:2414
#16 0x7f335b86fd96 in vlib_process_bootstrap (_a=) at 
/home/vpp/vpp/build-data/../src/vlib/main.c:1231
#17 0x7f335ab463d8 in clib_calljmp () at 
/home/vpp/vpp/build-data/../src/vppinfra/longjmp.S:110
#18 0x7f331b9dcc20 in ?? ()
#19 0x7f335b870f49 in vlib_process_startup (f=0x0, p=0x7f331a958000, 
vm=0x7f335bac42c0 )
at /home/vpp/vpp/build-data/../src/vlib/main.c:1253
#20 dispatch_process (vm=0x7f335bac42c0 , p=0x7f331a958000, 
last_time_stamp=0, f=0x0)
at /home/vpp/vpp/build-data/../src/vlib/main.c:1296
---Type <return> to continue, or q <return> to quit---
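The backtrace reads like the classic re-entrant allocation deadlock: the timer signal handler (frames #0-#7) allocates from the same heap that the interrupted code (frame #9 onward) was already locking, so mheap_maybe_lock appears on both sides of the signal frame. A standalone toy of that pattern (plain C with POSIX signals and an ordinary mutex, not VPP code):

/* compile with: cc -pthread reentrant_lock_toy.c ; this program hangs on purpose */
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Plays the role of the (non-recursive) mheap lock. */
static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

static void *
my_alloc (size_t n)
{
  void *p;
  pthread_mutex_lock (&heap_lock);   /* like mheap_maybe_lock */
  p = malloc (n);
  pthread_mutex_unlock (&heap_lock);
  return p;
}

static void
timer_handler (int sig)
{
  (void) sig;
  /* If the signal interrupted my_alloc() while heap_lock was held, this
     second my_alloc() blocks on the same lock in the same thread and never
     returns -- the pattern visible in the backtrace above. */
  free (my_alloc (12));
}

int main (void)
{
  signal (SIGALRM, timer_handler);
  alarm (1);
  for (;;)
    free (my_alloc (8));   /* sooner or later interrupted mid-allocation */
}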

We modified the previous steps:
(1) Create 10K GTP tunnel
(2) Rx-Tx with same 

Re: [vpp-dev] Why is IP reassemble not supported in VPP?

2018-01-22 Thread Ole Troan
This is not an argument against implementing IP fragmentation and reassembly, 
but...

> My question is why IP reassemble is not supported in VPP? It is 
> understandable that IP reassemble is not required for pure packet forwarding. 
> But as a router platform, there are also plenty of control plane packets 
> should be handled, for example BGP packet, IKE packet, that’s the reason why 
> there is local IP stack on VPP, and IP reassemble is a basic requirement of 
> local IP stack. How to handle the case if the BGP peer send BGP message in 
> several IP fragment to VPP? One BGP message could be quite large depending on 
> route number, and even BGP message fragment can be avoid by MSS since it is 
> based on TCP. How about the case of IKE peer sending IKE message as IP 
> fragments? The IKE message also could be quite large with certificate…….

BGP uses TCP and wouldn't (and shouldn't) use IP fragmentation.
Yes, you are right that you might require it for tunnel endpoints. And we do in 
fact support IP4 fragmentation and virtual reassembly for some tunnel types. 
Like MAP-E/T, LW46...
IP Fragmentation is largely a DOS vector though. And I know there will be a 
draft at IETF in London with a strong recommendation against doing 
fragmentation at the IP layer.

Cheers,
Ole


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Why is IP reassemble not supported in VPP?

2018-01-22 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Hi Lollita Liu,

I am not in a position to make that decision, and I do not know the project priorities...

I'll have to leave this question open for others to answer.

Thanks,
Klement

> -Original Message-
> From: Lollita Liu [mailto:lollita@ericsson.com]
> Sent: Monday, January 22, 2018 3:25 AM
> To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
> ; vpp-dev@lists.fd.io
> Cc: Kingwel Xie ; Terry Zhang Z
> ; Jordy You 
> Subject: RE: Why is IP reassemble not supported in VPP?
> 
> Hi, Klement.
>   Thank you for your response. We will check the patch carefully. But I
> still have question about why IP reassemble is not there or be treated with
> lower priority. I think IP reassemble is mandatory feature as tunnel
> endpoint Please share with me about your thought. Thank you.
> 
> 
> BR/Lollita Liu
> 
> -Original Message-
> From: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
> [mailto:ksek...@cisco.com]
> Sent: Friday, January 19, 2018 8:26 PM
> To: Lollita Liu ; vpp-dev@lists.fd.io
> Cc: Kingwel Xie ; Terry Zhang Z
> ; Jordy You 
> Subject: RE: Why is IP reassemble not supported in VPP?
> 
> Hi Lollita Liu,
> 
> There is a pending patch in gerrit, which adds the support. So far, it wasn't
> reviewed nor merged. I don't have an ETA on that...
> 
> https://gerrit.fd.io/r/#/c/9532/
> 
> Thanks,
> Klement
> 
> > -Original Message-
> > From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io]
> > On Behalf Of Lollita Liu
> > Sent: Friday, January 19, 2018 5:07 AM
> > To: vpp-dev@lists.fd.io
> > Cc: Kingwel Xie ; Terry Zhang Z
> > ; Jordy You 
> > Subject: [vpp-dev] Why is IP reassemble not supported in VPP?
> >
> > Hi,
> >
> > We are do investigation in the VPP source code now.
> > After checking the source code and doing testing, looks VPP is not able
> handle IP fragment.
> >
> >
> >
> > In source code, in function ip4_local_inline, looks
> > fragment will be treat as error packet finally because of
> IP4_ERROR_UNKNOWN_PROTOCOL.
> >
> >   /* Treat IP frag packets as "experimental" protocol
> > for now
> >
> >  until support of IP frag reassembly is
> > implemented */
> >
> >   proto0 = ip4_is_fragment (ip0) ? 0xfe :
> > ip0->protocol;
> >
> >   proto1 = ip4_is_fragment (ip1) ? 0xfe :
> > ip1->protocol;
> >
> > ...
> >
> > next0 = lm->local_next_by_ip_protocol[proto0];
> >
> > next1 = lm->local_next_by_ip_protocol[proto1];
> >
> > ...
> >
> > next0 =
> >
> > error0 != IP4_ERROR_UNKNOWN_PROTOCOL ?
> > IP_LOCAL_NEXT_DROP : next0;
> >
> >   next1 =
> >
> > error1 != IP4_ERROR_UNKNOWN_PROTOCOL ?
> > IP_LOCAL_NEXT_DROP : next1;
> >
> >
> >
> > The version is:
> >
> > DBGvpp# show version
> >
> > vpp v18.04-rc0~46-gc5239ad built by root on k8s1-node1 at Mon Jan 15
> > 06:05:03 UTC 2018
> >
> >
> >
> > My question is why IP reassemble is not supported in VPP? It is
> > understandable that IP reassemble is not required for pure packet
> forwarding.
> > But as a router platform, there are also plenty of control plane
> > packets should be handled, for example BGP packet, IKE packet, that's
> > the reason why there is local IP stack on VPP, and IP reassemble is a
> > basic requirement of local IP stack. How to handle the case if the BGP
> > peer send BGP message in several IP fragment to VPP? One BGP message
> > could be quite large depending on route number, and even BGP message
> > fragment can be avoid by MSS since it is based on TCP. How about the
> > case of IKE peer sending IKE message as IP fragments? The IKE message also
> could be quite large with certificate...
> >
> >
> >
> > BR/Lollita Liu

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev