[vpp-dev] multiple worker thread failed with vfio

2018-03-07 Thread wuxp
Hi,
when using the config:
unix { interactive cli-listen /run/vpp/cli.sock gid 0 } cpu { main-core 1 }
dpdk { dev :05:00.1 }

vpp# set int state TenGigabitEthernet5/0/1 up
vpp# set int ip addr TenGigabitEthernet5/0/1 192.168.1.1/24
vpp# ping 192.168.1.254 repeat 6 verbose
Source address: 192.168.1.1
64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=.2041 ms   (works as expected)

BUT: after adding corelist-workers 2,3 to the config:
unix { interactive cli-listen /run/vpp/cli.sock gid 0 } cpu { main-core 1
corelist-workers 2,3 } dpdk { dev :05:00.1 }

ping does not work, and the kernel logs DMAR faults:
Mar 08 11:52:09 h7 kernel: DMAR: DRHD: handling fault status reg 102
Mar 08 11:52:09 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b738000
    DMAR:[fault reason 05] PTE Write access is not set
Mar 08 11:52:10 h7 kernel: DMAR: DRHD: handling fault status reg 202
Mar 08 11:52:10 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b737000
    DMAR:[fault reason 05] PTE Write access is not set
Mar 08 11:52:11 h7 kernel: DMAR: DRHD: handling fault status reg 302
Mar 08 11:52:11 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b736000
    DMAR:[fault reason 05] PTE Write access is not set
Mar 08 11:52:12 h7 kernel: DMAR: DRHD: handling fault status reg 402
Mar 08 11:52:12 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b736000
    DMAR:[fault reason 05] PTE Write access is not set
Mar 08 11:52:13 h7 kernel: DMAR: DRHD: handling fault status reg 502
Mar 08 11:52:13 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b735000
    DMAR:[fault reason 05] PTE Write access is not set
Mar 08 11:52:14 h7 kernel: DMAR: DRHD: handling fault status reg 602
Mar 08 11:52:14 h7 kernel: DMAR: DMAR:[DMA Write] Request device [05:00.1] fault addr 85b735000
    DMAR:[fault reason 05] PTE Write access is not set

Environment:
DELL R430 with 2 CPUs
CentOS 7.4, kernel 3.10.0-693.17.1.el7.x86_64
IOMMU kernel parameter: intel_iommu=on iommu=pt
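
A quick check of the VFIO/IOMMU state on the host can help when reproducing
this (standard Linux tools, nothing VPP-specific; the 0000: PCI domain prefix
below is an assumption, adjust to your system):

# lspci -ks 05:00.1                                        # which kernel driver owns the NIC
# readlink /sys/bus/pci/devices/0000:05:00.1/iommu_group   # the device's IOMMU group
# dmesg | grep -i DMAR                                     # the faults quoted above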

FULL LOG:
vpp# set int state TenGigabitEthernet5/0/1 up
vpp# set int ip addr TenGigabitEthernet5/0/1 192.168.1.1/24
vpp# trace add dpdk-input 100
vpp# clear hardware
vpp# clear interfaces
vpp# clear error
vpp# clear run
vpp# ping 192.168.1.254 repeat 6 verbose 
Source address: 192.168.1.1   (DMAR kernel errors occur here)
Source address: 192.168.1.1
Source address: 192.168.1.1
Source address: 192.168.1.1
Source address: 192.168.1.1
Source address: 192.168.1.1

Statistics: 6 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
--- Start of thread 1 vpp_wk_0 ---
Packet 1

01:45:58:568674: dpdk-input
  TenGigabitEthernet5/0/1 rx queue 0
  buffer 0xa4c2f: current data 0, length 60, free-list 0, clone-count 0, totlen-nifb 0, trace 0x0
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 0x5b730c40
    packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
  (note: all packet data appears to be zero)
01:45:58:568718: ethernet-input
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
01:45:58:568723: error-drop
  ethernet-input: l3 mac mismatch

Packet 2

01:45:59:568713: dpdk-input
  TenGigabitEthernet5/0/1 rx queue 0
  buffer 0xa4c08: current data 0, length 60, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 0x5b730280
    packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
01:45:59:568714: ethernet-input
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
01:45:59:568716: error-drop
  ethernet-input: l3 mac mismatch

Packet 3

01:46:00:573520: dpdk-input
  TenGigabitEthernet5/0/1 rx queue 0
  buffer 0xa4be1: current data 0, length 60, free-list 0, clone-count 0, totlen-nifb 0, trace 0x2
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, 

Re: [vpp-dev] VPP - mechanism to drop packets

2018-03-07 Thread Avi Cohen (A)
Thank you Dave,
The drop is working perfectly.
But the other path - letting the packet continue on the 'normal' path - is
broken.
How do I set next0 for the 'normal' path?
The sample plugin (which my plugin is based on) sets it to INTERFACE_OUTPUT,
which is not suitable for this case.

Btw - how can I see the whole path on the graph assigned to a packet?

Best Regards
Avi 

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Dave
> Barach
> Sent: Wednesday, 07 March, 2018 4:08 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP - mechanism to drop packets
> 
> Dear Avi,
> 
> Yes, if you decide to drop b1, set next1 / error1 in the obvious way.
> 
> The macros vlib_validate_buffer_enqueue_x[2|4] sort out the various incorrect
> speculative enqueue / 2 or 4 pkts going to different successor node cases.
> Simply set (nextN, errorN) as desired and let the boilerplate code deal with 
> it...
> 
> D.
> 
> -Original Message-
> From: Avi Cohen (A) 
> Sent: Wednesday, March 7, 2018 9:02 AM
> To: Dave Barach (dbarach) ; vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] VPP - mechanism to drop packets
> 
> Thank you Dave - this is very helpful
> Please see comments inline
> 
> > -Original Message-
> > From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of
> > Dave Barach
> > Sent: Wednesday, 07 March, 2018 3:20 PM
> > To: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP - mechanism to drop packets
> >
> > Add an arc from your node to the "error-drop" node, set next0 =
> > MYNODE_NEXT_ERROR and b0->error = node->errors[SOME_ERROR].
> >
> [Avi Cohen (A)]
> I'm using the standard loop - this is with the next0 and next1 - this means 
> that
> we are processing  2 pkts within a single stage of the loop - correct ?
> In this case if I want to drop both pkts I have to set the next1 to
> NEXT_ERROR
> as well correct ?
> 
> > Please use the standard dual/single - or quad/single - loop code
> > pattern to walk the incoming vector of buffer indices. You will hate
> > your life if you try to code the vector-walk from first principles.
> > It's not impossible, but it will be a waste of your time / a bunch of 
> > needless
> aggravation.
> >
> > b0->sw_if_index[VLIB_TX] is interpreted in a couple of different ways.
> > b0->"ip4/6-
> > lookup" interprets it as a fib index. "interface-output" interprets it
> > as a [tx] hardware interface ID.
> >
> > I'm not sure what you're trying to do, but if it involves an ip
> > lookup, do NOT set
> > b0->sw_if_index[VLIB_TX]. Let the fib code do its job, and send pkts
> > b0->to either
> > the input nodes - if mandatory input checks / ttl decrement have not
> > been performed - or to the lookup stage if e.g. you've rewritten the
> > ip header(s) in some fashion.
> [Avi Cohen (A)]
> I'm implementing my fwding function and I want to bypass the IP-lookup And
> maybe do some rewrite and then to interface-output
> 
> Best Regards
> Avi
> >
> > HTH... Dave
> >
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Avi Cohen
> > (A)
> > Sent: Wednesday, March 7, 2018 6:38 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] VPP - mechanism to drop packets
> >
> > Hi,
> > I'm implementing a simple policy plugin , below is the pseudo-code
> >
> > Go over the packets vector
> > While (there are packets to process )
> > {
> > Check if a packet match a specific rule
> >   If yes - set the out-interface for the packet to be transmitted
> >   Else - drop packet
> > }
> >
> > 2 Question -
> > 1. is there  any function/mechanism that implements a drop ? or any
> > filed in the packet's metadata for drop marking ?
> > 2. Regarding the set tx out interface - I see that I can set the
> > sw_if_index[VLIB_TX] - so the packet will be later transmitted through
> > this interface - is this correct ?
> >
> > Best Regards
> > Avi
> >
> >
> >
> >
> >
> 
> 
> 





Re: [vpp-dev] Error during startup

2018-03-07 Thread Florin Coras
Hi Shashi, 

This can’t possibly be part of 17.10 or 18.01, since it was only merged this 
week on master. You probably need to clean your repo, rebuild, and reinstall 
the debs. 
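
A clean rebuild along those lines might look like this (make targets are from
VPP's top-level Makefile; verify they exist in your checkout):

$ make wipe-release      # drop previous release build artifacts
$ make build-release
$ make pkg-deb           # rebuild the debs before reinstalling them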

Florin

> On Mar 7, 2018, at 7:44 PM, Shashi Kant Singh  wrote:
> 
> Hi
>
> When I try to start vpp with releases 17.10 and 18.01, I am getting the 
> following TLS error
>
> What could be the issue?
>
> Regards
> Shashi
> PS: This worked fine for previous releases.
>
> #  make run-release STARTUP_CONF=../startup.conf
> vlib_plugin_early_init:356: plugin path 
> /bng5/shashi-2/vpp2/vpp/build-root/install-vpp-native/sample-plugin/lib64/vpp_plugins:/bng5/shashi-2/vpp2/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
> Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
> load_one_plugin:184: Loaded plugin: gbp_plugin.so (Group Based Policy)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data 
> plane)
> load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
> on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
> (experimetal))
> load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address 
> Translation)
> load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
> load_one_plugin:184: Loaded plugin: sample_plugin.so (Sample of VPP Plugin)
> load_one_plugin:184: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
> load_one_plugin:184: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
> load_one_plugin:184: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
> load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for 
> Container integration)
> tls_init_ca_chain:1125: Could not initialize TLS CA certificates
> tls_init:1168: failed to initialize TLS CA chain
> tls_init: failed to initalize TLS CA chain
>
> 



Re: [vpp-dev] [csit-dev] fd.io Community Goals and Objectives for reporting to the LFN board

2018-03-07 Thread George Zhao
For DMM project:

- Commit initial DMM framework
- Commit documentations for APIs, developer guides, etc.
- Plug into CSIT
- Integrate VPP host stack/TLDK
- DMM data-plane EAL on VPP L3
- Performance optimization

Thanks
George



From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Edward 
Warnicke
Sent: Wednesday, March 7, 2018 9:31 PM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io; Trishan de Lanerolle 
; Dave Barach (dbarach) ; 
Maciek Konstantynowicz (mkonstan) ; lmuscarie...@cisco.com; 
Ananyev, Konstantin ; George Zhao 
; francois.o...@linaro.org; Lu, Patrick 

Subject: Re: [vpp-dev] [csit-dev] fd.io Community Goals and Objectives for 
reporting to the LFN board

Ray,

Thanks for calling attention to this.  I've taken the feedback I've received 
thus far and added it to:

https://docs.google.com/presentation/d/1jUoqWt9tbMaiUsLE2IaUSSbTcjKN6c3cQ-PXID5nz0k/edit?usp=sharing

It's a rough cut, let's work it together as a community overnight and at the 
TSC meeting tomorrow.  Feel free to edit to improve.

Ed

On Thu, Mar 1, 2018 at 8:51 AM Ray Kinsella wrote:
Hi folks,

So the LFN TAC is looking to close on the "Community goals and priorities" 
rollup by 9th March.
This exercise is part of the process of obtaining a fair share of resources for 
FD.io in 2018.

So if you haven't already, please canvass your committers and contributors for 
their 2018 project plans.
What new features do they anticipate developing? Are there plans for 
validation, new platform or hardware support etc ?
You can either reply to this email directly, or send a note to myself or Ed W.

Much appreciated, thanks,

Ray K

From: csit-dev-boun...@lists.fd.io 
[mailto:csit-dev-boun...@lists.fd.io] On 
Behalf Of Ed Warnicke
Sent: Thursday 1 February 2018 16:43
To: t...@lists.fd.io; disc...@lists.fd.io; vpp-dev; csit-...@lists.fd.io; cicn-...@lists.fd.io; tldk-...@lists.fd.io; dmm-...@lists.fd.io; odp4vpp-...@lists.fd.io; pma_tools-...@lists.fd.io
Subject: [csit-dev] fd.io Community Goals and Objectives for 
reporting to the LFN board

FD.io has recently transitioned from being its own independent foundation, to 
being one of several projects (ONAP, OpNFV, ODL, etc) under the Linux 
Foundation Networking (LFN) umbrella.

As part of this transition, the LFN board has asked to be educated about what 
Goals and Objectives the community intends to undertake in the next year.   
Since an Open Source community intrinsically is a harmonious collection of 
folks working on different things, the only way to ascertain what its is really 
planning to do is to *ask* what folks are planning to do.

What are the plans of various folks participating in the community for 2018?  
What are the themes of the problems you are planning to address, the kinds of 
features you are looking to implement etc?

Ed





Re: [vpp-dev] [csit-dev] fd.io Community Goals and Objectives for reporting to the LFN board

2018-03-07 Thread Edward Warnicke
Ray,

Thanks for calling attention to this.  I've taken the feedback I've
received thus far and added it to:

https://docs.google.com/presentation/d/1jUoqWt9tbMaiUsLE2IaUSSbTcjKN6c3cQ-PXID5nz0k/edit?usp=sharing

It's a rough cut, let's work it together as a community overnight and at
the TSC meeting tomorrow.  Feel free to edit to improve.

Ed

On Thu, Mar 1, 2018 at 8:51 AM Ray Kinsella  wrote:

> Hi folks,
>
> So the LFN TAC is looking to close on the “Community goals and priorities”
> rollup by 9th March.
> This exercise is part of the process of obtaining a fair share of
> resources for FD.io in 2018.
>
> So if you haven’t already, please canvass your committers and contributors
> for their 2018 project plans.
> What new features do they anticipate developing? Are there plans for
> validation, new platform or hardware support etc ?
> You can either reply to this email directly, or send a note to myself or
> Ed W.
>
> Much appreciated, thanks,
>
> Ray K
>
> From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io]
> On Behalf Of Ed Warnicke
> Sent: Thursday 1 February 2018 16:43
> To: t...@lists.fd.io; disc...@lists.fd.io; vpp-dev ;
> csit-...@lists.fd.io; cicn-...@lists.fd.io; tldk-...@lists.fd.io;
> dmm-...@lists.fd.io; odp4vpp-...@lists.fd.io; pma_tools-...@lists.fd.io
> Subject: [csit-dev] fd.io Community Goals and Objectives for reporting to
> the LFN board
>
> FD.io has recently transitioned from being its own independent foundation,
> to being one of several projects (ONAP, OpNFV, ODL, etc) under the Linux
> Foundation Networking (LFN) umbrella.
>
> As part of this transition, the LFN board has asked to be educated about
> what Goals and Objectives the community intends to undertake in the next
> year.   Since an Open Source community intrinsically is a harmonious
> collection of folks working on different things, the only way to
> ascertain what it is really planning to do is to *ask* what folks are
> planning to do.
>
> What are the plans of various folks participating in the community for
> 2018?  What are the themes of the problems you are planning to address, the
> kinds of features you are looking to implement etc?
>
> Ed
>
> 
>
>


Re: [vpp-dev] VPP DPDK build failure with Mellanox interface(aarch64)

2018-03-07 Thread Sirshak Das

Going back to the original discussion:
I have VPP working now on aarch64 with Mellanox Card.

Disclaimer:
$ uname -r
4.10.0-28-generic
Ubuntu 16.04.4 LTS (GNU/Linux 4.10.0-28-generic aarch64)
I am aware that the published supported kernel version for Mellanox with DPDK
is 4.14+.

I am listing my steps along with the errors I faced and the workarounds I used.

These may not be committable workarounds, and I am not familiar enough with the
code base of either VPP or DPDK to guarantee that they are the best workarounds
either.

Prerequisite for everything:
MOFED installation (very important):

Download from:

http://www.mellanox.com/page/products_dyn?product_family=26

First, letting VPP download and build DPDK does not work for me:
Error:
 $ make dpdk-install-dev DPDK_MLX5_PMD=y

 /vpp/dpdk/deb/_build/dpdk-18.02/drivers/net/mlx5/mlx5_flow.c:38:8: error: redefinition of ‘struct ibv_flow_spec_counter_action’
   struct ibv_flow_spec_counter_action {
  ^
 In file included from /vpp/dpdk/deb/_build/dpdk-18.02/drivers/net/mlx5/mlx5_flow.c:14:0:
 /usr/include/infiniband/verbs.h:1360:8: note: originally defined here
   struct ibv_flow_spec_counter_action {

Workaround:

I use the same DPDK version that VPP master downloads,
and make VPP use it as an external DPDK.

http://fast.dpdk.org/rel/dpdk-18.02.tar.xz

$ make config T=arm64-armv8a-linuxapp-gcc

Edit .config file at build/.config and mark this as yes:
CONFIG_RTE_LIBRTE_MLX5_PMD=y

$ make

VPP Compile:

Errors:
$ sudo ./bin/vpp -c ./etc/startup.conf

./bin/vpp[5054]: vnet_feature_arc_init:205: feature node 'acl-plugin-out-ip6-fa' not found
./bin/vpp[5054]: vnet_feature_arc_init:205: feature node 'acl-plugin-out-ip4-fa' not found

./bin/vpp[5054]: dpdk_config:1260: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0004:01:00.1 -w 0004:01:00.2 --master-lcore 0 --socket-mem 64
EAL: DPAA2: DPRC not available
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
./bin/vpp: symbol lookup error: /home/sirdas/code/commita/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins/dpdk_plugin.so: undefined symbol: mlx5dv_query_device

Note: before this, another error also comes up, due to which you will have to
compile dpdk_plugin.so with -libverbs; it is again a symbol-resolution issue.
The error above was fixed by adding -lmlx5.


Workaround:

I apply the following patch (don't copy-paste; edit it into your own patch):

$ cat mlx5_vpp.diff 
diff --git a/build-data/platforms/vpp.mk b/build-data/platforms/vpp.mk
index 320609d..4587e19 100644
--- a/build-data/platforms/vpp.mk
+++ b/build-data/platforms/vpp.mk
@@ -29,13 +29,13 @@ vpp_uses_dpdk = yes
 vpp_root_packages = vpp
 
 # DPDK configuration parameters
-# vpp_uses_dpdk_mlx5_pmd = yes
-# vpp_uses_external_dpdk = yes
-# vpp_dpdk_inc_dir = /usr/include/dpdk
-# vpp_dpdk_lib_dir = /usr/lib
+vpp_uses_dpdk_mlx5_pmd = yes
+vpp_uses_external_dpdk = yes
+vpp_dpdk_inc_dir = /dpdk-18.02/build/include
+vpp_dpdk_lib_dir = /dpdk-18.02/build/lib
 # vpp_dpdk_shared_lib = yes
 
-vpp_configure_args_vpp =
+vpp_configure_args_vpp += --disable-japi
 
 # load balancer plugin is not portable on 32 bit platform
 ifeq ($(MACHINE),i686)
diff --git a/src/plugins/dpdk.am b/src/plugins/dpdk.am
index 10f2fe4..7a1f2be 100644
--- a/src/plugins/dpdk.am
+++ b/src/plugins/dpdk.am
@@ -26,7 +26,7 @@ if WITH_ISA_L_CRYPTO_LIB
 dpdk_plugin_la_LDFLAGS += 
-Wl,--exclude-libs,libisal_crypto.a,-l:libisal_crypto.a
 endif
 dpdk_plugin_la_CFLAGS = $(AM_CFLAGS)
-dpdk_plugin_la_LDFLAGS += -Wl,-lnuma
+dpdk_plugin_la_LDFLAGS += -Wl,-lnuma,-libverbs,-lmlx5
 
 dpdk_plugin_la_LDFLAGS += -Wl,-lm,-ldl
 dpdk_plugin_la_LIBADD =


+vpp_configure_args_vpp += --disable-japi
- I chose to disable this because, at the time of writing, the japi build
was broken.


$ make build-release

These steps work for me. If somebody has faced similar issues and has a
better workaround, then let me know.

P.S. I tried an alternative solution of building the DPDK libraries as shared;
it didn't work out for me. This is the error I got:

ERROR: failed to parse device "0004:01:00.1"
EAL: Unable to parse device '0004:01:00.1'

But this seems like more of a DPDK issue, as running the DPDK examples also
gives similar errors:

sudo LD_LIBRARY_PATH=/dpdk-18.02/build/lib:$LD_LIBRARY_PATH ./l2fwd -l 4-7 -n 4 -w 0004:01:00.1 -w 0004:01:00.2 -- -p 0x3
EAL: Detected 46 lcore(s)
ERROR: failed to parse device "0004:01:00.1"
EAL: Unable to parse device '0004:01:00.1'
EAL: Error - exiting with code: 1 Cause: Invalid EAL arguments

$ lspci
0004:01:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
0004:01:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4 Virtual Function]
0004:01:00.2 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4 Virtual Function]

Thank you
Sirshak Das


Dave Wallace writes:

> Ole,
>
> That is what I thought, but I wanted to see if there's any way to close 
> 

[vpp-dev] Error during startup

2018-03-07 Thread Shashi Kant Singh
Hi

When I try to start vpp with releases 17.10 and 18.01, I am getting the
following TLS error

What could be the issue?

Regards
Shashi
PS: This worked fine for previous releases.

#  make run-release STARTUP_CONF=../startup.conf
vlib_plugin_early_init:356: plugin path 
/bng5/shashi-2/vpp2/vpp/build-root/install-vpp-native/sample-plugin/lib64/vpp_plugins:/bng5/shashi-2/vpp2/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gbp_plugin.so (Group Based Policy)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: sample_plugin.so (Sample of VPP Plugin)
load_one_plugin:184: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
load_one_plugin:184: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
load_one_plugin:184: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for 
Container integration)
tls_init_ca_chain:1125: Could not initialize TLS CA certificates
tls_init:1168: failed to initialize TLS CA chain
tls_init: failed to initalize TLS CA chain



Re: [vpp-dev] TCP Proxy Through-put

2018-03-07 Thread Florin Coras
Hi Shaun, 

Glad to see you’re experimenting with the proxy code. Note however that the 
implementation is literally just a proof of concept. We haven’t spent any time 
optimizing it. Nonetheless, it would be interesting to understand why this 
happens. 

Does the difference between apache and vpp diminish if you increase the file 
size (10 or 100 times)? That is, could the difference be attributable to vpp 
being slow to ramp up throughput? Our TCP implementation uses, for now, NewReno 
as opposed to Cubic. 

Also, does “show error” indicate that lots of duplicate acks are exchanged? 
Note that the drop counters you see are actually packets that have been 
consumed by the stack and have afterwards been discarded. To see if the NICs 
have issues keeping up, try “sh hardware” and check the drop counters there. 

From the output you’ve pasted, I can only conclude that vpp has no problems 
receiving the traffic but it has issues pushing it out. 

Cheers, 
Florin

> On Mar 7, 2018, at 1:54 PM, Shaun McGinnity  
> wrote:
> 
> Hi,
>
> We are doing some basic testing using the TCP proxy app in stable-18.01 
> build. When I proxy a single HTTP request for a 20MB file through to an 
> Apache web server I get less than one-tenth of the throughput compared to 
> using a Linux TCP Proxy (Apache Traffic Server) on exactly the same setup. 
> The latency also varies significantly between 1 and 3 seconds.
>
> I’ve modified the fifo-size and rcv-buf-size and also increased the TCP 
> window scaling, which helps a bit, but I still see a large difference.
>
> Here is a sample output when running the test. The drops on the client-side 
> interface are a lot higher than on the server side – is that significant?
>
> Is there any other tuning that I could apply?
>
> show int addr
> GigabitEthernet0/6/0 (up):
>   192.168.11.5/24
> GigabitEthernet0/7/0 (up):
>   172.16.11.5/24
> local0 (dn):
>
> show int
>   Name                  Idx   State   Counter      Count
> GigabitEthernet0/6/0      1     up    rx packets        5012
>                                       rx bytes        331331
>                                       tx packets       14813
>                                       tx bytes      21997981
>                                       drops             5008
>                                       ip4               5010
> GigabitEthernet0/7/0      2     up    rx packets       14688
>                                       rx bytes      21941197
>                                       tx packets       14680
>                                       tx bytes        968957
>                                       drops               12
>                                       ip4              14686
> local0                    0   down
>
> show session verbose 2
> Thread 0: 2 active sessions
> [#0][T] 192.168.11.5:12000->192.168.11.4:37270ESTABLISHED   
>  flags:  timers: [RETRANSMIT]
> snd_una 18309357 snd_nxt 18443589 snd_una_max 18443589 rcv_nxt 96 rcv_las 96
> snd_wnd 705408 rcv_wnd 524288 snd_wl1 96 snd_wl2 18309357
> flight size 134232 send space 620 rcv_wnd_av 524288
> cong none cwnd 134852 ssthresh 33244 rtx_bytes 0 bytes_acked 2856
> prev_ssthresh 0 snd_congestion 5101145 dupack 0 limited_transmit 4135592144
> tsecr 1408587266 tsecr_last_ack 1408587266
> rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 1408587268 rtt_seq 177798749
> tsval_recent 1403556248 tsval_recent_age 1
> scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
> last_bytes_delivered 0 high_sacked 164476297 snd_una_adv 0
> cur_rxt_hole 4294967295 high_rxt 164406209 rescue_rxt 0
> Rx fifo: cursize 0 nitems 524288 has_event 0
> head 95 tail 95
> ooo pool 0 active elts newest 4294967295
> Tx fifo: cursize 272884 nitems 524288 has_event 1
> head 483564 tail 232160
> ooo pool 0 active elts newest 4294967295
> [#0][T] 172.16.11.5:26485->192.168.200.123:80 ESTABLISHED   
>  flags:  timers: []
> snd_una 96 snd_nxt 96 snd_una_max 96 rcv_nxt 18582241 rcv_las 18582241
> snd_wnd 29056 rcv_wnd 229984 snd_wl1 18582185 snd_wl2 96
> flight size 0 send space 4385 rcv_wnd_av 229984
> cong none cwnd 4385 ssthresh 28960 rtx_bytes 0 bytes_acked 0
> prev_ssthresh 0 snd_congestion 4132263094 dupack 0 limited_transmit 4132263094
> tsecr 1408587264 tsecr_last_ack 1408587264
> rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 0 rtt_seq 162704298
> tsval_recent 1403552275 tsval_recent_age 2
> scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
> last_bytes_delivered 0 high_sacked 0 snd_una_adv 0
> cur_rxt_hole 4294967295 high_rxt 0 rescue_rxt 0
> Rx fifo: 

[vpp-dev] TCP Proxy Through-put

2018-03-07 Thread Shaun McGinnity
Hi,

We are doing some basic testing using the TCP proxy app in stable-18.01 build. 
When I proxy a single HTTP request for a 20MB file through to an Apache web 
server I get less than one-tenth of the throughput compared to using a Linux 
TCP Proxy (Apache Traffic Server) on exactly the same setup. The latency also 
varies significantly between 1 and 3 seconds.

I've modified the fifo-size and rcv-buf-size and also increased the TCP 
window scaling, which helps a bit, but I still see a large difference.

Here is a sample output when running the test. The drops on the client-side 
interface are a lot higher than on the server side - is that significant?

Is there any other tuning that I could apply?

show int addr
GigabitEthernet0/6/0 (up):
  192.168.11.5/24
GigabitEthernet0/7/0 (up):
  172.16.11.5/24
local0 (dn):

show int
  Name                  Idx   State   Counter      Count
GigabitEthernet0/6/0      1     up    rx packets        5012
                                      rx bytes        331331
                                      tx packets       14813
                                      tx bytes      21997981
                                      drops             5008
                                      ip4               5010
GigabitEthernet0/7/0      2     up    rx packets       14688
                                      rx bytes      21941197
                                      tx packets       14680
                                      tx bytes        968957
                                      drops               12
                                      ip4              14686
local0                    0   down

show session verbose 2
Thread 0: 2 active sessions
[#0][T] 192.168.11.5:12000->192.168.11.4:37270ESTABLISHED
 flags:  timers: [RETRANSMIT]
snd_una 18309357 snd_nxt 18443589 snd_una_max 18443589 rcv_nxt 96 rcv_las 96
snd_wnd 705408 rcv_wnd 524288 snd_wl1 96 snd_wl2 18309357
flight size 134232 send space 620 rcv_wnd_av 524288
cong none cwnd 134852 ssthresh 33244 rtx_bytes 0 bytes_acked 2856
prev_ssthresh 0 snd_congestion 5101145 dupack 0 limited_transmit 4135592144
tsecr 1408587266 tsecr_last_ack 1408587266
rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 1408587268 rtt_seq 177798749
tsval_recent 1403556248 tsval_recent_age 1
scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 164476297 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 164406209 rescue_rxt 0
Rx fifo: cursize 0 nitems 524288 has_event 0
head 95 tail 95
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 272884 nitems 524288 has_event 1
head 483564 tail 232160
ooo pool 0 active elts newest 4294967295
[#0][T] 172.16.11.5:26485->192.168.200.123:80 ESTABLISHED
 flags:  timers: []
snd_una 96 snd_nxt 96 snd_una_max 96 rcv_nxt 18582241 rcv_las 18582241
snd_wnd 29056 rcv_wnd 229984 snd_wl1 18582185 snd_wl2 96
flight size 0 send space 4385 rcv_wnd_av 229984
cong none cwnd 4385 ssthresh 28960 rtx_bytes 0 bytes_acked 0
prev_ssthresh 0 snd_congestion 4132263094 dupack 0 limited_transmit 4132263094
tsecr 1408587264 tsecr_last_ack 1408587264
rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 0 rtt_seq 162704298
tsval_recent 1403552275 tsval_recent_age 2
scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 0 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 0 rescue_rxt 0
Rx fifo: cursize 272884 nitems 524288 has_event 1
head 483564 tail 232160
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 0 nitems 524288 has_event 0
head 95 tail 95
ooo pool 0 active elts newest 4294967295


Regards,

Shaun


Re: [vpp-dev] VPP As A Router Between Namespaces - 10ms latency

2018-03-07 Thread Dave Barach
“show run” will probably show a very small vector size.

If so, look at src/vlib/unix/input.c:linux_epoll_input(…). 10ms is exactly the 
epoll_pwait timeout value.

D.

From: vpp-dev@lists.fd.io  On Behalf Of Sara Gittlin
Sent: Wednesday, March 7, 2018 2:02 PM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP As A Router Between Namespaces - 10ms latency

Thank you Hao
I tested with iperf and got similar results. I cannot find iperf2. Anyway, ns to ns 
directly without vpp is perfect: 50 Gbps throughput and 10 us latency (tested with 
iperf3). This is very bothersome, since we decided to go with vpp instead of ovs.
Thanks in advance
-Sara



On 6 March 2018 at 20:00, "Hao Fu (haof)" wrote:
I encountered the similar issue before. Try replacing iperf3 with iperf2.

Hao

On 3/6/18, 8:34 AM, "vpp-dev@lists.fd.io on behalf of Sara Gittlin" <sara.gitt...@gmail.com> wrote:
Also the throughput is very poor - iperf3 TCP ~ 2Mbps
what is wrong here  ?

On Tue, Mar 6, 2018 at 6:20 PM, Sara Gittlin wrote:
> Hi,
> i have 2 namespaces connected with veth-pairs to vpp -  see setup here
> [https://wiki.fd.io/view/VPP/Configure_VPP_As_A_Router_Between_Namespaces]
>
> i see very big latency ~10ms when i ping between the 2 namespaces
> i expected to see latency in the order of 10's us
> ---
> 4 bytes from 172.16.2.2: icmp_seq=1005 ttl=63 time=11.5 ms
> 64 bytes from 172.16.2.2: icmp_seq=1006 ttl=63 time=9.60 ms
> 64 bytes from 172.16.2.2: icmp_seq=1007 ttl=63 time=7.55 ms
> 64 bytes from 172.16.2.2: icmp_seq=1008 ttl=63 time=5.52 ms
> 64 bytes from 172.16.2.2: icmp_seq=1009 ttl=63 time=9.60 ms
> 64 bytes from 172.16.2.2: icmp_seq=1010 ttl=63 time=17.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1011 ttl=63 time=15.5 ms
> 64 bytes from 172.16.2.2: icmp_seq=1012 ttl=63 time=13.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1013 ttl=63 time=11.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1014 ttl=63 time=9.54 ms
> 64 bytes from 172.16.2.2: icmp_seq=1015 ttl=63 time=7.67 ms
> 64 bytes from 172.16.2.2: icmp_seq=1016 ttl=63 time=5.56 ms
> 64 bytes from 172.16.2.2: icmp_seq=1017 ttl=63 time=3.44 ms
>
> 
>
> Who can assist ?
>
> Thanks in advance
> -Sara
>
>
>











Re: [vpp-dev] VPP As A Router Between Namespaces - 10ms latency

2018-03-07 Thread Florin Coras
Could you try again with taps and large rings?

create tap rx-ring-size 4096 tx-ring-size 4096
create tap rx-ring-size 4096 tx-ring-size 4096

Configure then the two taps in your namespaces and run iperf again. 
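
On the VPP side the new taps also need state and addresses before the iperf
run, e.g. (interface names and subnets here are illustrative, not from the
original setup):

set interface state tap0 up
set interface ip address tap0 172.16.1.1/24
set interface state tap1 up
set interface ip address tap1 172.16.2.1/24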

Hope this helps,
Florin



> On Mar 7, 2018, at 11:01 AM, Sara Gittlin  wrote:
> 
> Thank you Hao
> I tested with iperf and got similar results. I cannot find iperf2. Anyway, ns to ns 
> directly without vpp is perfect: 50 Gbps throughput and 10 us latency (tested with 
> iperf3). This is very bothersome, since we decided to go with vpp instead of ovs. 
> Thanks in advance
> -Sara
> 
> 
> 
> On 6 March 2018 at 20:00, "Hao Fu (haof)" wrote:
> I encountered the similar issue before. Try replacing iperf3 with iperf2.
> 
> Hao
> 
> On 3/6/18, 8:34 AM, "vpp-dev@lists.fd.io on behalf of Sara Gittlin" <sara.gitt...@gmail.com> wrote:
> 
> Also the throughput is very poor - iperf3 TCP ~ 2Mbps
> what is wrong here  ?
> 
> On Tue, Mar 6, 2018 at 6:20 PM, Sara Gittlin wrote:
> > Hi,
> > i have 2 namespaces connected with veth-pairs to vpp -  see setup here
> > 
> > [https://wiki.fd.io/view/VPP/Configure_VPP_As_A_Router_Between_Namespaces]
> >
> > i see very big latency ~10ms when i ping between the 2 namespaces
> > i expected to see latency in the order of 10's us
> > ---
> > 4 bytes from 172.16.2.2: icmp_seq=1005 ttl=63 time=11.5 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1006 ttl=63 time=9.60 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1007 ttl=63 time=7.55 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1008 ttl=63 time=5.52 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1009 ttl=63 time=9.60 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1010 ttl=63 time=17.6 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1011 ttl=63 time=15.5 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1012 ttl=63 time=13.6 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1013 ttl=63 time=11.6 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1014 ttl=63 time=9.54 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1015 ttl=63 time=7.67 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1016 ttl=63 time=5.56 ms
> > 64 bytes from 172.16.2.2: icmp_seq=1017 ttl=63 time=3.44 ms
> >
> > 
> >
> > Who can assist ?
> >
> > Thanks in advance
> > -Sara
> >
> >
> >
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 



Re: [vpp-dev] VPP As A Router Between Namespaces - 10ms latency

2018-03-07 Thread Sara Gittlin
Thank you Hao
I tested with iperf and got similar results. I cannot find iperf2. Anyway, ns to ns
directly without vpp is perfect: 50 Gbps throughput and 10 us latency (tested
with iperf3). This is very bothersome, since we decided to go with vpp instead
of ovs.
Thanks in advance
-Sara



On 6 March 2018 at 20:00, "Hao Fu (haof)" wrote:

I encountered the similar issue before. Try replacing iperf3 with iperf2.

Hao

On 3/6/18, 8:34 AM, "vpp-dev@lists.fd.io on behalf of Sara Gittlin" <
vpp-dev@lists.fd.io on behalf of sara.gitt...@gmail.com> wrote:

Also the throughput is very poor - iperf3 TCP ~ 2Mbps
what is wrong here  ?

On Tue, Mar 6, 2018 at 6:20 PM, Sara Gittlin 
wrote:
> Hi,
> i have 2 namespaces connected with veth-pairs to vpp -  see setup here
> [https://wiki.fd.io/view/VPP/Configure_VPP_As_A_Router_Between_Namespaces]
>
> i see very big latency ~10ms when i ping between the 2 namespaces
> i expected to see latency in the order of 10's us
> ---
> 4 bytes from 172.16.2.2: icmp_seq=1005 ttl=63 time=11.5 ms
> 64 bytes from 172.16.2.2: icmp_seq=1006 ttl=63 time=9.60 ms
> 64 bytes from 172.16.2.2: icmp_seq=1007 ttl=63 time=7.55 ms
> 64 bytes from 172.16.2.2: icmp_seq=1008 ttl=63 time=5.52 ms
> 64 bytes from 172.16.2.2: icmp_seq=1009 ttl=63 time=9.60 ms
> 64 bytes from 172.16.2.2: icmp_seq=1010 ttl=63 time=17.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1011 ttl=63 time=15.5 ms
> 64 bytes from 172.16.2.2: icmp_seq=1012 ttl=63 time=13.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1013 ttl=63 time=11.6 ms
> 64 bytes from 172.16.2.2: icmp_seq=1014 ttl=63 time=9.54 ms
> 64 bytes from 172.16.2.2: icmp_seq=1015 ttl=63 time=7.67 ms
> 64 bytes from 172.16.2.2: icmp_seq=1016 ttl=63 time=5.56 ms
> 64 bytes from 172.16.2.2: icmp_seq=1017 ttl=63 time=3.44 ms
>
> 
>
> Who can assist ?
>
> Thanks in advance
> -Sara
>
>
>









[vpp-dev] worker_thread creation #vpp

2018-03-07 Thread mbly
Looking to confirm worker_thread creation/deletion capabilities. 
./src/vpp/conf/startup.conf descriptions suggest we must define the number of 
worker_threads up front. I am looking to understand if that is correct, or if 
we can add/remove worker_threads from a live system over time as workloads come 
and go.
-MikeB
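
For reference, the workers are declared up front in the cpu stanza of
startup.conf, e.g. (core numbers illustrative):

cpu {
  main-core 1
  corelist-workers 2-3
}

As far as I know, the worker set is created once at startup; there is no
supported way to add or remove worker threads on a live system.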


[vpp-dev] #vpp #vnet #counters #1801 im->sw_if_counters[VNET_INTERFACE_COUNTER_MPLS].name not initialized.

2018-03-07 Thread Dzmitry Sautsa
File vnet/interface.h:
vnet_interface_counter_type_t has:
...
  VNET_INTERFACE_COUNTER_MPLS = 8,
  VNET_N_SIMPLE_INTERFACE_COUNTER = 9,
...

In vnet/interface.c, function vnet_interface_init,
im->sw_if_counters[...].name is initialized for all counters except
VNET_INTERFACE_COUNTER_MPLS.

As a result, in vnet/interface_format.c, function format_vnet_sw_interface_cntrs,
line 272, cm->name is not a string but 0 (NULL).
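
A minimal sketch of the kind of one-line fix the report implies, following the
direct-assignment pattern used for the other simple counters in
vnet_interface_init() (the exact name string "mpls" is an assumption):

  /* in vnet_interface_init(), alongside the other counter-name assignments */
  im->sw_if_counters[VNET_INTERFACE_COUNTER_MPLS].name = "mpls";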


Re: [vpp-dev] route creating performance issue because of bucket and memory of adj_nbr_tables

2018-03-07 Thread Neale Ranns
Hi Lollita

adj_nbr_tables is the database that stores the adjacencies representing the 
peers attached on a given link. It is sized (perhaps overly) to accommodate 
a large segment on a multi-access link. For your p2p GTPU interfaces, you could 
scale it down, since there is only ever one peer on a p2p link. A well placed 
call to vnet_sw_interface_is_p2p() would be your friend.
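
A minimal sketch of that suggestion, shrinking the per-interface hash when the
link is point-to-point (the p2p bucket/memory values are illustrative
assumptions, not tested numbers):

  int p2p = vnet_sw_interface_is_p2p (vnet_get_main (), sw_if_index);
  BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                         "Adjacency Neighbour table",
                         p2p ? 64 : ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                         p2p ? (32 << 10) : ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);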

Regards,
neale


From:  on behalf of lollita 
Date: Wednesday, 7 March 2018 at 11:09
To: "vpp-dev@lists.fd.io" 
Cc: Kingwel Xie , David Yu Z 
, Terry Zhang Z , Brant 
Lin , Jordy You 
Subject: [vpp-dev] route creating performance issue because of bucket and 
memory of adj_nbr_tables

Hi,

We have encountered a performance issue when batch-adding a large number of GTPU 
tunnels and routes, each route taking one GTPU tunnel interface as nexthop, via 
the API.

The effect is like executing the following commands:

create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 1 encap-vrf-id 0 decap-next ip4
create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 2 encap-vrf-id 0 decap-next ip4
ip route add 1.1.1.1/32 table 2 via gtpu_tunnel0
ip route add 1.1.1.2/32 table 2 via gtpu_tunnel1

After debugging, we found the time is mainly spent initializing 
adj_nbr_tables[nh_proto][sw_if_index] for “ip route add”, in the following function:

BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                       "Adjacency Neighbour table",
                       ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                       ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

We have changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS 
(64*64) to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE 
(32<<20) to 32<<10, and the time cost has been reduced to about one ninth of 
the original result.

The question is: what is adj_nbr_tables used for? Why does it need so many 
buckets and so much memory?

BR/Lollita Liu




Re: [vpp-dev] VPP - mechanism to drop packets

2018-03-07 Thread Dave Barach
Dear Avi,

Yes, if you decide to drop b1, set next1 / error1 in the obvious way. 

The macros vlib_validate_buffer_enqueue_x[2|4] sort out the various incorrect 
speculative enqueue / 2 or 4 pkts going to different successor node cases. 
Simply set (nextN, errorN) as desired and let the boilerplate code deal with 
it... 

D. 

-Original Message-
From: Avi Cohen (A)  
Sent: Wednesday, March 7, 2018 9:02 AM
To: Dave Barach (dbarach) ; vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] VPP - mechanism to drop packets

Thank you Dave - this is very helpful
Please see comments inline 

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of 
> Dave Barach
> Sent: Wednesday, 07 March, 2018 3:20 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP - mechanism to drop packets
> 
> Add an arc from your node to the "error-drop" node, set next0 = 
> MYNODE_NEXT_ERROR and b0->error = node->errors[SOME_ERROR].
> 
[Avi Cohen (A)]
I'm using the standard loop - this is with the next0 and next1 - this means that 
we are processing  2 pkts within a single stage of the loop - correct ?
In this case if I want to drop both pkts I have to set the next1 to
NEXT_ERROR as well correct ?

> Please use the standard dual/single - or quad/single - loop code 
> pattern to walk the incoming vector of buffer indices. You will hate 
> your life if you try to code the vector-walk from first principles. 
> It's not impossible, but it will be a waste of your time / a bunch of 
> needless aggravation.
> 
> b0->sw_if_index[VLIB_TX] is interpreted in a couple of different ways. 
> b0->"ip4/6-
> lookup" interprets it as a fib index. "interface-output" interprets it 
> as a [tx] hardware interface ID.
> 
> I'm not sure what you're trying to do, but if it involves an ip 
> lookup, do NOT set
> b0->sw_if_index[VLIB_TX]. Let the fib code do its job, and send pkts 
> b0->to either
> the input nodes - if mandatory input checks / ttl decrement have not 
> been performed - or to the lookup stage if e.g. you've rewritten the 
> ip header(s) in some fashion.
[Avi Cohen (A)]
I'm implementing my fwding function and I want to bypass the IP-lookup And 
maybe do some rewrite and then to interface-output

Best Regards
Avi
> 
> HTH... Dave
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Avi Cohen 
> (A)
> Sent: Wednesday, March 7, 2018 6:38 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP - mechanism to drop packets
> 
> Hi,
> I'm implementing a simple policy plugin , below is the pseudo-code
> 
> Go over the packets vector
> While (there are packets to process )
> {
> Check if a packet match a specific rule
>   If yes - set the out-interface for the packet to be transmitted
>   Else - drop packet
> }
> 
> 2 Question -
> 1. is there  any function/mechanism that implements a drop ? or any 
> filed in the packet's metadata for drop marking ?
> 2. Regarding the set tx out interface - I see that I can set the 
> sw_if_index[VLIB_TX] - so the packet will be later transmitted through 
> this interface - is this correct ?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 





Re: [vpp-dev] VPP - mechanism to drop packets

2018-03-07 Thread Avi Cohen (A)
Thank you Dave - this is very helpful
Please see comments inline 

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Dave
> Barach
> Sent: Wednesday, 07 March, 2018 3:20 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP - mechanism to drop packets
> 
> Add an arc from your node to the "error-drop" node, set next0 =
> MYNODE_NEXT_ERROR and b0->error = node->errors[SOME_ERROR].
> 
[Avi Cohen (A)] 
I'm using the standard loop - this is with the next0 and next1 - this means that 
we are processing  2 pkts within a single stage of the loop - correct ?
In this case if I want to drop both pkts I have to set the next1 to
NEXT_ERROR as well correct ?

> Please use the standard dual/single - or quad/single - loop code pattern to 
> walk
> the incoming vector of buffer indices. You will hate your life if you try to 
> code
> the vector-walk from first principles. It's not impossible, but it will be 
> a waste
> of your time / a bunch of needless aggravation.
> 
> b0->sw_if_index[VLIB_TX] is interpreted in a couple of different ways. "ip4/6-
> lookup" interprets it as a fib index. "interface-output" interprets it as a 
> [tx]
> hardware interface ID.
> 
> I'm not sure what you're trying to do, but if it involves an ip lookup, do 
> NOT set
> b0->sw_if_index[VLIB_TX]. Let the fib code do its job, and send pkts to either
> the input nodes - if mandatory input checks / ttl decrement have not been
> performed - or to the lookup stage if e.g. you've rewritten the ip header(s) 
> in
> some fashion.
[Avi Cohen (A)] 
I'm implementing my fwding function and I want to bypass the IP-lookup 
And maybe do some rewrite and then to interface-output

Best Regards
Avi
> 
> HTH... Dave
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Avi Cohen (A)
> Sent: Wednesday, March 7, 2018 6:38 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP - mechanism to drop packets
> 
> Hi,
> I'm implementing a simple policy plugin , below is the pseudo-code
> 
> Go over the packets vector
> While (there are packets to process )
> {
> Check if a packet match a specific rule
>   If yes - set the out-interface for the packet to be transmitted
>   Else - drop packet
> }
> 
> 2 Question -
> 1. is there  any function/mechanism that implements a drop ? or any filed in
> the packet's metadata for drop marking ?
> 2. Regarding the set tx out interface - I see that I can set the
> sw_if_index[VLIB_TX] - so the packet will be later transmitted through this
> interface - is this correct ?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 





Re: [vpp-dev] VPP - mechanism to drop packets

2018-03-07 Thread Dave Barach
Add an arc from your node to the "error-drop" node, set next0 = 
MYNODE_NEXT_ERROR and b0->error = node->errors[SOME_ERROR].

Please use the standard dual/single - or quad/single - loop code pattern to 
walk the incoming vector of buffer indices. You will hate your life if you try 
to code the vector-walk from first principles. It's not impossible, but it 
will be a waste of your time / a bunch of needless aggravation.
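
A minimal sketch of the inner single loop of that pattern (the MYNODE_* enum
values and packet_matches_rule() are hypothetical names a plugin would define;
error-drop, node->errors[] and the enqueue macro are standard VPP):

  while (n_left_from > 0 && n_left_to_next > 0)
    {
      u32 bi0 = from[0];
      vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
      u32 next0 = MYNODE_NEXT_INTERFACE_OUTPUT;  /* the "normal" arc */

      if (!packet_matches_rule (b0))             /* hypothetical policy check */
        {
          next0 = MYNODE_NEXT_ERROR_DROP;        /* arc to "error-drop" */
          b0->error = node->errors[MYNODE_ERROR_POLICY_DROP];
        }

      to_next[0] = bi0;
      to_next += 1; from += 1;
      n_left_to_next -= 1; n_left_from -= 1;

      /* fixes up the enqueue when next0 differs from next_index */
      vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                       to_next, n_left_to_next, bi0, next0);
    }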

b0->sw_if_index[VLIB_TX] is interpreted in a couple of different ways. 
"ip4/6-lookup" interprets it as a fib index. "interface-output" interprets it 
as a [tx] hardware interface ID.

I'm not sure what you're trying to do, but if it involves an ip lookup, do NOT 
set b0->sw_if_index[VLIB_TX]. Let the fib code do its job, and send pkts to 
either the input nodes - if mandatory input checks / ttl decrement have not 
been performed - or to the lookup stage if e.g. you've rewritten the ip 
header(s) in some fashion.

HTH... Dave

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Avi Cohen (A)
Sent: Wednesday, March 7, 2018 6:38 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP - mechanism to drop packets

Hi,
I'm implementing a simple policy plugin , below is the pseudo-code 

Go over the packets vector
While (there are packets to process )
{
Check if a packet match a specific rule
  If yes - set the out-interface for the packet to be transmitted
  Else - drop packet
} 

2 Question -
1. is there  any function/mechanism that implements a drop ? or any filed in 
the packet's metadata for drop marking ?
2. Regarding the set tx out interface - I see that I can set the 
sw_if_index[VLIB_TX] - so the packet will be later transmitted through this 
interface - is this correct ? 

Best Regards
Avi







[vpp-dev] acl-plugin gerrit 9689: should I change the (default) behavior to reclassify existing sessions not permitted by updated policy ?

2018-03-07 Thread Andrew Yourtchenko
Hi all,

for those of you using in some fashion the acl-plugin code, wanted to
get your eyes on this in-the-works patch:

https://gerrit.fd.io/r/#/c/9689/

as well as get your opinion on the following:

(1) should I KEEP the default as it is now (which is to retain the
sessions which are already created,
even if the new policy disallows them),

or

(2) should I CHANGE the default to do the reclassification by default ?

My opinion so far:

(1) - KEEP the old default, and have a startup switch and a CLI to
change it. (Do we need an API for this? I would think of this as a one-time
admin thing rather than something that a control plane needs to do, so
the CLI is merely for debugging and the main vehicle is the
startup config.)

The good trait of this approach is that it easily allows us to revisit
the decision later, after there
has been a chance to give more exposure to this functionality.

I wanted to get the feeling from the community on whether I am being
excessively cautious here.

And as well maybe get someone to try it out before it is fully done
and provide feedback. :-)
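
For anyone who wants to try it, the usual gerrit checkout is (the trailing
patchset number is a placeholder - pick the latest one from the review page):

$ git fetch https://gerrit.fd.io/r/vpp refs/changes/89/9689/1 && git checkout FETCH_HEAD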

--a




[vpp-dev] Worker threads utilization

2018-03-07 Thread Shiv
Hi Team,

 Have a couple of questions on worker threads and handoff between threads.

  Each lcore has one worker thread associated with it, and assuming that
RSS is not used, interfaces are associated with lcores, and thus with
threads.
--
ID  Name      Type     LWP   Sched Policy (Priority)  lcore  Core  Socket
0   vpp_main           8536  other (0)                0      0     0
1   vpp_wk_0  workers  8548  other (0)                1      1     0
2   vpp_wk_1  workers  8549  other (0)                2      0     0
3   vpp_wk_2  workers  8550  other (0)                3      1     0
4   stats              8551  other (0)                0      0     0
--
DBGvpp# show interface rx-placement
Thread 1 (vpp_wk_0):
  node dpdk-input:
GigabitEthernet0/1f/6 queue 0 (polling)

Each worker has the entire set of graph nodes, and runs the packet vector
through all the nodes.

So, if the number of lcores is more than the number of interfaces, how are
the other cores utilized?

I see that there is a device handoff function available, but do not see
documentation on when it is enabled, or how it can be used.
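
On the placement question, recent VPP exposes a CLI to move an rx queue to
another worker, e.g. (availability of this command in a given release should
be verified):

  set interface rx-placement GigabitEthernet0/1f/6 queue 0 worker 2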

Regards,
Shiv


[vpp-dev] route creating performance issue because of bucket and memory of adj_nbr_tables

2018-03-07 Thread lollita
Hi,

We have encountered a performance issue when batch-adding a large number of GTPU
tunnels and routes, each route taking one GTPU tunnel interface as nexthop, via
the API.

The effect is like executing the following commands:

create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 1 encap-vrf-id 0 decap-next ip4
create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 2 encap-vrf-id 0 decap-next ip4
ip route add 1.1.1.1/32 table 2 via gtpu_tunnel0
ip route add 1.1.1.2/32 table 2 via gtpu_tunnel1

After debugging, we found the time is mainly spent initializing
adj_nbr_tables[nh_proto][sw_if_index] for "ip route add", in the following function:

BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                       "Adjacency Neighbour table",
                       ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                       ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

We have changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS 
(64*64) to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE 
(32<<20) to 32<<10, and the time cost has been reduced to about one ninth of 
the original result.

The question is: what is adj_nbr_tables used for? Why does it need so many 
buckets and so much memory?

BR/Lollita Liu