Re: [vpp-dev] VPP IPsec queries

2023-03-01 Thread Zhang, Fan

Hi,

You may try the vpp-swan plugin, which makes strongSwan offload IPsec
processing to VPP while keeping the IKE part to itself.


The project is still not perfect, but it takes care of multiple child SAs
and overlapping subnets.


As for an ipip interface being created instead of an ipsec interface - this is
correct behavior, as it allows the tunnel implementation to be shared between
IPsec, WireGuard, GSO, etc.


However, vpp-swan does not use VPP's ipip interface feature, due to the
problem you noticed.


You can find vpp-swan in vpp/extra/strongswan/vpp_swan.

Regards,

Fan

On 3/1/2023 4:02 AM, Ashish Mittal wrote:

Hi Varun,

Please find my inputs inline.

Regards

Ashish Mittal

On Sat, 25 Feb 2023, 11:23 pm Varun Tewari wrote:


Hello Team,

I am new to VPP and am evaluating this technology to build an IPsec
responder for our use cases.
Our initial tests do show the performance might of VPP.
However, on probing this further in depth, I noticed a few
limitations, and I am writing to seek clarification around them.
All my observations are for VPP 23.02, using VPP's IKEv2 plugin.
I am using a Linux host with strongSwan as the peer for my tests.

My observations:

1.
VPP does not seem to support multiple child SAs (phase 2 / IPsec SAs)
within the same tunnel.
A single IPsec SA works fine: an interface ipip0 gets created and
the SPD shows the correct binding (show ipsec all).
However, when I bring up a second child SA for a different traffic
selector (TS), I see that the SPD binding for the interface gets
overwritten and the new child SA is installed in place of the previous one.
Naturally, this leads to traffic drops for traffic matching the first TS.

Q: Is this by design, or have I got my config wrong in some way?

Here is the quick output from VPP and strongSwan:
sudo swanctl --list-sas
net-1: #11, ESTABLISHED, IKEv2, abb046c62a60c38a_i* dc95e079629854ca_r
  local  'roadwarrior.vpn.example.com' @ 17.17.17.1[500]
  remote 'vpp.home' @ 17.17.17.2[500]
AES_CBC-256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
  established 848s ago, reauth in 84486s
  net-1: #16, reqid 16, INSTALLED, TUNNEL,
ESP:AES_CBC-192/HMAC_SHA1_96/ESN
    installed 848s ago, rekeying in 84690s, expires in 85552s
    in  cec3d263,  24717 bytes,   107 packets, 687s ago
    out a1816d8f, 179718 bytes,   778 packets,   0s ago
    local 16.16.16.0/24 
    remote 18.18.18.0/24 
  net-2: #17, reqid 17, INSTALLED, TUNNEL,
ESP:AES_CBC-192/HMAC_SHA1_96/ESN
    installed 686s ago, rekeying in 84831s, expires in 85714s
    in  cd14add0, 122199 bytes,   529 packets,   2s ago
    out de989d78, 122199 bytes,   529 packets,   2s ago
    local 16.16.15.0/24 
    remote 18.18.18.0/24 

vpp# show ipsec all
[0] sa 2181038080 (0x8200) spi 3468939875 (0xcec3d263)
protocol:esp flags:[esn anti-replay ]
[1] sa 3254779904 (0xc200) spi 2709613967 (0xa1816d8f)
protocol:esp flags:[esn anti-replay inbound ]
[2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0)
protocol:esp flags:[esn anti-replay ]
[3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78)
protocol:esp flags:[esn anti-replay inbound ]
SPD Bindings:
ipip0 flags:[none]
 output-sa:
  [2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0)
protocol:esp flags:[esn anti-replay ]
 input-sa:
  [3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78)
protocol:esp flags:[esn anti-replay inbound ]
IPSec async mode: off
vpp#

All 4 SAs exist; however, the SPD binding shows only the latest two, which
overwrote the SAs for the previous TS, leading to traffic drops.

AM=> The IKEv2 plugin is still in an experimental state. The behaviour you
are observing is, unfortunately, the current implementation. Consider it
either a bug or a simplification made for the experimental implementation.




2.
Overlapping subnets between different IPsec tunnels

When IKEv2 completes, I see that it creates an ipip interface and the
relevant child SAs, and ties them to the interface to protect traffic.
So far, all is good.
Now, for each source subnet that is expected to be protected by the
tunnel, we add a route into VPP steering that traffic via the ipip
interface.
This works fine as long as I keep the subnets distinct.

Q: What's the usual strategy when we have overlapping subnets in
two distinct tunnels?
T1: SrcSubnet1 DestinationSubnet1
T2: SrcSubnet1 DestinationSubnet2

When T1 is brought up, we add a FIB entry for SrcSubnet1 via
ipipT1 and things work fine.
When T2 comes up, ipipT2 is created, and now I need to add a FIB
entry for SrcSubnet1 via ipipT2 - and, as expected, things break here.
AM=> I am not sure, but it may be possible via ABF (ACL-based forwarding)
instead of plain FIB entries. I have never tried it myself; a rough sketch
follows below.
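
For the distinct-subnet case, a plain per-tunnel route is enough, e.g.:

    ip route add 16.16.16.0/24 via ipip0

For the overlapping case, where the destination prefix alone cannot pick the
tunnel, the idea would be to classify on both source and destination and
steer with ABF. A hedged sketch only - the prefixes and the
GigabitEthernet0/8/0 ingress interface are made-up examples, the ACLs are
assumed to get indices 0 and 1, and the exact acl-plugin/abf CLI keywords
should be double-checked against the plugin documentation:

    set acl-plugin acl permit src 192.168.1.0/24 dst 10.10.10.0/24
    set acl-plugin acl permit src 192.168.2.0/24 dst 10.10.10.0/24
    abf policy add id 10 acl 0 via ipip0
    abf policy add id 20 acl 1 via ipip1
    abf attach ip4 policy 10 priority 10 GigabitEthernet0/8/0
    abf attach ip4 policy 20 priority 20 GigabitEthernet0/8/0

Packets arriving on the attached interface that match ACL 0 would then be
forwarded via ipip0, and those matching ACL 1 via ipip1, regardless of what
the FIB says.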

Re: [vpp-dev] DPDK 22.11 bump patch to review/merge

2023-02-07 Thread Zhang, Fan
Sorry, I forgot the patch link: dpdk: bump to dpdk 22.11 (Ie120346c) ·
Gerrit Code Review (fd.io) <https://gerrit.fd.io/r/c/vpp/+/37840>


On 2/7/2023 3:44 PM, Zhang, Fan via lists.fd.io wrote:

Hi,

We have had the DPDK 22.11 patch hanging there for a while.

I remember the decision in the last community call was not to wait for
DPDK 23.03 but to merge this one instead.


Could someone have a deeper look at it? And do we need some tests
under CSIT first?


Regards,

Fan








[vpp-dev] DPDK 22.11 bump patch to review/merge

2023-02-07 Thread Zhang, Fan

Hi,

We have had the DPDK 22.11 patch hanging there for a while.

I remember the decision in the last community call was not to wait for
DPDK 23.03 but to merge this one instead.


Could someone have a deeper look at it? And do we need some tests
under CSIT first?


Regards,

Fan





Re: [vpp-dev] [csit-dev] Bumping dpdk to 22.11 in vpp 23.02 ?

2023-01-12 Thread Zhang, Fan

Then I was an hour late :-( Thanks Dave!

Where can I find the calendar with the new meeting time?

On 1/12/2023 3:12 PM, Dave Wallace wrote:
There was a VPP Community meeting this Tuesday at the new time (5am 
PST) that was lightly attended.




Re: [vpp-dev] [csit-dev] Bumping dpdk to 22.11 in vpp 23.02 ?

2023-01-12 Thread Zhang, Fan

I agree.

It is worth discussing in the VPP community call whether VPP should access
the DPDK internal APIs - it surely gives us more flexibility, at the price
of possibly more maintenance effort.


BTW, was there a meeting this Tuesday? I joined 4 minutes late but
nobody was there.


Apart from that, I believe the patch is in relatively good shape, though it
surely lacks testing.



As for continuous build/sanity between VPP and the DPDK main branch - the
way DPDK functional/performance testing of patches works now is RC-based;
there are build tests carried out per patch, and there is a crypto unit test
running nightly with only SW crypto PMDs (@Kai, is it still running?).


If VPP runs sanity checks against the DPDK main branch, I personally
believe VPP may catch some potential DPDK bugs earlier than the DPDK
validation team does.


Hence I believe it is a very good idea as a cooperation between the two
projects. In my opinion, this also means we need DPDK tech board members
attending regular meetings with us in case problems are caught early.



Regards,

Fan

On 1/11/2023 10:29 PM, Andrew Yourtchenko wrote:

My naive impression looking at the change is that it's still work in
progress, with several comments open. Especially with the autumn DPDK release
IIRC being the "API-breaking" one, it looks a bit risky to me… I think haste may
get us into places we don't wanna be in. I would vote to merge this into master
post-RC1 milestone, thus giving it more time to soak, and not putting undue
strain on anyone.

At the same time I would like to (again?) bring up the idea of doing some sort
of continuous build/sanity testing between the VPP and DPDK master branches -
Fan, I think we discussed this once? We could then have a change ready "just in
time" in the future, potentially? As I am not well versed in DPDK - does this
idea even make sense?

--a


On 11 Jan 2023, at 16:53, Maciek Konstantynowicz (mkonstan) via lists.fd.io wrote:

Hi,

On the CSIT call just now, Kai made us aware of issues with the above
(cryptodev, sat), as captured in this patch:

37840: dpdk: make impact to VPP for changes in API for DPDK 22.11 | 
https://gerrit.fd.io/r/c/vpp/+/37840

23.02 RC1 is next week and in CSIT we start testing at RC1 milestone, so it’s 
very last minute …

Also, in the past we got burned by a DPDK bump requiring firmware upgrades
on FVL and CVL NICs, which for our performance testbed fleet is a bit of an
operation (e.g. on Arm we have to remove the NICs and put them into Xeon
machines to do the firmware upgrade, unless things have improved recently).

I'm asking for views on whether we could delay the DPDK version bump to
avoid rushing it in, especially as there are open issues.

Thoughts?

Cheers,
-Maciek









Re: [vpp-dev] crashing in vlib_frame_vector_byte_offset

2023-01-10 Thread Zhang, Fan

Hi,

I believe you missed configuring vnet_hw_if_tx_frame_t for that frame.
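
Roughly, something like the following inside buffer_send_to_node() before
putting the frame to the node (a sketch against a recent VPP tree;
vnet_hw_if_tx_frame_t does not exist in 20.09, so the exact fields may
differ for your version):

    /* Frames handed to an interface tx node carry a vnet_hw_if_tx_frame_t
       in their scalar arguments, selecting the tx queue (recent VPP). */
    vlib_frame_t *frame = vlib_get_frame_to_node (vm, node_index);
    vnet_hw_if_tx_frame_t *tf = vlib_frame_scalar_args (frame);
    clib_memset (tf, 0, sizeof (tf[0]));
    tf->queue_id = 0;       /* tx queue to use */

    u32 *to_next = vlib_frame_vector_args (frame);
    to_next[0] = buffer_index;
    frame->n_vectors = 1;
    vlib_put_frame_to_node (vm, node_index, frame);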

Regards,

Fan

On 1/10/2023 12:12 PM, shaligram.prakash wrote:

 Hi,

 I am facing a crash in the code below. It's a 12 worker / 1 core setup.
We are using a somewhat old VPP - 20.09.


The function under suspicion is buffer_send_to_node():

 1.   vlib_frame_t *frame;
 2.   u32 *to_next;
 3.
 4.   frame = vlib_get_frame_to_node (vm, node_index);
 5.   frame->n_vectors = 1;
 6.   to_next = vlib_frame_vector_args (frame);
 7.
 8.   to_next[0] = buffer_index;
 9.
10.   vlib_put_frame_to_node (vm, node_index, frame);
11.   return;


The crash is seen at

#3  
#4 vlib_frame_vector_byte_offset (scalar_size=) at 
/home/jenkins/vpp/build-root/install-vpp-native/vpp/include/vlib/node_funcs.h:259
#5  vlib_frame_vector_args (f=) at 
/home/jenkins/vpp/build-root/install-vpp-native/vpp/include/vlib/node_funcs.h:270
#6  buffer_send_to_node (vm=vm@entry=0x7fc406c323c0, node_index=332, 
buffer_index=buffer_index@entry=19753015) at 
/home/jenkins/test_buffer.c:1105

..
..
#11 0x7fc43bfbd702 in dispatch_node 
(last_time_stamp=61944131704815156, frame=0x7fc40695c500, 
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, 
node=0x7fc4107c0480, vm=0x7fc406c323c0) at 
/home/jenkins/vpp/src/vlib/main.c:1197
#12 dispatch_pending_node (vm=vm@entry=0x7fc406c323c0, 
pending_frame_index=pending_frame_index@entry=4, 
last_time_stamp=61944131704815156) at 
/home/jenkins/vpp/src/vlib/main.c:1355



Can the frame allocated at line 4 be freed via dispatch_pending_node under
any conditions?








Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3wfd

2023-01-06 Thread Zhang, Fan

Hi Benoit,

Everything I state below is based on our understanding of FVL/CVL,
not MLX NICs.


It is not the HW queue, as the queue size can be bigger than 256. It is an
interim buffer (please forgive me, I forgot the official term for it) that
the NIC fills with descriptors and the CPU fetches from.


So when VPP requests 256 packets, the FVL/CVL driver actually only provides
a maximum of 64 descriptors for the CPU to fetch at any given time, so the
buffer is depleted. Since today's CPUs are really fast and we are eager for
more packets, the CPU keeps asking the NIC - and the awkward situation
arises where the NIC is busy telling the CPU "no more, please come back next
time" but never gets a chance to refill the interim buffer. So it becomes a
special "deadlock" between the NIC and the CPU.


To answer your retry question - I actually wrote code that retries
indefinitely, and it went into a 100% real deadlock: the total number of
packets fetched was 64 no matter how many times it retried.


The solution is simple: instead of depleting the interim buffer of
descriptors, we always ask for half of the 64 packets. On the next RX burst,
the NIC is more than happy to hand the remaining 32 packets to the CPU while
refilling another 32, with no problem.
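
In pseudo-C, the idea is roughly the following (an illustrative sketch only,
not the actual patch - see the Gerrit change below for the real code):

    #include <rte_ethdev.h>

    /* Poll in chunks of at most 32 descriptors instead of asking for all
       256 at once, and stop as soon as the NIC returns a short burst so it
       always has headroom to refill. */
    static inline uint16_t
    rx_burst_in_chunks (uint16_t port_id, uint16_t queue_id,
                        struct rte_mbuf **pkts, uint16_t n_wanted)
    {
      const uint16_t chunk = 32;
      uint16_t n_rx = 0;

      while (n_rx < n_wanted)
        {
          uint16_t n_ask = (n_wanted - n_rx) > chunk ? chunk : (n_wanted - n_rx);
          uint16_t n = rte_eth_rx_burst (port_id, queue_id, pkts + n_rx, n_ask);
          n_rx += n;
          if (n < n_ask)    /* queue temporarily drained - let the NIC refill */
            break;
        }
      return n_rx;
    }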


The problem was first found by the CSIT team. You can find more details in
dpdk: improve rx burst count per loop (I804dce6d) · Gerrit Code Review
(fd.io) <https://gerrit.fd.io/r/c/vpp/+/35620>


Regards,

Fan

On 1/6/2023 3:25 PM, Benoit Ganne (bganne) via lists.fd.io wrote:

Interesting! Thanks Fan for bringing that up.
So if I understand correctly, with the previous DPDK behavior we could have, say,
128 packets in the rxq; VPP would request 256, get 32, then request 224
(256-32) again, etc.
While VPP requests more packets, the NIC has the opportunity to add packets to
the rxq, and VPP could end up with 256...
With the new behavior, starting from the same initial state, VPP requests 256
packets, gets 128, and calls it a day.
If that's the case, maybe a better heuristic could be to retry up to 8 times 
(256/32) before giving up?

Best
ben


-Original Message-
From: vpp-dev@lists.fd.io On Behalf Of Zhang, Fan
Sent: Friday, January 6, 2023 16:04
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3wfd

There was a change in DPDK 21.11 that impacts the no-multi-seg option in VPP.

In VPP's DPDK RX, the original implementation was to fetch 256 packets; if
not enough packets were fetched from the NIC queue, it tried again with a
smaller amount.

DPDK 21.11 changed this: with "no-multi-seg" enabled, the big burst is no
longer sliced into smaller (say, 32-packet) bursts with NIC RX performed
multiple times. As a result, VPP always drained the NIC queue in the first
attempt, and the NIC could not keep up filling enough descriptors into the
queue before the CPU did another RX burst - at least that was the case for
Intel FVL and CVL.

This caused a lot of empty polling in the end, and the VPP vector size was
always 64 instead of 256 (for CVL and FVL).


I addressed the problem for CVL/FVL by having VPP do smaller bursts (up to
32) multiple times manually instead. However, I didn't test on MLX NICs due
to lack of the HW. (a9fe20f4b dpdk: improve rx burst count per loop)


Since different HW has its own sweet spot for the burst size that lets it
work with the CPU in harmony - possibly with different problems as well -
this won't be easily addressed by non-vendor developers.




Regards,

Fan





On 1/6/2023 2:16 PM, r...@gmx.net wrote:


Hi Matt,

thanks a lot. I ended up temporarily solving it via a downgrade to
v21.10, where the option `no-multi-seg` provides full line speed of 100
Gbps (tested with a mixed TRex profile, avg. packet size 900 bytes).
Weirdly enough, any v22.xx causes a major performance drop with the MLX5
DPDK PMD enabled. I will open another thread to discuss usage of TRex with
the rdma driver.

Below the working config for v21.10 with my Mellanox-ConnectX-6-DX
cards:

unix {
  exec /etc/vpp/exec.cmd
# l2fwd mode based on mac
#  exec /etc/vpp/l2fwd.cmd
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp

  ## run vpp in the interactive mode
  # interactive

  ## do not use colors in terminal output
  # nocolor

  ## do not display banner
  # nobanner
}

api-trace {
## This stanza controls binary API tracing. Unless there is
a very strong reason,
## please leave this feature enabled.
  on
## Additional parameters:
##
## To set the number of binary API tr

Re: [vpp-dev] Support for VPP compilation in offline mode

2023-01-04 Thread Zhang, Fan

Hi Chinmaya,

In VPP's Makefile (vpp/Makefile at master · FDio/vpp · GitHub), lines 65 to
182 contain the necessary packages that must be installed for compiling VPP
(depending on your OS). You may install these packages on your host
manually before compiling VPP.
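
For example, on an Ubuntu host the workflow could look roughly like this (an
illustrative sketch only - the package names below are just a few of those
listed in the Makefile, and transitive dependencies also have to be
downloaded and copied):

    # On a machine with internet access, download the .deb files named in
    # the Makefile's dependency list (only a few shown here):
    apt-get download build-essential cmake ninja-build libssl-dev git

    # Copy the downloaded .deb files to the offline build host, then:
    sudo dpkg -i ./*.deb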


Regards,

Fan

On 1/3/2023 7:43 PM, Chinmaya Aggarwal wrote:
Currently, as part of VPP compilation, there are commands such as "make
install-dep" and "make install-ext-deps" which download the required
dependency packages from the internet. We want to automate VPP compilation
in offline mode, i.e. our environment will not have internet access and we
want to compile VPP in such an environment. Since the above-mentioned
commands by default reach out to the internet to download the required
packages, how can we set up VPP compilation offline?




Re: [vpp-dev] Exclude a rx-queue from RSS #vpp_qos #dpdk

2022-12-06 Thread Zhang, Fan

Adding Kai Ji to see if he can help.

Thanks in advance for your help, Kai.

On 12/6/2022 9:13 AM, ltham...@usc.edu wrote:
Once I enable a flow on an interface to redirect certain packets to
queue 0, I don't want other packets to use queue 0. This could be
done by disabling RSS for queue 0. I am looking for a way to disable
RSS for queue 0, or to exclude queue 0 from RSS. Any pointer on this
would be helpful.




Re: [vpp-dev] Exclude a rx-queue from RSS #dpdk #vpp #vpp_qos

2022-12-05 Thread Zhang, Fan

Hi,

vpp/flow_cli.c at master · FDio/vpp · GitHub contains some useful
information on adding a flow that redirects to queue X (check out the
test_flow() function's redirect_to_queue option, or
vl_api_flow_enable_t_handler if you are using the API).


After a flow is configured, you may enable it on an interface. If the
interface is managed by DPDK, the VPP flow will be translated into a DPDK
rte_flow and programmed into the NIC HW. Please note that not all NIC HW
supports the flow configuration you might need.


However, you still have to tweak the rest of your flows so that they do not
touch queue 0. A sketch of the CLI usage follows below.
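
For example, something along these lines (an illustrative sketch only - the
match fields, flow index and interface name are made-up examples, and the
exact keywords should be verified against flow_cli.c):

    vpp# test flow add src-ip 192.168.8.8 proto udp redirect-to-queue 0
    vpp# test flow enable index 0 TwentyFiveGigabitEthernet3b/0/0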


Regards,

Fan

On 12/5/2022 9:33 AM, ltham...@usc.edu wrote:

Hi,

I am setting num-rx-queues to 4 for a dpdk dev in the VPP startup config.
This enables RSS and distributes packets coming in on this interface
across these 4 queues.


I would like to configure an RTE flow to redirect certain packets to
queue 0. When this is done, I don't want other packets to land on queue 0
due to RSS.


Is there a way to disable RSS on a set of rx-queues?

-Nikhil



