Re: [vpp-dev] VPP Stateful NAT64 crashes with segmentation fault

2022-11-16 Thread Gabor LENCSE

Hi Philip,

Thank you very much for the detailed instructions!

It was still not completely straightforward to do the compilation because 
of my proxy, but I finally managed to do it. However, the installation of 
the packages failed due to an unsatisfied dependency (at least it seems 
so).


I also document my compilation steps below, in case someone else finds 
themselves in the same situation of having to use a proxy. You can jump 
straight to the "PROBLEM" label.


On 11/12/2022 12:52 AM, filvarga wrote:

Hi Gabor,

I would suggest using Ubuntu 20.04 to build your .deb packages and 
then upload them to the servers.

Basically you would do this:

1) git clone https://github.com/FDio/vpp.git && cd vpp
2) git checkout v22.06


Everything worked fine up to this point.

However, the following command did not succeed, because it wanted to 
download something and I do not have direct Internet access on the 
StarBED nodes. I use a proxy; it is set for git as follows:


root@p106:~# cat .gitconfig
[http]
    proxy = http://172.16.46.241:8080


3) make install-ext-dep


The output of this command was the following:

make -C build/external install-deb
make[1]: Entering directory '/root/vpp/build/external'
make[2]: Entering directory '/root/vpp/build/external'
dpkg-buildpackage: info: source package vpp-ext-deps
dpkg-buildpackage: info: source version 22.06-7
dpkg-buildpackage: info: source distribution unstable
dpkg-buildpackage: info: source changed by VPP Dev 
dpkg-buildpackage: info: host architecture amd64
 dpkg-source --before-build .
 debian/rules clean
make[3]: Entering directory '/root/vpp/build/external/deb'
dh clean
   debian/rules override_dh_clean
make[4]: Entering directory '/root/vpp/build/external/deb'
make -C .. clean
make[5]: Entering directory '/root/vpp/build/external'
make[5]: Leaving directory '/root/vpp/build/external'
make[4]: Leaving directory '/root/vpp/build/external/deb'
make[3]: Leaving directory '/root/vpp/build/external/deb'
 debian/rules build
make[3]: Entering directory '/root/vpp/build/external/deb'
dh build
   dh_update_autotools_config
   dh_autoreconf
   create-stamp debian/debhelper-build-stamp
make[3]: Leaving directory '/root/vpp/build/external/deb'
 debian/rules binary
make[3]: Entering directory '/root/vpp/build/external/deb'
dh binary
   dh_testroot
   dh_prep
   debian/rules override_dh_install
make[4]: Entering directory '/root/vpp/build/external/deb'
make -C .. install
make[5]: Entering directory '/root/vpp/build/external'
mkdir -p downloads
Downloading http://github.com/01org/intel-ipsec-mb/archive/v1.2.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:02:09 --:--:--     0
curl: (28) Failed to connect to github.com port 80: Connection timed out
make[5]: *** [packages/ipsec-mb.mk:48: downloads/v1.2.tar.gz] Error 28
make[5]: Leaving directory '/root/vpp/build/external'
make[4]: *** [debian/rules:25: override_dh_install] Error 2
make[4]: Leaving directory '/root/vpp/build/external/deb'
make[3]: *** [debian/rules:17: binary] Error 2
make[3]: Leaving directory '/root/vpp/build/external/deb'
dpkg-buildpackage: error: debian/rules binary subprocess returned exit 
status 2

make[2]: *** [Makefile:74: vpp-ext-deps_22.06-7_amd64.deb] Error 2
make[2]: Leaving directory '/root/vpp/build/external'
make[1]: *** [Makefile:81: install-deb] Error 2
make[1]: Leaving directory '/root/vpp/build/external'
make: *** [Makefile:627: install-ext-deps] Error 2
root@p106:~/vpp#

As far as I understand, I should set the proxy for your downloader, 
which seems to be curl. So, I tried with:


export http_proxy="http://172.16.46.241:8080";
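
(A note in case someone repeats this: curl also honours the https_proxy 
variable, so exporting both may be needed if a download gets redirected to 
https. A minimal sketch, using my proxy address:)

# curl picks up the lowercase proxy variables from the environment
export http_proxy="http://172.16.46.241:8080"
export https_proxy="http://172.16.46.241:8080"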

Now the downloader seemed to get one step further, but it eventually failed:

root@p106:~/vpp# make install-ext-dep
make -C build/external install-deb
make[1]: Entering directory '/root/vpp/build/external'
make[2]: Entering directory '/root/vpp/build/external'
dpkg-buildpackage: info: source package vpp-ext-deps
dpkg-buildpackage: info: source version 22.06-7
dpkg-buildpackage: info: source distribution unstable
dpkg-buildpackage: info: source changed by VPP Dev 
dpkg-buildpackage: info: host architecture amd64
 dpkg-source --before-build .
 debian/rules clean
make[3]: Entering directory '/root/vpp/build/external/deb'
dh clean
   debian/rules override_dh_clean
make[4]: Entering directory '/root/vpp/build/external/deb'
make -C .. clean
make[5]: Entering directory '/root/vpp/build/external'
make[5]: Leaving directory '/root/vpp/build/external'
make[4]: Leaving directory '/root/vpp/build/external/deb'
make[3]: Leaving directory '/root/vpp/build/external/deb'
 debian/rules build
make[3]: Entering directory '/root/vpp/build/external/deb'
dh build
   dh_update_autotools_config
   dh_autoreconf
   create-stamp debian/debhelper-build-stamp
make[3]: Leaving directory '/root/vpp/build/external/deb'
 debian/rules b

Re: [vpp-dev] Understanding the use of FD.io VPP as a network stack

2022-11-16 Thread Florin Coras
Hi Federico, 

Apologies, I missed your first email. 

More inline.

Regards,
Florin


> On Nov 16, 2022, at 7:53 AM, Federico Strati via lists.fd.io wrote:
> 
> Hello Ben, All
> 
> first of all, thanks for the prompt reply.
> 
> Now let me clarify the questions, which confused you because of my 
> ignorance.
> 
> 1. How to use the VPP host stack (TCP):
> 
> I refined my understanding having seen the presentation of Florin 
> (https://www.youtube.com/watch?v=3Pp7ytZeaLk):
> Up to now the revised diagram would be:
> 
> Process A (TCP client or server) <--> VCL (VPP Comms Library) or VCL+LDP or 
> ??? <--> VPP (Session -> TCP -> Eth) <--> Memif or ??? custom plug-in <--> 
> Process B (our radio stack including driver)
> 
> i.e. we would like to use the VPP network stack (also termed host stack) as a 
> user-space TCP stack over our radio stack.

So let’s first clarify what library to use. VCL is a library that apps link 
against to interact with the host stack in vpp (session layer and transports) 
in a POSIX-like fashion. LDP is an LD_PRELOAD shim that intercepts 
socket-related syscalls and redirects them into VCL. 
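
A minimal sketch of running an unmodified app (e.g. iperf3) through LDP/VCL; 
the library path below is an assumption and depends on how vpp was built or 
packaged:

# assumed install location of the LD_PRELOAD shim; adjust to your build
export LDP_PATH=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so
# VCL reads its configuration from the file pointed to by VCL_CONFIG
export VCL_CONFIG=/etc/vpp/vcl.conf
LD_PRELOAD=$LDP_PATH iperf3 -s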

Now, regarding your diagram above: I’m assuming you're trying to generate TCP 
packets and feed them into a separate process. So yes, memif or tap should be 
pretty good options for getting packets into your radio stack. You could also 
build some shared-memory mechanism whereby you pass/expose vpp buffers to 
process B. 
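
If you go the memif route, something along these lines in the vpp CLI would 
expose a memif interface that process B can attach to as a slave (this is only 
a sketch; interface name and address are placeholders, and it assumes the 
default memif socket):

create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 192.168.1.1/24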

Another option, which I assume is not trivial, would be to move your radio 
stack into a vpp node. TCP can be forced to feed packets to custom next nodes. 

> 
> Florin was saying that there is an alternative for VCL: we don't have legacy 
> BSD socket apps, hence we are free to use the most advanced interface.

I’m guessing you’re thinking about using raw session layer apis. I’d first 
start with VCL to keep things simple. 
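
If it helps, a minimal vcl.conf sketch to start from (the values are 
illustrative, not tuned, and the app-socket-api path assumes the default vpp 
session socket, which I believe requires session { enable use-app-socket-api } 
in startup.conf):

vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-local
  app-scope-global
  app-socket-api /run/vpp/app_ns_sockets/default
}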

> 
> Possibly we would like to be zero-copy insofar as possible.

There is no zero-copy api between vcl and session layer in vpp currently. 

> 
> The "North" of the TCP stack is the (client/server) apps, the "South" of the 
> stack are IP or Eth frames.
> 
> Ideally we would like to know what are the best options to interface with VPP 
> north-bound and south-bound.
> 
> We don't exit into a NIC card, that would be:
> 
> Process A --> VCL --> VPP --> DPDK plug-in (or ??? AF_Packet / AF_XDP) --> NIC
> 
> Hence what are the best possible solutions?

See if the above works for you. 

> 
> 2. VPP multi-instances.
> 
> I'm not asking for multi-threading (which I already use successfully), but 
> for running multiple VPP processes in parallel, of course paying attention to 
> core pinning.
> 
> My question was, what are the options to change in startup.conf ?

Yes, you can use multiple startup.conf files; just point vpp to them with -c. 
Note that you can’t run multiple vpp instances with the dpdk plugin loaded. 
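
A rough sketch of what typically has to differ between the per-instance 
startup.conf files (socket names, prefixes and core numbers below are just 
examples):

# instance 1 (e.g. /etc/vpp/startup-vpp1.conf)
unix { cli-listen /run/vpp/cli-vpp1.sock }
api-segment { prefix vpp1 }
cpu { main-core 1 corelist-workers 2-5 }

# instance 2 (e.g. /etc/vpp/startup-vpp2.conf)
unix { cli-listen /run/vpp/cli-vpp2.sock }
api-segment { prefix vpp2 }
cpu { main-core 6 corelist-workers 7-10 }

Each instance is then started with its own file, e.g. vpp -c 
/etc/vpp/startup-vpp1.conf, and vppctl -s /run/vpp/cli-vpp1.sock selects which 
instance the CLI talks to.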

> 
> 3. My tests and the RSS mechanism.
> 
> My set-up is the following: two identical machines, X12's (two xeon, 38+38 
> cores), each one equipped with one Mellanox 100Gbps NIC card (one connectx-4 
> one connectx-5)
> 
> Using iperf3 with LDP+VCL to interface with VPP, hence the flow is:
> 
> Iperf3 client <--> VCL+LDP -> VPP -> DPDK plug-in -> Mlx NIC <-link-> Mlx NIC 
> -> DPDK plug-in -> VPP -> VCL+LDP <--> Iperf3 server
> Machine A <---> Machine B
> 
> Distribution Ubuntu 20.04 LTS, kernel low latency customised, isolated all 
> cores except two.
> 
> VPP version 21.10 recompiled natively on the machines.
> 
> I'm using DPDK not the RDMA driver.
> 
> What I'm observing is strange variations in throughput for the following 
> scenario:
> 
> Iperf3 single tcp stream on one isolated core, VPP 8 cores pinned to 8 NIC 
> queues
> 
> sometimes it is 15Gbps, sometimes it is 36Gbps ("show hardware" says 3 queues 
> are used)

Some comments here:
- TCP flows are pinned to cores, so only one core will ever be active in your 
test above. 
- Are iperf, vpp’s cores and the NIC on the same NUMA node? To see the NUMA 
node of the NIC, use “show hardware” in vpp. 
- If you plan to use multiple iperf streams, make sure you have as many rx 
queues as vpp workers, and the number of tx queues should be rx queues + 1. 
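
With the dpdk plugin, that is configured per device in startup.conf, roughly 
like this (the PCI address is a placeholder for your NIC):

dpdk {
  dev 0000:51:00.0 {
    num-rx-queues 8
    num-tx-queues 9
  }
}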

> 
> Hence I was a bit puzzled about RSS. I'm not expecting such large variations 
> from run to run.
> 
> I'm not a VPP expert, so if you have suggestions on what to look for, they 
> are welcome :-)
> 
> Thank you in advance for your patience and for your time
> 
> Kind regards
> 
> Federico
> 
> 
> 
> 



[vpp-dev] How to make VPP work with Mellanox ConnectX-6 NICs?

2022-11-16 Thread Elias Rudberg
Hello VPP experts,

We have been using VPP with Mellanox ConnectX-5 cards for a while,
which has been working great.

Now we have a new server where we want to run VPP in a similar way to what
we are used to; the difference is that the new server has ConnectX-6
cards instead of ConnectX-5.

The lspci command shows each ConnectX-6 card as follows:

51:00.0 Infiniband controller: Mellanox Technologies MT28908 Family
[ConnectX-6]

Trying to create an interface using the following command:

create int rdma host-if ibs1f1 name if1 num-rx-queues 4

gives the following error:

DBGvpp# create int rdma host-if ibs1f1 name if1 num-rx-queues 4
create interface rdma: Queue Pair create failed: Invalid argument

and journalctl shows the following:

Nov 16 16:06:39 [...] vnet[3147]: rdma: rdma_txq_init: Queue Pair
create failed: Invalid argument
Nov 16 16:06:39 [...] vnet[3147]: create interface rdma: Queue Pair
create failed: Invalid argument
Nov 16 16:06:39 [...] kernel: infiniband mlx5_3: create_qp:3206:(pid
3147): Create QP type 8 failed

We are using Ubuntu 22.04 and the VPP version tested was vpp v22.10.

Do we need to do something different when using ConnectX-6 cards
compared to the ConnectX-5 case?

Best regards,
Elias





Re: [vpp-dev] Understanding the use of FD.io VPP as a network stack

2022-11-16 Thread Federico Strati via lists.fd.io
Hello Ben, All

first of all, thanks for the prompt reply.

Now let me clarify the questions, which confused you because of my ignorance.

1. How to use the VPP host stack (TCP):

I refined my understanding having seen the presentation of Florin 
(https://www.youtube.com/watch?v=3Pp7ytZeaLk):
Up to now the revised diagram would be:

Process A (TCP client or server) <--> VCL (VPP Comms Library) or VCL+LDP or ??? 
<--> VPP (Session -> TCP -> Eth) <--> Memif or ??? custom plug-in <--> Process 
B (our radio stack including driver)

i.e. we would like to use the VPP network stack (also termed host stack) as a 
user-space TCP stack over our radio stack.

Florin was saying that there is an alternative for VCL: we don't have legacy 
BSD socket apps, hence we are free to use the most advanced interface.

Possibly we would like to be zero-copy insofar as possible.

The "North" of the TCP stack is the (client/server) apps, the "South" of the 
stack are IP or Eth frames.

Ideally we would like to know what are the best options to interface with VPP 
north-bound and south-bound.

We don't exit into a NIC card, that would be:

Process A --> VCL --> VPP --> DPDK plug-in (or ??? AF_Packet / AF_XDP) --> NIC

Hence what are the best possible solutions?

2. VPP multi-instances.

I'm not asking for multi-threading (which I already use successfully), but for 
running multiple VPP processes in parallel, of course paying attention to core 
pinning.

My question was, what are the options to change in startup.conf ?

3. My tests and the RSS mechanism.

My set-up is the following: two identical machines, X12's (two xeon, 38+38 
cores), each one equipped with one Mellanox 100Gbps NIC card (one connectx-4 
one connectx-5)

Using iperf3 with LDP+VCL to interface with VPP, hence the flow is:

Iperf3 client <--> VCL+LDP -> VPP -> DPDK plug-in -> Mlx NIC <-link-> Mlx NIC 
-> DPDK plug-in -> VPP -> VCL+LDP <--> Iperf3 server
Machine A <---> Machine B

Distribution Ubuntu 20.04 LTS, kernel low latency customised, isolated all 
cores except two.

VPP version 21.10 recompiled natively on the machines.

I'm using DPDK not the RDMA driver.

What I'm observing is strange variations in throughput for the following 
scenario:

Iperf3 single tcp stream on one isolated core, VPP 8 cores pinned to 8 NIC 
queues

sometimes it is 15Gbps, sometimes it is 36Gbps ("show hardware" says 3 queues 
are used)

Hence I was a bit puzzled about RSS. I'm not expecting such large variations 
from run to run.

I'm not a VPP expert, so if you have suggestions on what to look for, they 
are welcome :-)

Thank you in advance for your patience and for your time

Kind regards

Federico




[vpp-dev] bihash_8_8 corruption issue

2022-11-16 Thread anirudhrhytm
Hello fd.io team,
We are using bihash_8_8 as part of our project.
What we see under load conditions is that part of the bucket metadata 
(BVT (clib_bihash_bucket)), i.e. the last 32 bits, seems to contain an 
unexpected value for one of the buckets.
These second 32 bits contain fields such as lock, linear_search and log2_pages. 
We can tell that the last 32 bits contain an unexpected value because we saw 
that log2_pages has a very high value (> 100).
Because the last 32 bits do not contain the expected data, some threads trying 
to search the hash table get stuck forever waiting for the lock.

#4  clib_bihash_lock_bucket_8_8 (b=0x7fd7f93b5208)
    at src/vppinfra/bihash_template.h:292
#5  clib_bihash_add_del_inline_with_hash_8_8 (arg=0x0, is_stale_cb=0x0, 
    is_add=1, hash=2708476431, add_v=0x7fcf819f0960, h=0x7fd7a2fad9c0)
    at src/vppinfra/bihash_template.c:710
#6  clib_bihash_add_del_inline_8_8 (arg=0x0, is_stale_cb=0x0, is_add=1, 
    add_v=0x7fcf819f0960, h=0x7fd7a2fad9c0)
    at src/vppinfra/bihash_template.c:989
#7  clib_bihash_add_del_8_8 (h=0x7fd7a2fad9c0, add_v=add_v@entry=0x7fcf819f0960,

#4  clib_bihash_lock_bucket_8_8 (b=0x7fd7f93b5208)
    at src/vppinfra/bihash_template.h:292
292       if (PREDICT_FALSE (old & mask.as_u64))

We are using the fd.io 21.06 release.
Have such issues been encountered before? Any ideas on what could be causing 
this issue?



