Re: [vpp-dev] TLS configuration & throughput

2020-02-28 Thread dchons via Lists.Fd.Io
Hi Florin, Thanks for the info. I pulled the latest and retried; things seem a bit better now from a stability perspective. The performance issue looks like a CPU limitation on the client side (10K loops/s) based on the results. I have these: *Client*: Intel Xeon CPU E5-2650 v2 @ 2.60GHz -

Re: [vpp-dev] TLS configuration & throughput

2020-02-28 Thread dchons via Lists.Fd.Io
Hi Florin, I got another test run and was able to do the *clear run; show run* as you suggested, about 10 seconds into the test run just before it ended. I'm not sure where to see the loops/s stat, so I've pasted the output from both client and server below, if you would not mind pointing out what
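
For reference, the loops/s figure appears in the header line that *show run* prints after a *clear run*; a minimal sketch of the sequence, with the header wording from memory (it varies by VPP version):

    vpp# clear run
    ... let the test run for ~10 seconds ...
    vpp# show run
    Time 10.0, 10 sec internal node vector rate 42.42 loops/sec 1.00e4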

Re: [vpp-dev] TLS configuration & throughput

2020-02-26 Thread dchons via Lists.Fd.Io
Hi Florin, Thanks once again! I was in the middle of collecting a bunch of information to respond (basically nothing interesting in the logs, and the client does not crash; it just sits there), and then on one miraculous run it actually worked. I was hoping for a bit more performance (I only got

Re: [vpp-dev] TLS configuration & throughput

2020-02-26 Thread dchons via Lists.Fd.Io
Hi Florin, Thanks so much for trying this out and for the suggestions. Unfortunately this isn't working in my setup. Here's what I did just to make sure I'm not missing anything. I generated the key and cert as follows: *openssl req -newkey rsa:2048 -nodes -keyout ldp.key -x509 -days 365 -out
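
For reference, a complete invocation of that sort would also name the certificate output file; a sketch, where the ldp.crt filename and the -subj value are assumptions rather than what the thread actually used:

    # self-signed key + cert for the TLS test; output filename and subject assumed
    openssl req -newkey rsa:2048 -nodes -keyout ldp.key \
        -x509 -days 365 -out ldp.crt -subj "/CN=vpp-test"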

Re: [vpp-dev] TLS configuration & throughput

2020-02-25 Thread dchons via Lists.Fd.Io
Hi Florin, Thank you for your response. I used different rx/tx buffer sizes and it didn't really make a difference. For this stage, it's good enough to know that there are known performance limitations, thank you again for your help. Regards, Dom

[vpp-dev] TLS configuration & throughput

2020-02-25 Thread dchons via Lists.Fd.Io
Hello, I'm trying to get an idea of TLS throughput using openssl without hardware acceleration, and I'm using the vpp_echo application as follows: *Server:* taskset --cpu-list 4,6,8 ./vpp_echo socket-name /tmp/vpp-api.sock uri tls://10.0.0.71/ fifo-size 200 uni RX=50Gb TX=0 stats 1
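
For reference, a client-side counterpart to that server invocation might look like the sketch below; the *client* keyword, the port number, and the reversed RX/TX figures are assumptions about vpp_echo usage, not taken from the thread:

    # hypothetical client: sends 50 Gb unidirectionally to the server above
    taskset --cpu-list 4,6,8 ./vpp_echo client socket-name /tmp/vpp-api.sock \
        uri tls://10.0.0.71/1234 fifo-size 200 uni RX=0 TX=50Gb stats 1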

Re: [vpp-dev] vpp_echo UDP performance

2020-02-20 Thread dchons via Lists.Fd.Io
Hi Florin, I'm not sure why the echo app was crashing on me yesterday: I built the debug version (no crash), so I rebuilt the release version, and again no crash. For some reason, even though udp-cksum now shows as an active rx offload, the function ip4_local_check_l4_csum is still being called,

Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi again, For what it's worth, I added a hack in src/plugins/dpdk/device/init.c and set xd->port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_UDP_CKSUM, and now I have:

vpp# sh hardware-interfaces
              Name                Idx   Link  Hardware
TenGigabitEthernet5/0/0            1    down

Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin, Thanks for the suggestion. It looks like in my case *ip4_local_l4_csum_validate* is being called:

Breakpoint 1, ip4_local_l4_csum_validate (vm=0x7fffb4ecef40, p=0x10026d9980,
    ip=0x10026d9a8e, is_udp=1 '\001', error=0x7fffb517b1d8 "\016\023",
    good_tcp_udp=0x7fffb517b19d
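
For reference, a breakpoint like that is typically set by attaching gdb to the running vpp process; a sketch, where the process-lookup detail is an assumption:

    $ sudo gdb -p $(pgrep -o vpp)            # attach to the vpp main process
    (gdb) break ip4_local_l4_csum_validate
    (gdb) continue
    # the breakpoint firing under traffic shows checksums are validated in software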

Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
** Edit **: Corrected typo: udp-cksum is not active in rx-offloads, but is active in tx-offloads. Hi Florin, Thanks for the response. I'm not so concerned about the packet drops (as you point out, it is to be expected); however, increasing the number of rx descriptors did

Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin, Same NIC on both machines:

root# dpdk-devbind --status
Network devices using DPDK-compatible driver
0000:05:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=uio_pci_generic unused=ixgbe

Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin, Thanks for the response. I'm not so concerned about the packet drops (as you point out, it is to be expected); however, increasing the number of rx descriptors did help a lot, so thank you very much for that! I'm still at around 6.5 Gbps; "sh session verbose" shows the following:

[vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hello, I've been trying to do some basic performance testing on 20.01 using the vpp_echo application, and while I'm getting the expected performance with TCP, I'm not quite able to achieve what I would expect with UDP. The NICs are 10G X520, and on TCP I get around 9.5 Gbps, but with UDP I get
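
For reference, vpp_echo selects the transport via the URI scheme, so the TCP and UDP runs differ only in the uri argument; a sketch with a hypothetical address and port:

    ./vpp_echo socket-name /tmp/vpp-api.sock uri tcp://10.0.0.71/1234 ...
    ./vpp_echo socket-name /tmp/vpp-api.sock uri udp://10.0.0.71/1234 ...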

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-16 Thread dchons
Hi Florin, From my logs it seems that TSO is not on even when using the native driver; logs attached below. I'm going to do a deeper dive into the various networking layers involved in this setup, and will post any interesting findings back on this thread. Thank you for all the help so far!

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-13 Thread dchons
Hi, I rebuilt VPP on master and updated startup.conf to enable tso as follows:

dpdk {
  dev 0000:00:03.0 {
    num-rx-desc 2048
    num-tx-desc 2048
    tso on
  }
  uio-driver vfio-pci
  enable-tcp-udp-checksum
}

I'm not sure whether it is working or not; there is nothing in show session verbose 2 to indicate

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-12 Thread dchons
Hi Florin, The saga continues: a little progress and more questions. In order to reduce the variables, I am now only using VPP on one of the VMs: the iperf3 server is running on a VM with native Linux networking, and the iperf3+VCL client is running on the second VM. I've pasted the output from a few
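
For reference, an unmodified iperf3 is usually pointed at VPP through the VCL LD_PRELOAD shim; a sketch, where the library path, config path, and server address are assumptions that vary by install:

    VCL_CONFIG=/etc/vpp/vcl.conf \
    LD_PRELOAD=/usr/lib/libvcl_ldpreload.so \
    iperf3 -c 10.0.0.1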

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-06 Thread dchons
Hi Florin, Some progress, at least with the built-in echo app; thank you for all the suggestions so far! By adjusting the fifo-size and testing in half-duplex I was able to get close to 5 Gbps between the two OpenStack instances using the built-in test echo app: vpp# test echo clients gbytes
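
For reference, a fuller form of that built-in client command might look like the following sketch; the byte count, fifo-size, and URI are hypothetical values, not the ones used in the thread:

    vpp# test echo clients gbytes 1 no-return fifo-size 4096 uri tcp://10.0.0.2/1234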

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread dchons
Hi Florin, Those are tcp echo results. Note that the "show session verbose 2" command was issued while there was still traffic being sent. Interesting that on the client (sender) side the tx fifo is full (cursize 65534 nitems 65534) and on the server (receiver) side the rx fifo is empty

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread dchons
It turns out I was using DPDK virtio. With help from Moshin I changed the configuration and tried to repeat the tests using VPP native virtio; results are similar, but there are some interesting new observations, which I'm sharing here in case they are useful to others or trigger any ideas. After
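
For reference, a VPP native virtio interface is created from the CLI by PCI address; a sketch, where the address, the resulting interface name, and the IP are assumptions:

    vpp# create interface virtio 0000:00:03.0
    vpp# set interface state virtio-0/0/3/0 up
    vpp# set interface ip address virtio-0/0/3/0 10.0.0.2/24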

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread dchons
Hi, Thank you Florin and Jerome for your time, very much appreciated. * For VCL configuration, FIFO sizes are 16 MB * "show session verbose 2" does not indicate any retransmissions. Here are the numbers during a test run where approx. 9 GB were transferred (the difference in values between
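
For reference, the 16 MB FIFO sizes mentioned correspond to a vcl.conf stanza along these lines (a sketch; the file location varies by setup):

    vcl {
      rx-fifo-size 16777216
      tx-fifo-size 16777216
    }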

[vpp-dev] VPP / tcp_echo performance

2019-12-03 Thread dchons
Hi all, I've been running some performance tests and not quite getting the results I was hoping for, and I have a couple of related questions that I'm hoping someone can help with. For context, here's a summary of the results of TCP tests I've run on two VMs (CentOS 7 OpenStack

[vpp-dev] tcp_echo

2019-11-22 Thread dchons
Hello, I'm trying to run a simple test using the tcp_echo application on 19.08.1-release, but it just crashes pretty early on. I built a debug version and tried to debug the crash, which happens in mspace_malloc:

void* mspace_malloc (mspace msp, size_t bytes)
{
  mstate ms = (mstate) msp;
  if

Re: [vpp-dev] Need Help on an ipsec Problem

2019-08-16 Thread dchons
Hi Bin & Neale, I know this exchange is a couple of years old now, but I'm having the exact same issue, where the ping reply from VPP is being sent in cleartext rather than going through the IPSec tunnel. Could you provide some information on how this was resolved? Thanks in advance. Dom

[vpp-dev] Trouble with tunnel mode IPSec to VPP

2019-08-15 Thread dchons
Hello again, My apologies if this is not the correct place for this kind of question; I'm relatively new to VPP. I would really appreciate any suggestions as to why the response to a ping that was received over an IPSec tunnel is not going through the tunnel as

[vpp-dev] Trouble with tunnel mode IPSec to VPP

2019-08-12 Thread dchons
Hello devs, I've been trying to establish an IPSec tunnel between libreswan and VPP using IKEv2. I'm able to get the tunnel established and packets coming in to VPP decrypted, but it looks like outbound packets from VPP are not going through IPSec. The VPP trace is shown below, where I can see
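
For reference, a trace like the one mentioned is typically captured from the VPP CLI as follows; a sketch assuming a DPDK interface (hence the dpdk-input capture node) and an arbitrary packet count:

    vpp# clear trace
    vpp# trace add dpdk-input 50
    ... send the ping from the libreswan side ...
    vpp# show trace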