Hi Vijay, 

Cool experiment. More inline. 

> On Sep 11, 2020, at 9:42 AM, Vijay Sampath <vsamp...@gmail.com> wrote:
> 
> Hi,
> 
> I am using iperf3 as a client on an Ubuntu 18.04 Linux machine connected to 
> another server running VPP using 100G NICs. Both servers are Intel Xeon with 
> 24 cores.

May I ask what the frequency of those cores is? Also, what type of NIC are you 
using?

> I am trying to push 100G traffic from the iperf Linux TCP client by starting 
> 10 parallel iperf connections on different port numbers and pinning them to 
> different cores on the sender side. On the VPP receiver side I have 10 worker 
> threads and 10 rx-queues in dpdk, and running iperf3 using VCL library as 
> follows
> 
> taskset 0x00400 sh -c 
> "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so 
> VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s -4 -p 9000" &
> taskset 0x00800 sh -c 
> "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so 
> VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s -4 -p 9001" &
> taskset 0x01000 sh -c "LD_PRELOAD=/usr/lib/x86_64
> ...
> 
> MTU is set to 9216 everywhere, and TCP MSS set to 8200 on client:
> 
> taskset 0x0001 iperf3 -c 10.1.1.102 -M 8200 -Z -t 6000 -p 9000
> taskset 0x0002 iperf3 -c 10.1.1.102 -M 8200 -Z -t 6000 -p 9001
> ...

Could you first try with only one iperf server/client pair, just to see where 
performance lands with a single connection?
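That is, just the first pair from your commands above (same paths and pinning 
as in your setup; shorten -t if you only want a quick number):

  # VPP side: one VCL-preloaded iperf3 server
  taskset 0x00400 sh -c "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s -4 -p 9000" &

  # Linux side: one client
  taskset 0x0001 iperf3 -c 10.1.1.102 -M 8200 -Z -t 6000 -p 9000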

> 
> I see that I am not able to push beyond 50-60G. I tried different sizes for 
> the vcl rx-fifo-size - 64K, 256K and 1M. With 1M fifo size, I see that tcp 
> latency as reported on the client increases, but not a significant 
> improvement in bandwidth. Are there any suggestions to achieve 100G 
> bandwidth? I am using a vpp build from master.

That depends a lot on how many connections you’re running in parallel. With 
only one connection, buffer occupancy might go up, so 1-2MB fifos might work 
better.

Could you also check how busy vpp is: do a “clear run”, wait at least 1 second, 
and then do a “show run”. That will give you per-node, per-worker vector rates. 
If they go above ~100 vectors/dispatch, the workers are busy, so you could 
increase their number and, implicitly, spread the sessions over more workers. 
Note however that RSS is not perfect, so you may end up with more connections 
on some workers than on others.
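Roughly, from the shell (assuming vppctl talks to your instance over the 
default socket):

  vppctl clear run
  sleep 1
  vppctl show run
  # check the Vectors/Call column for dpdk-input, ip4-input, tcp4-established,
  # etc. on each worker; sustained values above ~100 (256 is the max frame
  # size) mean that worker is close to saturation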

> 
> Pasting below the output of vpp and vcl conf files:
> 
> cpu {
>   main-core 0
>   workers 10

You can pin vpp’s workers to cores with corelist-workers (e.g. corelist-workers 
c1,c3-cN) to avoid overlap with iperf. You might want to start with 1 worker 
and work your way up from there. In my testing, 1 worker is enough to saturate 
a 40Gbps NIC with 1 iperf connection. You may need a couple more to reach 
100Gbps, but I wouldn’t expect many more (see the sketch after this stanza).

> }
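As a starting point, something along these lines (the core ids are just an 
illustration; pick cores that don’t overlap with the ones you taskset iperf 
to):

  cpu {
    main-core 0
    corelist-workers 1-4
  }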
> 
> buffers {
>   buffers-per-numa 65536

Unless you need the buffers for something else, 16k might be enough (see the 
sketch after this stanza).

>   default data-size 9216

Hm, no idea about the impact of this on performance. The session layer can 
build chained buffers, so you could also try with this removed to see if it 
changes anything.

> }
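Putting those two comments together, a buffers stanza to try might simply be 
(data-size left out on the assumption the default is fine):

  buffers {
    buffers-per-numa 16384
  }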
> 
> dpdk {
>   dev 0000:15:00.0 {
>         name eth0
>         num-rx-queues 10

Keep this in sync with the number of workers (example after the stanza).

>   }
>   enable-tcp-udp-checksum

This enables software checksumming. For better performance you’ll have to 
remove it. It will be needed, however, if you want to turn TSO on.

> }
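For example, with the 4 workers from the cpu sketch above, and TSO left for 
later (the per-device tso knob is from memory, so please double-check it 
against your build’s startup.conf docs):

  dpdk {
    dev 0000:15:00.0 {
      name eth0
      num-rx-queues 4      # match the number of vpp workers
      # tso on             # only if you also keep enable-tcp-udp-checksum
    }
    # enable-tcp-udp-checksum   # sw checksum; leave it out for best performance
  }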
> 
> session {
>   evt_qs_memfd_seg
> }
> socksvr { socket-name /tmp/vpp-api.sock}
> 
> tcp {
>   mtu 9216
>   max-rx-fifo 262144

This is only used to compute the window scale factor. Given that your fifos 
might be larger, I would remove it. By default the value is 32MB and gives a 
wnd_scale of 10 (should be okay). 
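For what it’s worth, the scale factor is roughly the smallest k such that 
65535 * 2^k covers the configured maximum: 65535 * 2^9 = 33,553,920 bytes, 
which is just under 32MB (33,554,432), so k ends up at 10.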

> }
> 
> vcl.conf:
> vcl {
>   max-workers 1

No need to constrain it.

>   rx-fifo-size 262144
>   tx-fifo-size 262144

As previously mentioned, you can configure them to be larger.
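For instance, for the 1-2MB range mentioned above:

  rx-fifo-size 2097152
  tx-fifo-size 2097152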

Regards,
Florin

>   app-scope-local
>   app-scope-global
>   api-socket-name /tmp/vpp-api.sock
> }
> 
> Thanks,
> 
> Vijay
> 
