Hi, I am using iperf3 as a client on an Ubuntu 18.04 Linux machine connected to another server running VPP, with 100G NICs on both ends. Both servers are Intel Xeon machines with 24 cores. I am trying to push 100G of traffic from the Linux iperf3 TCP client by starting 10 parallel iperf3 connections on different port numbers and pinning them to different cores on the sender side. On the VPP receiver side I have 10 worker threads and 10 DPDK rx-queues, and I run iperf3 through the VCL library as follows:
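The ten server instances below differ only in core mask and port, so for clarity here is a small dry-run loop that generates the launch commands (cores 10-19, ports 9000-9009; the preload and VCL config paths are the ones from my setup). The loop only prints the commands; pipe its output to sh to actually start the servers:

```shell
#!/bin/sh
# Dry-run: print the ten pinned iperf3 server launch commands.
# Cores 10-19 give masks 0x00400 .. 0x80000; ports are 9000-9009.
VCL_SO=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so
VCL_CFG=/etc/vpp/vcl.conf
i=0
while [ $i -lt 10 ]; do
    mask=$(printf '0x%05x' $((1 << (10 + i))))   # affinity mask for core 10+i
    port=$((9000 + i))
    echo "taskset $mask sh -c \"LD_PRELOAD=$VCL_SO VCL_CONFIG=$VCL_CFG iperf3 -s -4 -p $port\" &"
    i=$((i + 1))
done
```

The first two commands it prints match the ones pasted below.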
taskset 0x00400 sh -c "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s -4 -p 9000" &
taskset 0x00800 sh -c "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s -4 -p 9001" &
taskset 0x01000 sh -c "LD_PRELOAD=/usr/lib/x86_64 ...

The MTU is set to 9216 everywhere, and the TCP MSS is set to 8200 on the client:

taskset 0x0001 iperf3 -c 10.1.1.102 -M 8200 -Z -t 6000 -p 9000
taskset 0x0002 iperf3 -c 10.1.1.102 -M 8200 -Z -t 6000 -p 9001
...

I am not able to push beyond 50-60G. I tried different values for the VCL rx-fifo-size: 64K, 256K and 1M. With the 1M fifo size, the TCP latency reported by the client increases, but bandwidth does not improve significantly. Are there any suggestions for reaching 100G? I am using a VPP build from master.

Pasting below the contents of the VPP startup and VCL config files:

cpu { main-core 0 workers 10 }
buffers { buffers-per-numa 65536 default data-size 9216 }
dpdk {
  dev 0000:15:00.0 { name eth0 num-rx-queues 10 }
  enable-tcp-udp-checksum
}
session { evt_qs_memfd_seg }
socksvr { socket-name /tmp/vpp-api.sock }
tcp { mtu 9216 max-rx-fifo 262144 }

vcl.conf:

vcl {
  max-workers 1
  rx-fifo-size 262144
  tx-fifo-size 262144
  app-scope-local
  app-scope-global
  api-socket-name /tmp/vpp-api.sock
}

Thanks,
Vijay
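P.S. As a sanity check on the fifo sizes I tried, here is a back-of-envelope bandwidth-delay-product estimate. The 100 microsecond RTT is an assumption for illustration, not a measurement from my setup:

```shell
#!/bin/sh
# Per-flow bandwidth-delay product, assuming each of the 10 flows
# should carry 10 Gbit/s and an (assumed, not measured) RTT of 100 us.
rate_bps=10000000000      # 10 Gbit/s per flow = 100G / 10 flows
rtt_us=100                # assumed RTT; measure the real one with ping
bdp_bytes=$((rate_bps * rtt_us / 1000000 / 8))
echo "per-flow BDP: $bdp_bytes bytes"   # prints: per-flow BDP: 125000 bytes
```

With these assumed numbers a 64K fifo is well below one BDP per flow; if the real RTT is larger, the fifo size needed to keep each pipe full grows proportionally.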
-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#17378): https://lists.fd.io/g/vpp-dev/message/17378 Mute This Topic: https://lists.fd.io/mt/76783803/21656 Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-