Hi folks,

I am doing some throughput testing with the VPP 18.01 TCP stack, and I keep hitting a "stuck clients" error for what feels like a pretty modest setup.


* Two containers are connected via a VPP vSwitch with memif interfaces.
* In the default namespace I am running VPP 18.01 as the vSwitch (binary packaging).
* In the containers I am running VPP 18.01 built from source (make run etc.).
* The containers connect to the VPP vSwitch via memif.
* Pinging between containers works fine, as does throughput testing with small amounts of traffic.
* When I run the throughput test with a larger amount of traffic (16 MB), I get a 'stuck clients' error.
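For completeness, the vSwitch side is configured roughly along these lines. This is reconstructed from memory, so the interface names, socket paths, and bridge-domain id shown here may not match my actual setup exactly:

vpp# create memif socket /var/sockets/cone.socket master
vpp# create memif socket /var/sockets/ctwo.socket master
vpp# set interface state memif0/0 up
vpp# set interface state memif1/0 up
vpp# set interface l2 bridge memif0/0 1
vpp# set interface l2 bridge memif1/0 1

Since both containers sit in 192.168.1.0/24, the vSwitch just L2-bridges the two memif interfaces.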

What is the fastest way to go about debugging this? One thing I notice in the session dump below is that the sender's Tx fifo looks completely full (cursize 65536 of nitems 65536) while the Rx fifo has drained, but I am not sure what to conclude from that.

Ray K

------- Container One -------

DBGvpp# create memif socket /var/sockets/cone.socket slave
DBGvpp# set interface ip address memif0/0 192.168.1.1/24
DBGvpp# set interface state memif0/0 up
DBGvpp# ping 192.168.1.2
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=78.8152 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=59.1398 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=74.4145 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=65.0234 ms

Statistics: 5 sent, 4 received, 20% packet loss
DBGvpp# test tcp server uri tcp://192.168.1.1/9000
DBGvpp# 0: builtin_server_rx_callback:201: session stuck: [#0][T] 192.168.1.1:9000->192.168.1.2:10483 ESTABLISHED
 flags:  timers: [RETRANSMIT]
snd_una 5589193 snd_nxt 5653453 snd_una_max 5653453 rcv_nxt 5655425 rcv_las 5655425
 snd_wnd 65536 rcv_wnd 64840 snd_wl1 5653997 snd_wl2 5589193
 flight size 64260 send space 1276 rcv_wnd_av 64840
 cong none cwnd 546924 ssthresh 524288 rtx_bytes 0 bytes_acked 0
prev_ssthresh 0 snd_congestion 2187035221 dupack 0 limited_transmit 2187035221
 tsecr 4041949 tsecr_last_ack 4041949
 rto 200 rto_boff 0 srtt 76 rttvar 1 rtt_ts 4042025 rtt_seq 2113522696
 tsval_recent 4041985 tsval_recent_age 2
 scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
 last_bytes_delivered 0 high_sacked 0 snd_una_adv 0
 cur_rxt_hole 4294967295 high_rxt 0 rescue_rxt 0
 Rx fifo: cursize 696 nitems 65536 has_event 1
 head 18632 tail 19328
 ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 65536 nitems 65536 has_event 1
 head 18632 tail 18632
 ooo pool 0 active elts newest 4294967295

------- Container Two -------

DBGvpp# create memif socket /var/sockets/ctwo.socket slave
DBGvpp# set interface ip address memif0/0 192.168.1.2/24
DBGvpp# set interface state memif0/0 up
DBGvpp# test tcp client ?
test tcp clients test tcp clients [nclients %d] [[m|g]bytes <bytes>] [test-timeout <time>][syn-timeout <time>][no-return][fifo-size <size>][private-segment-count <count>][private-segment-size <bytes>[m|g]][preallocate-fifos][preallocate-sessions][client-batch <batch-size>][uri <tcp://ip/port>][test-bytes][no-output]
DBGvpp# test tcp clients nclients 1 mbytes 16
0: transport_alloc_local_endpoint:293: no resolving interface for 6.0.1.1
DBGvpp# test tcp clients nclients 1 mbytes 16 uri tcp://192.168.1.1/9000
1 three-way handshakes in .09 seconds 10.51/s
Test started at 350.388513
0: builtin_client_node_fn:216: stuck clients
Timeout with 1 sessions still active...
test failed
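
In case it helps, this is what I was planning to collect next on both sides (command names as I remember them from the 18.01 CLI, so apologies if any are slightly off):

DBGvpp# show session verbose 2
DBGvpp# show errors
DBGvpp# trace add memif-input 50
DBGvpp# show trace

I could also re-run the client with a larger fifo (the help output above lists a fifo-size option), e.g. "test tcp clients nclients 1 mbytes 16 fifo-size 128 uri tcp://192.168.1.1/9000", in case the default 64 KB fifos are part of the problem.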

------- Startup.conf -------

vagrant@ctwo:~/vpp$ cat ~/startup.conf
plugins {
  plugin dpdk_plugin.so { disable }
}
heapsize 64M

unix {
  interactive
  nodaemon
  log /tmp/vpp.log
  full-coredump
}

api-trace {
  on
}

api-segment {
  gid vpp
}
