Hi all,

I've done some digging into my UDP problems and here's what I've
found.  I have a custom script that launches iperf at regular
intervals and parses the results, placing them into a database.  I
have some sanity checks in there to ensure that iperf doesn't spin
forever if there is a problem.  In short, it allows the test its
configured duration plus one extra minute, and kills the iperf process
if it runs longer than that.
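
Roughly, that watchdog boils down to something like this (a simplified
sketch; the iperf arguments and the parse/store step are placeholders,
not my actual script):

import subprocess

TEST_SECONDS = 10        # duration requested from iperf
GRACE_SECONDS = 60       # the extra minute before giving up

cmd = ["iperf-2.0.2", "-c", "<ip address>", "-u", "-d",
       "-l", "90", "-b", "90000", "-t", str(TEST_SECONDS)]
try:
    result = subprocess.run(cmd, capture_output=True, text=True,
                            timeout=TEST_SECONDS + GRACE_SECONDS)
    # parse result.stdout and insert the numbers into the database here
except subprocess.TimeoutExpired:
    # iperf ran longer than test time + 1 minute; run() has already
    # killed the process, so just record the failure and move on
    pass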

What appears to be happening is that when doing UDP testing
(dualtest), the final packets do not always make it to the server, so
the server leaves the socket open indefinitely.  The client side
finishes up without any real problem, apart from an error about the
final results being missing.  In subsequent tests, the time recorded
for the upstream test keeps growing, and we end up with something
insane like this:

[EMAIL PROTECTED] ~]$ iperf-2.0.2 -c <ip address> -d -l 90 -b 90000 -u
WARNING: option -b implies udp testing
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 90 byte datagrams
UDP buffer size:   107 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to <ip address>, UDP port 5001
Sending 90 byte datagrams
UDP buffer size:   107 KByte (default)
------------------------------------------------------------
[  4] local 192.168.254.1 port 33547 connected with <ip address> port 5001
[  3] local 192.168.254.1 port 5001 connected with <ip address> port 34735
[  4]  0.0-10.0 sec    110 KBytes  90.0 Kbits/sec
[  4] Sent 1251 datagrams
[  4] Server Report:
[  4]  0.0-7232.8 sec    315 KBytes    357 bits/sec  1.538 ms  168/ 3750 (4.5%)
[  4]  0.0-7232.8 sec  1 datagrams received out-of-order
[  3]  0.0-10.0 sec    110 KBytes  90.1 Kbits/sec  0.335 ms    0/ 1250 (0%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order

I counted: it took 10 seconds to get this result, yet the upstream
server report shows over 7000 seconds.
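
For now, the parser could at least catch and discard these by
sanity-checking the reported interval against the requested test
length.  A rough sketch (the regex and threshold are illustrative,
not what I actually run):

import re

TEST_SECONDS = 10      # how long the test was actually run for
SLACK = 5              # allow a few seconds of slop

def server_report_is_sane(line):
    # report lines look like: "[  4]  0.0-7232.8 sec    315 KBytes ..."
    m = re.search(r"0\.0-\s*([\d.]+)\s+sec", line)
    if not m:
        return False
    return float(m.group(1)) <= TEST_SECONDS + SLACK

# the 7232.8-second report above would fail this check and be thrown
# away instead of being written to the database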

My guess is that the results are coming from a previously opened
socket, because the upstream side always connects to port 5001.  Note:
I'm watching the output from netstat, and the "stuck" sockets are
still there, with the same source ports on the client side:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State        PID/Program name
tcp        1      0 <server ip>:12865       <client ip>:60739       CLOSE_WAIT   25429/netserver
tcp        1      0 <server ip>:12865       <client ip>:61707       CLOSE_WAIT   20195/netserver
udp        0      0 0.0.0.0:5001            0.0.0.0:*                            2128/iperf-2.0.2
udp        0      0 <server ip>:5001        <client ip>:61637       ESTABLISHED  2128/iperf-2.0.2
udp        0      0 <server ip>:5001        <client ip>:61557       ESTABLISHED  2128/iperf-2.0.2
udp        0      0 <server ip>:5001        <client ip>:61394       ESTABLISHED  2128/iperf-2.0.2
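
If it helps, the same check I do by eye with netstat could be automated
from the script; a minimal sketch, assuming a Linux netstat whose -anp
output looks like the listing above:

import subprocess

def count_stuck_udp_sockets(port=5001):
    # -a: all sockets, -n: numeric addresses, -p: owning PID/program
    out = subprocess.run(["netstat", "-anp"], capture_output=True,
                         text=True).stdout
    stuck = [line for line in out.splitlines()
             if line.startswith("udp")
             and (":%d " % port) in line
             and "ESTABLISHED" in line]
    return len(stuck)

# logging this count alongside each test run would show the stuck
# sockets piling up between runs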

Is there any way to prevent this?

Thanks,

-- 
Jason 'XenoPhage' Frisvold
[EMAIL PROTECTED]
http://blog.godshell.com
