Howdy!

When you run UDP tests, you should never rely on the results reported by the sending peer[*]. Any network node between the peers is free to drop packets under congestion. TCP resolves this with retransmissions, and the application normally notices it as lower throughput. Even the sending peer will detect the lower end-to-end throughput after some delay (depending on the Tx buffer size and the TCP window size), so in the long run the results reported by the sending and receiving peers should be roughly the same. UDP, however, does not implement retransmissions at all. This means the application at the receiving peer will see missing (dropped) packets, while the application at the sending peer will have no idea about them[**].

If you're testing with UDP, I'd suggest running single-direction tests only and alternating the peer configuration: first run the Helsinki peer as client and the London peer as server (the results reported by the London peer tell you the bandwidth from FI to UK), then run the Helsinki peer as server and the London peer as client (the results reported by the Helsinki peer tell you the bandwidth from UK to FI).
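For example, reusing the addresses from your test below (the london$/helsinki$ prompts are just labels for which box runs what; note there is no -d flag, so each run measures a single direction):

  london$   iperf -s -u -fm
  helsinki$ iperf -c 192.168.1.2 -u -fm -t600 -b 100m   # FI -> UK

  helsinki$ iperf -s -u -fm
  london$   iperf -c 192.168.1.1 -u -fm -t600 -b 100m   # UK -> FI

In each run, trust the numbers printed on the server (receiving) side.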

[*] The only time the sending peer will report correct throughput is when the first leg is the bottleneck, because the IP stack on the sending peer will not drop packets but rather throttle the application (Tx buffer full) - this can happen with dial-up setups.

[**] When using UDP as the transport protocol, it's up to the application to detect any inconsistencies - such as dropped packets or out-of-order delivery - and to react to them as desired. One app that does all of this is NFS, which traditionally ran over UDP. Recent implementations avoid the burden of data integrity checking by using TCP as the transport protocol.
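To illustrate [**], here's a minimal sketch (Python; my own toy code, not how iperf is actually implemented, and the 1470-byte payload just mimics your test) of an application numbering its datagrams so the receiver can count drops and reordering itself - conceptually where iperf's Lost/Total and out-of-order counters come from:

  # Toy example, not iperf source: detect UDP loss/reordering in the app
  import socket, struct

  PORT = 5001                      # iperf's default port, reused here

  def sender(host, count=1000):
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      for seq in range(count):
          # 4-byte sequence number + padding to a 1470-byte datagram;
          # sendto() "succeeds" even if the net drops the packet later
          s.sendto(struct.pack('!I', seq) + b'x' * 1466, (host, PORT))

  def receiver():
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.bind(('', PORT))
      expected = lost = reordered = 0
      while True:
          data, peer = s.recvfrom(2048)
          (seq,) = struct.unpack('!I', data[:4])
          if seq > expected:       # gap in sequence -> datagrams lost
              lost += seq - expected
          elif seq < expected:     # late / out-of-order arrival
              reordered += 1
          expected = max(expected, seq + 1)
          print('lost so far: %d, out-of-order: %d' % (lost, reordered))

TCP gives you all of this (plus retransmission) for free; over UDP the application has to do it itself, which is exactly why the sender-side iperf number alone can't be trusted.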

Peace!
 Mkx

-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

------------------------------------------------------------------------

BOFH excuse #127:

Sticky bits on disk.



On 12/02/10 01:18, martin wrote:
I did an iperf test from one Linux server to another. One server is in
Helsinki, the other is in London, and they are connected with a VPN
tunnel (should be a 100Mbps/100Mbps connection). I started the iperf
server in London (192.168.1.2) and the iperf client in Helsinki
(192.168.1.1). I started the server with the command "iperf -s -u -fm"
and the client with "iperf -c 192.168.1.2 -fm -d -t600 -u -b 100m".
The output was the following:

#iperf -c 192.168.1.2 -fm -d -t600 -u -b 100m
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 0.10 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.10 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 44456 connected with 192.168.1.2 port 5001
[  3] local 192.168.1.1 port 5001 connected with 192.168.1.2 port 31435

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-600.0 sec  7189 MBytes    101 Mbits/sec
[  4] Sent 5128207 datagrams
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-600.2 sec  2396 MBytes  33.5 Mbits/sec  0.244 ms 3419123/5128199 (67%)
[  4] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  4]  0.0-600.1 sec  2411 MBytes  33.7 Mbits/sec  0.199 ms 3408321/5128207 (66%)
[  4]  0.0-600.1 sec  299 datagrams received out-of-order

How can the bandwidth from Helsinki to London be 101 Mbits/sec when
the iperf server in London reports 33.7 Mbits/sec? How can there be
such huge packet loss (67% from London to Helsinki and 66% from
Helsinki to London)? I would appreciate any comments about the inner
workings of iperf or explanations of the iperf output :)

Thank you in advance!!

