Hi!
I tried to replicate your tests ... using 3G HSPA+, with iperf 2.0.4 on
Debian on the Internet side and jperf (iperf 2.0.2) on Windows XP on the
dial-up side.
I can see the delay jitter reported by the receiving side (in my case it
averages around 7.5 ms in either direction when pushing 1.8 Mbps of
UDP), but I don't see any information about delay. Perhaps your version
of iperf words its output differently? 8.35 ms seems fine for
delay jitter.
Ping RTT is 36.06 +- 4.82 ms (100 pings) with small packets in my case.
It varies a lot depending on the 3G technology used (R99 UMTS vs.
HSPA+), the particular terminal in use (make and firmware), and the
packet size (see graphs http://www.mkx.si/ping-rtt.png (3G UMTS) and
http://www.mkx.si/ping-hs-rtt.png (3G HSPA)).
The most interesting feature of the two graphs is the minimum RTT, shown
in red on the 3G UMTS graph. It clearly shows the 10 ms TTI nature of 3G
(the steps are 20 ms, i.e. 10 ms in each direction), which explains a
fairly large proportion of the delay jitter as measured by iperf.
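As a back-of-the-envelope illustration (not derived from the graphs
themselves): if the radio link only transmits at fixed 10 ms TTI
boundaries, a packet that becomes ready between boundaries waits for the
next one, so scheduling alone adds a uniform 0-10 ms per direction on
top of the base delay. A small simulation of that waiting time:

```python
# Hypothetical illustration of TTI quantization: a packet ready at time t
# departs at the next 10 ms boundary, adding a uniform 0..10 ms wait.
import random

TTI = 0.010  # 10 ms transmission time interval (3G R99)

def departure_wait(ready_time, tti=TTI):
    """Extra wait until the next TTI boundary."""
    return (-ready_time) % tti

random.seed(1)
waits = [departure_wait(random.uniform(0, 1)) for _ in range(10000)]
avg_ms = 1000 * sum(waits) / len(waits)
print(f"average extra wait: {avg_ms:.2f} ms per direction")
```

For randomly timed packets the mean extra wait tends toward ~5 ms per
direction, in the same ballpark as the jitter figures above.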
Another interesting feature is the missing data on the HSPA graph for
packet sizes between roughly 1300 and 1500 bytes: this is due to one
particular IP stack implementation that would not respond to an ICMP
echo request if the packet size was in that range.
The problem with measuring RTT is that the results get messy if the
bandwidth required (for the packets sent back and forth, usually ICMP)
exceeds the bandwidth available. Packets then get buffered and the times
grow very large, which skews the statistics.
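If the offered load stays below the link capacity, summarizing ping
results as mean +- standard deviation (as in the 36.06 +- 4.82 ms figure
above) is straightforward. A quick sketch; the RTT samples below are
made up for illustration:

```python
# Summarise ping RTT samples as mean +/- sample standard deviation.
# The sample values here are invented, not measured.
import statistics

rtt_ms = [34.1, 35.8, 36.5, 33.9, 41.2, 36.0, 38.7, 35.4, 36.3, 34.7]

mean = statistics.mean(rtt_ms)
stdev = statistics.stdev(rtt_ms)  # sample (n-1) standard deviation
print(f"RTT {mean:.2f} +/- {stdev:.2f} ms ({len(rtt_ms)} pings)")
```

To avoid the buffering problem, keep the per-direction ICMP load
(packet size in bits divided by the ping interval) well under the
slower direction's capacity.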
Peace!
Mkx
-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
------------------------------------------------------------------------
BOFH excuse #127:
Sticky bits on disk.
Tobi Hofer wrote:
Hello everyone,
I have some problems with the delay and jitter from iperf. How does
iperf calculate the jitter and delay? When I measure them with UDP over
100 Mb Ethernet, the results are OK (avg delay: 124 us, avg jitter:
31 us).
I'm working on my bachelor's thesis and I have to measure the delay,
jitter, and packet loss ratio over a PPP device (3G) to a server on the
Internet in both directions, up- and downlink.
Iperf UDP downlink, 1800k (no loss):
avg delay: 8.35 ms
avg jitter: 0.43 ms (standard deviation)
When I vary the bitrate, the change is about 1 or 2 ms.
This result can't be correct, because with thrulay the UDP delay
averages 75 ms, and with NDT I get an avg RTT of 1142 ms. From Internet
research, the delay over 3G is between 70 ms and 180 ms.
How does iperf calculate the delay? Is it a problem over 3G UMTS?
I work with iperf 2.0.4-4 for UDP; the computers run Ubuntu 9.04.
Thanks
Regards
Tobias
------------------------------------------------------------------------------
Join us December 9, 2009 for the Red Hat Virtual Experience,
a free event focused on virtualization and cloud computing.
Attend in-depth sessions from your desk. Your couch. Anywhere.
http://p.sf.net/sfu/redhat-sfdev2dev
_______________________________________________
Iperf-users mailing list
Iperf-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iperf-users