Thanks for your response.  We were coming to the conclusion that iperf must 
measure the buffer fill rate on the client side.  I am glad that you can 
confirm this.  On systems with larger buffers we can see the dynamics you 
described, where iperf initially fills the buffer rapidly and then settles to 
the link bandwidth as the test progresses.

When testing under UDP we are convinced that packets are flushed out of the 
buffer without ever being pulled into the radio.  We are having difficulty 
determining whether a similar buffer behavior is present with TCP traffic.  
Though such behavior may be better tolerated due to the TCP retry mechanism, I 
am concerned that it would at least trigger more retries at the TCP level than 
necessary.
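
A rough way to check that on a Linux sender (just a sketch; the RetransSegs 
field in /proc/net/snmp is standard, but note the counter is system-wide, so 
keep the host otherwise quiet) is to snapshot the kernel's TCP retransmission 
counter before and after a run:

# Sketch (Linux only): compare the TCP retransmit counter around an iperf run.
def tcp_retrans_segs():
    # /proc/net/snmp has two "Tcp:" lines: one with field names, one with values
    with open("/proc/net/snmp") as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    names, values = rows[0], rows[1]
    return int(values[names.index("RetransSegs")])

before = tcp_retrans_segs()
# ... run the iperf TCP test here ...
after = tcp_retrans_segs()
print("TCP segments retransmitted system-wide during the test:", after - before)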

Again, thanks.

Regards,
Mark

From: Metod Kozelj [mailto:metod.koz...@lugos.si]
Sent: Monday, February 25, 2013 8:43 AM
To: Hickman, Mark
Cc: iperf-users@lists.sourceforge.net
Subject: Re: [Iperf-users] iperf: How does iperf -c xxxxx -i y -b ??M calculate 
the bandwidth at each interval?

Howdy!

It seems like nobody answered this question. Or I never got the answer. Anyhow 
...

On 15/02/13 22:56, Hickman, Mark wrote:
We are confused about the inner workings of iperf, which raises questions about 
the validity of its reports under these conditions.


1.      Does the iperf client measure the interval bandwidth by the amount of 
data it wrote to a buffer per interval, by the amount of data the radio 
pulled out of the buffer per interval, or by the amount of data the radio put 
on the air?  The first two should be equivalent.

2.      The other question I should ask but cannot think of.

iperf, being purely a userland application, does not have any idea about 
physical-layer connectivity. Hence the sending party measures the bandwidth of 
pushing data into its send buffers. It is not known to iperf whether the layers 
below that (TCP/UDP; IP; Ethernet or any other L2 technology; wired, wireless 
or any other L1 technology) drop data.
The above statement does not imply which of the cases you enumerated is 
actually the correct one. However, if the transmit device behaves (i.e. does 
not drop data due to a full buffer), then one can observe the typical 
behaviour: a surge of UL data with high peak throughput at the beginning and a 
drop to the real L2 speed afterwards. Hence the conclusion: the iperf client 
measures the interval bandwidth by the amount of data it wrote to a buffer per 
interval.

If it were the second case (the iperf client measuring the interval bandwidth 
by the amount of data the radio pulled out of the buffer per interval), one 
would not see the spike right at the beginning of the test.

The visibility of this spike is proportional to the transmit buffer size (wmem 
in Linux) and inversely proportional to the first-leg link speed.
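
To see how large that buffer actually is on a particular sender (a quick 
sketch, Linux paths, nothing iperf-specific):

# Inspect the sender's TCP send-buffer sizing (Linux).
import socket

# System-wide TCP send-buffer limits: min, default and max, in bytes.
with open("/proc/sys/net/ipv4/tcp_wmem") as f:
    print("tcp_wmem (min default max):", f.read().strip())

# What a fresh socket actually gets (the kernel may autotune it upward later).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF), "bytes")
s.close()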

Now to the drops: I have extensive experience with broadband wireless 
(WCDMA/HSPA and LTE) devices, and most of them are transparent in the sense 
that they don't buffer data. A few of them do buffer data (and a few even act 
as a kind of router, performing NAT etc.), and those are more than happy to 
drop packets.
When using the former breed, one sees the TX speed at the sending side with 
no (or few) dropped packets, while when using the latter ones, one can only see 
the real throughput on the receiving (iperf server) side ... and there are 
plenty of dropped packets, the amount depending on the ratio of the iperf TX 
bandwidth to the real link bandwidth.

In the case of heavy buffering along the way, reports from the iperf server 
tend to arrive too late for the client to take note of them. All in all, it is 
safest to rely only on reports from the receiving side (the server for uplink 
and the client for downlink).
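
For example (UDP uplink case; the 20M offered load and 30 s duration are 
arbitrary), run interval reports on both sides and trust the ones printed by 
the server:

# receiving side -- its per-interval reports show what actually arrived
iperf -s -u -i 1

# sending side -- its per-interval reports only show what was handed to the stack
iperf -c <server-address> -u -b 20M -i 1 -t 30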

BR,
 Metod