Howdy!
I'm quite sure that "dropping" devices will drop any kind of packet when their
buffers are full. This is particularly true for layer-2 devices (such as
Ethernet switches and/or adapters), which know nothing about IP. If you're
lucky this will not be a completely random process, namely if those devices
know about QoS markings such as TOS (Type of Service).
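As an aside: if you want QoS-aware hops to have any chance of prioritising
your traffic, you can mark it yourself from the application. A minimal Python
sketch (the 0x10 value is the old IPTOS_LOWDELAY marking, chosen purely for
illustration; whether anything on the path honours it is another question):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Set the TOS byte on outgoing packets (Linux; other platforms may
    # not expose IP_TOS or may ignore it).
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)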
This behaviour will not show as dramatically with TCP, due to several factors:
* TCP slow start. The transmit rate at the beginning is quite slow for TCP,
and if the sender receives ACKs in a timely fashion and no NACKs in between,
the TX rate ramps up. This is also governed by the initial/maximum TCP window
size (see the sketch after this list).
* as already mentioned in the paragraph above, if a packet goes missing, the
receiving side of a TCP stream will request retransmission by NACKing the
missing packet (in practice via duplicate ACKs or SACK). This reduces the
transmission rate.
* there are probably more factors
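To give a feel for the slow-start ramp mentioned in the first bullet, here's a
back-of-envelope sketch (all numbers are assumed; real TCP stacks differ in
detail):

    # Throughput is roughly window/RTT; the window doubles each round-trip
    # during classic slow start until it reaches the maximum window size.
    mss = 1460              # bytes per segment
    rtt = 0.05              # seconds, assumed
    max_window = 64 * 1024  # bytes, assumed
    cwnd = mss              # slow start begins around one segment
    for rtt_no in range(8):
        print(f"RTT {rtt_no}: window={cwnd:6d} B  ~{cwnd * 8 / rtt / 1e6:6.2f} Mbit/s")
        cwnd = min(cwnd * 2, max_window)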
If there's a box with too small a buffer in the link path (a leaky wireless
device being the most likely suspect), TCP will handle that through the
ACK/NACK/window-size mechanism and you will not see many retransmissions -
most probably less than 10%. BTW, TCP will also handle out-of-order arrival of
packets. If using UDP, the application will see (and will have to handle) such
out-of-order packets.
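For illustration, a minimal UDP receiver that has to do that work itself could
look like this (hypothetical wire format: each datagram starts with a 4-byte
big-endian sequence number; iperf embeds a similar sequence id in its UDP
payload):

    import socket, struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 5001))
    expected = 0
    while True:
        data, _ = sock.recvfrom(2048)
        (seq,) = struct.unpack("!I", data[:4])   # sequence number, by assumption
        if seq < expected:
            print(f"out-of-order datagram {seq} (already past {expected})")
        elif seq > expected:
            print(f"gap: datagrams {expected}..{seq - 1} missing or reordered")
        expected = max(expected, seq + 1)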
As to iperf itself: retransmissions are transparent to a userland application,
so the iperf receiving side (e.g. the iperf server) will not see whether any
TCP packets went missing at all. The only indication of link misbehaviour will
be reduced end-to-end throughput. Large link latency has a similar effect -
every packet needs to be ACKed, and if ACKs arrive late the sending party will
not send out more, or in the worst case it will re-transmit the not-yet-ACKed
packet. The only way to see missing and/or retransmitted TCP packets is via
some capturing software (such as Wireshark).
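If you'd rather count them programmatically than eyeball a Wireshark window,
here is a rough sketch (assumes scapy is installed and capture.pcap was
recorded with e.g. tcpdump; a repeated data-carrying sequence number is a
simple heuristic, close in spirit to Wireshark's tcp.analysis.retransmission
flag):

    from scapy.all import rdpcap, IP, TCP

    seen = set()
    retrans = 0
    for pkt in rdpcap("capture.pcap"):
        if IP in pkt and TCP in pkt and len(pkt[TCP].payload) > 0:
            key = (pkt[IP].src, pkt[IP].dst,
                   pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
            if key in seen:
                retrans += 1    # same data-bearing segment seen before
            else:
                seen.add(key)
    print(f"suspected retransmitted segments: {retrans}")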
BR,
Metod
Hickman, Mark wrote on 25/02/13 15:16:
Thanks for your response. We were coming to the conclusion that iperf must
measure the buffer fill rate on the client side. I am glad that you can
confirm this. On systems with larger buffers we can see the dynamics you
described, where iperf initially fills the buffer rapidly and then settles to
the link bandwidth as the test progresses.
When testing under UDP we are convinced that packets are flushed out of the
buffer without being pulled into the radio. We are having difficulty
determining if a similar buffer behavior is present with TCP traffic.
Though such behavior may be better tolerated due to the TCP retry mechanism,
I am concerned that it would at least trigger more retries at the TCP level
than necessary.
Again, thanks.
Regards,
Mark
*From:* Metod Kozelj [mailto:metod.koz...@lugos.si]
*Sent:* Monday, February 25, 2013 8:43 AM
*To:* Hickman, Mark
*Cc:* iperf-users@lists.sourceforge.net
*Subject:* Re: [Iperf-users] iperf: How does iperf -c xxxxx -1 y -b ??M
calculate the bandwidth at each interval?
Howdy!
It seems like nobody answered this question. Or I never got the answer.
Anyhow ...
Hickman, Mark wrote on 15/02/13 22:56:
We are confused about the inner workings of iperf, which raises questions
about the validity of the reports under these conditions.
1. Does the iperf client measure the interval bandwidth by the amount of data
it wrote to a buffer per interval, by the amount of data the radio pulled out
of the buffer per interval, or by the amount of data the radio put on the air?
The first two should be equivalent.
2. The other question I should ask but cannot think of.
iperf, being a pure userland application, does not have any idea about
physical-layer connectivity. Hence the sending party measures the bandwidth of
pushing data into send buffers. It is not known to iperf whether the layers
below (TCP/UDP; IP; Ethernet or any other L2 technology; wire, wireless or any
other L1 technology) drop data.
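A minimal sketch of why that is (not iperf's actual code; the address and port
are placeholders): send() returns as soon as the kernel has copied the data
into the socket send buffer, so counting its return values measures the
buffer-fill rate, not the wire rate:

    import socket, time

    BUF = b"\x00" * 65536
    sock = socket.create_connection(("192.0.2.10", 5001))  # placeholder target
    start = time.monotonic()
    sent = 0
    while time.monotonic() - start < 1.0:
        sent += sock.send(BUF)  # bytes accepted by the kernel, not bytes on air
    print(f"apparent throughput: {sent * 8 / 1e6:.1f} Mbit/s")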
The statement above does not by itself imply which of the cases you enumerated
is actually correct. However, if the transmit device behaves (i.e. does not
drop data when its buffer is full), then one can observe the typical
behaviour: a surge of UL data with high peak throughput at the beginning and a
drop to the real L2 speed afterwards. Hence the conclusion: the iperf client
measures the interval bandwidth by the amount of data it wrote to a buffer per
interval.
If it were the second case (the iperf client measuring the interval bandwidth
by the amount of data the radio pulled out of the buffer per interval), one
would not see the spike right at the beginning of the test.
The visibility of this spike is proportional to the transmit buffer size
(wmem in Linux) and inversely proportional to the first-leg link speed.
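If you want to experiment with that, the per-socket knob behind Linux's wmem
defaults can be shrunk from the application; a small sketch (the 16 KiB value
is arbitrary):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)
    # Linux reports back double the requested value; the kernel keeps the
    # extra half for bookkeeping overhead.
    print("effective SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

With a smaller send buffer the start-of-test spike shrinks, because less data
can be queued ahead of the slow link.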
Now to the drops: I have extensive experience with broadband wireless
(WCDMA/HSPA and LTE) devices, and most of them are transparent in the sense
that they don't buffer data. A few of them buffer data (and a few of them even
act as a kind of router, performing NAT etc.), and those are more than happy
to drop packets.
When using the former breed one can see the TX speed at the sending side with
no (or seldom) dropped packets, while when using the latter ones one can only
see the real throughput on the receiving (iperf server) side ... and there are
plenty of dropped packets, the amount depending on the ratio between the iperf
TX bandwidth and the real link bandwidth.
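Rough arithmetic for that drop rate, assuming the buffering device simply
discards whatever exceeds the real link rate (both numbers below are made up):

    offered_mbps = 20.0  # what iperf is told to send (-b), assumed
    link_mbps = 7.2      # real radio uplink rate, assumed
    loss = max(0.0, 1 - link_mbps / offered_mbps)
    print(f"expected datagram loss: {loss:.0%}")  # ~64% in this example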
In case of heavy buffering on the way, reports from the iperf server tend to
arrive too late for the client to take note of them. All in all, it's safest
to rely only on the reports from the receiving side (the server for uplink,
the client for downlink).
BR,
Metod