Hi Jim,
Thanks for this post. I'll need to digest it a bit. There are definitely
myriad ways an iperf client can send traffic. I think understanding how the
server presents traffic matters as well, i.e. what exactly is being tested,
and how do the client/server reports allow for that testing? TCP makes things
difficult because underneath iperf it's doing a lot. One way to analyze it is
with a tool like tcptrace, which allows inspection of congestion windows, round
trip delay, etc.: http://www.tcptrace.org/
The quickest way I know to measure buffer depth is to turn them into "bad
buffers" (see bufferbloat, http://en.wikipedia.org/wiki/Bufferbloat), where the
intermediate buffers can never drain and stay permanently full. Then from
there, measure end/end latency. I run UDP traffic at line rate to do this (and
that's also part of the reason why end/end latency was exposed in the new iperf
UDP server output).
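For what it's worth, here's the back-of-the-envelope arithmetic that turns the
saturated-queue latency into a buffer-depth estimate. A sketch only; the link
rate and latencies below are made-up illustrations, and the loaded latency
would come from the end/end figure in the iperf UDP server report:

    # Rough buffer-depth estimate from bufferbloat-style saturation.
    # Keep the UDP offered load at/above line rate so the bottleneck
    # queue stays full, then read the steady-state end/end latency.
    line_rate_bps  = 1_000_000_000  # bottleneck link: 1 Gb/s (assumed)
    latency_idle_s = 0.000200       # one-way latency, unloaded (hypothetical)
    latency_full_s = 0.002200       # one-way latency, queue saturated (hypothetical)

    queue_delay_s = latency_full_s - latency_idle_s
    buffer_bytes  = queue_delay_s * line_rate_bps / 8
    print(f"estimated buffer depth: {buffer_bytes:.0f} bytes "
          f"(~{buffer_bytes / 1518:.0f} full-sized Ethernet frames)")

With those (invented) numbers the standing queue works out to about 250,000
bytes, i.e. the delta in latency times the drain rate of the bottleneck.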
I guess I don't really understand what's needed to test and measure TCP
microbursts. Let me digest things a bit, though do feel free to add more
information, as that may be helpful.
Bob
From: Jim Young [mailto:jyo...@gsu.edu]
Sent: Thursday, January 08, 2015 10:28 PM
To: Bob (Robert) McMahon; Steve Baillargeon
Cc: iperf-users@lists.sourceforge.net
Subject: Re: [Iperf-users] iperf UDP burst mode
Hello Bob and Steve,
Interesting discussion. I recently considered the possibility of trying to add
some type of TCP "duty cycle" mode. Using the proposed UDP burst mode one
might be able to emulate the TCP "duty cycle" mode. But extending iperf to
support a TCP burst mode would likely prove useful to help expose microburst
issues.
Last year we investigated some nagging port congestion issues seen on several
1G ports that were "only" nominally 15% utilized; these ports were consistently
transmitting data at about 150 Mb/sec, 24/7. But I could not replicate the port
congestion issue with iperf, even when pushing much higher average throughput.
Using the technique documented in the following Cisco technote I eventually
confirmed the root cause was likely microbursts:
http://www.cisco.com/c/en/us/support/docs/lan-switching/switched-port-analyzer-span/116260-technote-wireshark-00.pdf
Instead of saying this 1G network port was 15% utilized, I tend to say that 15%
of the time this 1G network port was 100% utilized. The question: how much
buffering can the switch provide before it must start dropping packets when
traffic arrives faster than it can be transmitted?
In our particular case the source of the microburst (and packet congestion) was
the occasional convergence of packets from several TCP streams of video
surveillance camera traffic to the single 1G port. Each individual video
surveillance TCP stream has a very specific bursty pattern. The DVR sends 30
video frames a second, with each video frame composed of a TCP packet train of
about 35 to 45 full-sized Ethernet frames. So for each TCP stream we see a
burst of 35 to 45 packets followed by about a 30th of a second of quiet. If
just five TCP video frame packet trains arrive at the access layer switch port
buffer at virtually the same instant then we will see the port congestion
counter increment. In our case the switch has dual 10G uplink ports to the
building distribution switch and the several DVR servers that source the TCP
streams are all connected at 10Gb. That means each DVR TCP stream has the
potential of arriving at the 1G egress port buffer 10 times faster than the
port can transmit. With two 10G uplinks to this switch there's the possibility
for traffic to arrive 20 times faster than the 1G port can send. We determined
that the switch had an egress port buffer of about 250,000 bytes. That means
the port could buffer up to about 165 full-sized Ethernet frames before it
would be forced to drop packets.
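A quick sanity check of that arithmetic (a Python sketch using the numbers
above; the 1518-byte wire frame size is an assumption):

    buffer_bytes     = 250_000
    frame_bytes      = 1_518
    frames_buffered  = round(buffer_bytes / frame_bytes)  # ~165 frames

    trains           = 5    # camera streams converging at the same instant
    frames_per_train = 40   # each video frame is ~35-45 full-sized frames

    arriving = trains * frames_per_train                  # ~200 frames
    # With 10G ingress feeding 1G egress, only ~1 frame drains for every
    # 10 that arrive, so nearly the whole burst lands in the egress buffer.
    print(f"buffer holds ~{frames_buffered} frames, burst delivers ~{arriving}")
    print("drops expected" if arriving > frames_buffered else "burst fits")

So five simultaneous trains are already enough to exceed the buffer even
though the long-term average utilization is only 15%.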
Augmenting iperf to support a TCP burst mode in addition to the proposed UDP
burst mode would allow one to simulate these bursty TCP packet trains, which
might help in exposing microburst issues.
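In the meantime, something like the following is how I'd emulate one camera's
duty cycle with plain sockets. A rough sketch, not iperf; the host, port,
payload size, and duration are illustrative assumptions:

    #!/usr/bin/env python3
    # Sketch of a camera-like TCP duty cycle: 30 "video frames" per second,
    # each a train of ~40 MSS-sized sends followed by silence.
    import socket
    import time

    HOST, PORT      = "10.10.10.10", 5001  # illustrative; point at an iperf -s
    FRAMES_PER_SEC  = 30
    SENDS_PER_TRAIN = 40                   # ~35-45 full-sized Ethernet frames
    PAYLOAD         = b"\x00" * 1448       # roughly one MSS per send
    DURATION_S      = 30

    sock = socket.create_connection((HOST, PORT))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # defeat Nagle

    period = 1.0 / FRAMES_PER_SEC
    end = time.monotonic() + DURATION_S
    next_train = time.monotonic()
    while time.monotonic() < end:
        for _ in range(SENDS_PER_TRAIN):
            sock.sendall(PAYLOAD)   # the stack/NIC may still coalesce these
        next_train += period
        delay = next_train - time.monotonic()
        if delay > 0:
            time.sleep(delay)       # quiet gap, ~1/30 s between trains

As Bob notes below, the OS, driver, or an intermediate switch can still reshape
these trains, so what actually arrives at the far end would need to be verified
with a packet capture.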
Best regards,
Jim Y.
From: "Bob McMahon (Robert)"
<rmcma...@broadcom.com<mailto:rmcma...@broadcom.com>>
Date: Thursday, January 8, 2015 5:35 PM
To: Steve Baillargeon
<steve.baillarg...@ericsson.com<mailto:steve.baillarg...@ericsson.com>>
Cc:
"iperf-users@lists.sourceforge.net<mailto:iperf-users@lists.sourceforge.net>"
<iperf-users@lists.sourceforge.net<mailto:iperf-users@lists.sourceforge.net>>
Subject: Re: [Iperf-users] iperf UDP burst mode
Yes, that's basically what I suggested. It would look something like
iperf -c 10.10.10.10 -b 100M -l 1470/16,32 -u where 16 is the min burst and
32 is the max
If one really doesn't want speed ups and doesn't care about converging on -b
then either:
iperf -c 10.10.10.10 -b 100M -l 1470/16 -u
or
iperf -c 10.10.10.10 -b 100M -l 1470/16,16 -u
Though it's worth repeating, this will only control application-level gaps.
If something below (OS/driver) or an intermediate device (router/switch) clumps
them prior to the iperf server, there would be no way of knowing, at least not
from the iperf client or server. This would be a best-effort type of solution
(caveat emptor).
Note: I believe a tool like an Ixia chassis can provide such guarantees.
Bob
From: Steve Baillargeon [mailto:steve.baillarg...@ericsson.com]
Sent: Thursday, January 08, 2015 2:18 PM
To: Bob (Robert) McMahon
Cc: iperf-users@lists.sourceforge.net
Subject: RE: [Iperf-users] iperf UDP burst mode
Hi Bob
I really think a UDP burst mode with some well-understood expectations and
restrictions will be useful for testing network bandwidth and buffering
capacity. What is the next step to hopefully get it supported?
Regarding burst vs. bandwidth configuration at the client: what if the user
only needs to configure bandwidth (below the client line rate), burst size, and
packet size, with some restrictions on the possible values? The client would
then estimate the gap needed to satisfy the bandwidth for a given train size
(train is probably a better term than burst) and packet size. Is that what you
suggested?
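To make the estimate concrete, a sketch of the arithmetic with hypothetical
numbers (the client line rate of 1 Gb/s is an assumption):

    target_bw_bps = 100_000_000     # -b 100M
    pkt_bytes     = 1_470           # -l 1470
    train_pkts    = 16              # proposed train size
    line_rate_bps = 1_000_000_000   # client line rate (assumed)

    train_bits = train_pkts * pkt_bytes * 8
    period_s   = train_bits / target_bw_bps   # train spacing for the average -b
    tx_time_s  = train_bits / line_rate_bps   # time the train occupies the wire
    gap_s      = period_s - tx_time_s         # idle gap the client inserts
    print(f"one {train_pkts}-packet train every {period_s * 1e3:.3f} ms, "
          f"gap ~{gap_s * 1e3:.3f} ms")

With those numbers the client would emit one 16-packet train roughly every
1.88 ms, leaving about 1.69 ms of idle time between trains.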
Regards
Steve
<snip>