Attached is the initial version of the loadlatency tool. I'm getting some rather interesting results from it already, although it does take a very long time to run.
It works under Linux, Cygwin and Mac OS X on both little- and big-endian
machines (and between machines of different byte order), and therefore it
should also work under FreeBSD and other UNIXes. I haven't yet tried compiling
it for iOS or Android.
It produces useful results even when one of the machines is rather old and
slow, despite using a proper PRNG for traffic generation. My ancient
Pentium-MMX proved capable of generating at least 4.4 MB/s of traffic steadily,
and probably produces spikes of even greater bandwidth. Anything of Y2K
vintage or newer should be able to saturate its network with this.
There are four measures produced: Upload Capacity, Download Capacity, Link
Responsiveness and Flow Smoothness. All of these are reported in "bigger is
better" units to help deal with Layers 8 and 9.
The Capacity calculations are a bit complex to understand. For each flow (in
either direction), the average goodput is measured over the entire lifetime of
the flow. All of the flows in each direction for that scenario are then
aggregated by taking the harmonic mean and multiplying by the number of flows;
this biases the total downwards if the flows were mutually unfair. Finally,
the relevant measures across all scenarios are aggregated using the harmonic
mean, thus biasing the overall measure towards cases of under-utilisation. The
totals are then reported in binary kilobytes per second.
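If that is easier to follow as code, the aggregation amounts to roughly the
following. This is only an illustrative C sketch of the calculation described
above, not the code from the tarball; the function names (scenario_capacity,
overall_capacity) are invented:

/* Per scenario: n flows are combined as n * harmonic_mean(goodputs),
 * i.e. n*n / sum(1/g_i).  Across scenarios: a plain harmonic mean. */
#include <stddef.h>

static double scenario_capacity(const double *goodput, size_t n)
{
    double recip_sum = 0.0;
    for (size_t i = 0; i < n; i++)
        recip_sum += 1.0 / goodput[i];      /* slow or unfair flows dominate */
    return (double) n * (double) n / recip_sum;
}

static double overall_capacity(const double *per_scenario, size_t m)
{
    double recip_sum = 0.0;
    for (size_t i = 0; i < m; i++)
        recip_sum += 1.0 / per_scenario[i]; /* under-utilised scenarios dominate */
    return (double) m / recip_sum;          /* same units as the inputs, e.g. KiB/s */
}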
The Link Responsiveness measure is much easier. I simply take the maximum
latency of the "pinger" channel (which is also used for commanding flows to
stop) and invert it to produce a measure in Hertz. This is rounded down to the
nearest integer for display; if you see a zero here (which is surprisingly
likely), it means that a latency of more than one second was encountered at
some point during the entire test.
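In code that is no more than something like this (again only a sketch with an
invented function name, not the tarball's code):

/* Worst-case ping latency, inverted and floored to whole Hertz. */
#include <math.h>

static int link_responsiveness_hz(double worst_ping_seconds)
{
    return (int) floor(1.0 / worst_ping_seconds);  /* >1s worst case shows as 0 Hz */
}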
The Flow Smoothness measure refers to the application-level data inter-arrival
timings. The maximum delay between data chunks arriving is measured across all
flows in all scenarios, but excludes the first megabyte of each flow so as to
avoid picking up the RTT during TCP startup. As with Link Responsiveness, this
worst-case gap is inverted and reported in Hertz.
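A rough sketch of that gap tracking, with invented names and the one-megabyte
exclusion from the description above hard-coded:

/* Record the worst gap between arriving data chunks, ignoring each
 * flow's first megabyte so TCP slow-start is not counted. */
#include <stddef.h>
#include <stdint.h>

#define SMOOTHNESS_SKIP_BYTES (1024u * 1024u)

struct flow_state {
    uint64_t bytes_seen;
    double   last_arrival;  /* seconds */
};

static void note_chunk(struct flow_state *f, double now, size_t len,
                       double *worst_gap)
{
    if (f->bytes_seen >= SMOOTHNESS_SKIP_BYTES) {
        double gap = now - f->last_arrival;
        if (gap > *worst_gap)
            *worst_gap = gap;   /* Flow Smoothness = floor(1 / worst_gap) Hz */
    }
    f->bytes_seen += len;
    f->last_arrival = now;
}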
Here are some numbers I got from a test over a switched 100base-TX LAN:
Upload Capacity: 1018 KiB/s
Download Capacity: 2181 KiB/s
Link Responsiveness: 2 Hz
Flow Smoothness: 1 Hz
Horrible, isn't it? I deliberately left these machines with standard
configurations in order to show that.
- Jonathan
loadlatency.tar.gz
