Network Latency Under Load testing tool
(c) 2011 Jonathan Morton <chromatix99@gmail.com>


COMPILING:

gcc -O3 loadlatency.c -o loadlatency -lgsl -lblas -lpthread

This should work on most UNIX-like platforms, including Cygwin.  Substitute your choice of BLAS library (e.g. Cygwin prefers -llapack); it is required by GSL rather than by the program itself.  On Ubuntu, try 'sudo apt-get install gcc libgsl0-dev libblas-dev' if you have problems.

There may be a few compilation warnings emitted.  These are harmless.


RUNNING:

Loadlatency will run as an unprivileged user, since it uses only high-numbered TCP/IP ports.

1) Choose a host to act as server.  Start loadlatency in server mode by running it without arguments.

2) From one or more other hosts, start loadlatency with the IPv4 or IPv6 address of the server host on the command line.  The server serves only one client at a time, but will automatically switch to a new client once the current one finishes.

3) Wait.  The test's 19 scenarios take a substantial amount of time to complete.

4) Overall statistics will be displayed on stdout on completion.
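The steps above amount to the following; the addresses shown are documentation placeholders, so substitute your own hosts:

```shell
# On the server host: no arguments puts loadlatency in server mode.
./loadlatency

# On each client host: pass the server's IPv4 or IPv6 address.
./loadlatency 192.0.2.1
./loadlatency 2001:db8::1
```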


RESULTS:

All results (except the MinRTT displayed at the beginning) are displayed in units where "bigger is better", which may help in selling improvements to management and end-users.  The overall results are also rounded down to the nearest integer to eliminate excess precision.

There are four overall results computed:

Upload Capacity and Download Capacity are calculated as the harmonic mean of all scenarios' relevant capacity measurements, and are reported in binary kilobytes per second.  These numbers are likely to be somewhat lower than the theoretical capacity of the link, even after conversion from Kbps.  Each individual scenario's capacity measurement is taken as the harmonic mean of the arithmetic-mean goodputs of each flow over the entire scenario duration, multiplied by the number of flows.  This method of measuring therefore incorporates implicit penalties for mutual unfairness between flows and for link under-utilisation during specific scenarios.

Link Responsiveness is derived from the maximum latency experienced by the "pinger" or "command" channel, which shuttles a command word back and forth in parallel with the data flows.  Latency is traditionally reported in milliseconds, but here it is reported in Hertz so that bigger is better.

Flow Smoothness is derived from the maximum inter-arrival delay experienced by any single flow.  This delay can be lengthened considerably by packet loss and subsequent retransmission.  To avoid effectively measuring the RTT during TCP startup, inter-arrival delays are ignored until 1 megabyte has been transferred.  The result is again reported in Hz, and can therefore be compared directly with (for example) video framerates.

The measured capacity and smoothness are also reported for each scenario as the test runs.  This extra information should not be used for marketing, but may be useful for understanding performance problems.
