"   From each of these sets of measurements, the 10th and 90th
   percentiles and the median value SHOULD be computed.  For each
   scenario, a graph can be generated, with the x-axis showing the end-
   to-end delay and the y-axis the goodput.  This graph provides part of
   a better understanding of (1) the delay/goodput trade-off for a given
   congestion control mechanism, and (2) how the goodput and average
   queue size vary as a function of the traffic load."

This is lame. Capturing *all* the data, as in a CDF or a Winstein
ellipse plot, across the entire range, is to be preferred when
engineering a system.

The 90th percentile is a very, very low bar to cross; most of the nasty
bufferbloat happens at the top end of the range. Packet CRCs, as one
example, are measured out to what, one in 6 million? Would you drive a
car whose steering wheel failed one time in 10 turns?
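To make the point concrete, here's a minimal sketch (the delay numbers are made up for illustration, not measured): a flow where only a couple percent of packets hit a bloated queue looks fine at the median and even at the 90th percentile, and the problem only shows up out in the tail that a full CDF would have captured.

```python
import random

# Hypothetical one-way delay samples (ms): ~98% of packets see a ~20 ms
# path, ~2% get stuck behind a bloated queue for 500-2000 ms.
# These numbers are illustrative assumptions, not real measurements.
random.seed(42)
samples = [random.uniform(15, 25) if random.random() > 0.02
           else random.uniform(500, 2000)
           for _ in range(100_000)]

def percentile(data, p):
    """Nearest-rank percentile of a list, 0 < p <= 100."""
    s = sorted(data)
    k = max(0, int(round(p / 100 * len(s))) - 1)
    return s[k]

p50 = percentile(samples, 50)    # median: looks great
p90 = percentile(samples, 90)    # 90th: still looks great
p999 = percentile(samples, 99.9) # the bloat only shows up out here
print(f"p50={p50:.1f} ms  p90={p90:.1f} ms  p99.9={p999:.1f} ms")
```

Both the median and the 90th percentile sit in the 15-25 ms band, while the 99.9th percentile is two orders of magnitude worse; a report built on p50/p90 alone would call this link healthy.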

As for medians: seven-number summaries, if you must...

-- 
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
