>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is 
>> the same as when the value goes to 1768 msec (0:31), and also when it goes 
>> to 4,447 msec (0:35), etc. It might make more sense to have the chart's 
>> full-scale at something like 10 seconds during the test. The scale could be 
>> logarithmic, so that "normal" values occupy up to a third or half of scale, 
>> and bad values get pretty close to the top end. Horrible latency - greater 
>> than 10 sec, say - should peg the indicator at full scale.
> 
> the graph started out logarithmic and it was changed because that made it 
> less obvious to people when the latency was significantly higher (most people 
> are not used to evaluating log scale graphs)

I agree that the results graph should never be logarithmic - it hides the bad 
news of high latency. 

However, the gauge that shows instantaneous latency could be logarithmic. I was 
reacting to the appearance of slamming against the limit at 765 msec, then not 
making it more evident when latency jumped to 1768 msec, then to 4447 msec. 

Imagine the same gauge, with the following gradations at these clock positions, 
with the bar colored to match:

0 msec - 9:00 (straight to the left)
25 msec - 10:00
100 msec - 11:00
250 msec - 12:00
1,000 msec - 1:00
3,000 msec - 2:00
10,000+ msec - 3:00 (straight to the right; pegged at full scale)
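
To make the idea concrete, here is a minimal sketch of how display code might map a latency sample onto that needle sweep. The gradations are the ones listed above; interpolation within each segment is linear, and everything at or beyond 10,000 msec pegs the needle. The function names are hypothetical, not anything from the actual tool:

```python
# Hypothetical gauge mapping: latency sample (msec) -> needle position.
# The gauge sweeps 180 degrees clockwise from 9:00 (0 msec) to
# 3:00 (>= 10,000 msec), using the gradations proposed above.

STOPS = [  # (latency in msec, fraction of full-scale sweep)
    (0,     0 / 6),  # 9:00
    (25,    1 / 6),  # 10:00
    (100,   2 / 6),  # 11:00
    (250,   3 / 6),  # 12:00
    (1000,  4 / 6),  # 1:00
    (3000,  5 / 6),  # 2:00
    (10000, 6 / 6),  # 3:00 -- pegged beyond this
]

def needle_fraction(msec):
    """Return the needle position as 0.0..1.0 of full-scale sweep."""
    if msec >= STOPS[-1][0]:
        return 1.0  # horrible latency pegs the indicator
    for (lo_ms, lo_f), (hi_ms, hi_f) in zip(STOPS, STOPS[1:]):
        if msec <= hi_ms:
            # Linear interpolation within this segment.
            return lo_f + (hi_f - lo_f) * (msec - lo_ms) / (hi_ms - lo_ms)

def needle_degrees(msec):
    """Degrees clockwise from the 9:00 position (180 = pegged at 3:00)."""
    return 180.0 * needle_fraction(msec)
```

With this mapping, the three samples mentioned earlier land at visibly different positions (765 msec about 62% of sweep, 1768 msec about 69%, 4447 msec about 87%), instead of all slamming against the same limit.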

Would that make sense?

Rich
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat