On Sun, 27 Jul 2014, Sebastian Moeller wrote:

On Jul 26, 2014, at 22:53 , David Lang <da...@lang.hm> wrote:

On Sat, 26 Jul 2014, Sebastian Moeller wrote:

On Jul 26, 2014, at 01:26 , David Lang <da...@lang.hm> wrote:

But I think that what we are seeing from the results of the bufferbloat work is 
that a properly configured network doesn't degrade badly as it gets busy.

Individual services will degrade as they need more bandwidth than is available, 
but that sort of degradation is easy for the user to understand.

The current status-quo is that good throughput at 80% utilization may be 80Mb, 
at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% 
utilization it pulses between 10Mb and 80Mb, averaging around 20Mb, while latency 
goes from 10ms to multiple seconds over this range.

With BQL and fq_codel, 80% utilization would still be 80Mb, 90% utilization 
would be 89Mb, 95% utilization would be 93Mb, with latency only going to 20ms

so there is a real problem to solve in the current status-quo, and the question 
is whether there is a way to quantify the problem and test for it in ways that are 
repeatable, meaningful and understandable.

This is a place to avoid letting perfect be the enemy of good enough.

If you ask even relatively technical people about the quality of a network 
connection, they will talk to you about bandwidth and latency.

But if you talk to a networking expert, they don't even mention those; they talk 
about signal strength, waveform distortion, bit error rates, error correction 
mechanisms, signal regeneration, and probably many other things that I don't 
know enough to even mention :-)


Everyone is already measuring peak bandwidth today, and that is always going to 
be an important factor, so it will stay around.

So we need to show the degradation of the network, and I think that either 
ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us 
meaningful numbers that people can understand and talk about, while still being 
meaningful in the real world.
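
To make both candidates concrete, here is a minimal Python sketch of the two 
numbers; the helper name and the sample RTT values are illustrative assumptions, 
not anything measured on a real line:

import statistics

def bloat_metrics(rtt_idle_ms, rtt_loaded_ms):
    """Summarize idle vs. under-load RTT samples (milliseconds).
    Returns (difference_ms, ratio): the extra latency added by load,
    and how many times larger the loaded latency is than the idle one.
    Medians keep a few outliers from dominating."""
    idle = statistics.median(rtt_idle_ms)
    loaded = statistics.median(rtt_loaded_ms)
    return loaded - idle, loaded / idle

# Hypothetical samples: ~10 ms idle RTT, ~200 ms while saturating the link.
diff_ms, ratio = bloat_metrics([9.8, 10.1, 10.3], [180.0, 210.0, 195.0])
print("bloat difference: %.1f ms, bloat factor: %.1fx" % (diff_ms, ratio))

Collecting the samples (ping while idle, then ping again while saturating the 
link with an upload/download) is left out here; the point is just that both 
numbers fall out of the same two measurements.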

        Maybe we should follow Neil and Martin’s lead and consider either 
ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole 
thing a quality estimator or factor (as a negative quality or a factor < 1 
intuitively shows a degradation).

That's debatable; if we call this a bufferbloat factor, the higher the number, 
the more bloat you suffer.

there's also the fact that the numeric differences aren't impressive if you do 
small/large vs small/larger, while large/small vs larger/small look 
substantially different. This is a psychology question.

        I am not in this for marketing ;) so I am not out for impressive 
numbers ;)

well, part of the problem we have is exactly marketing, so we do need to take that into account.

This is one of the things that has come up in multiple forums after the EFF announcement, people saying that they've heard of bufferbloat but don't have any way of measuring it or comparing notes.

getting a marketing number here would be a huge help.

Also, my bet is on the difference, not the ratio: why should people with bad 
latency to begin with (satellite?) be more tolerant of further degradation? I 
would assume that, if anything, the “budget” for further degradation on a 
high-latency link is smaller than on a low-latency link (reasoning: there might 
be a fixed latency budget for acceptable VoIP).

we'd need to check. The problem with the difference is that it's far more affected 
by the bandwidth of the connection than a ratio is. If your measurement packets 
end up behind one extra data packet, your absolute number will grow based on 
the transmission time required for that data packet.

so I'm leaning towards the ratio making more sense when comparing vastly 
different types of lines.
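
To put a rough number on the "one extra data packet" effect (a back-of-the-envelope 
sketch; the 1500-byte MTU-sized packet is an assumption):

# Serialization delay added by a single queued 1500-byte packet.
PACKET_BITS = 1500 * 8
for mbps in (1, 10, 100, 1000):
    delay_ms = PACKET_BITS / (mbps * 1e6) * 1000
    print("%4d Mb/s: one queued packet adds %6.3f ms" % (mbps, delay_ms))

That's 12 ms at 1 Mb/s but only 0.012 ms at 1 Gb/s for the same single queued 
packet.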

But for a satellite link with high 1st-hop RTT the bufferbloat factor is always going to look minuscule... (I still think the difference is better)


As for the latency budget idea, I don't buy that; if it were the case, then we 
would have no problems until latency exceeded the magic value, and then the 
service would fail entirely.

No, rather think of it this way: with increasing latency, pain increases; not a threshold, but a gradual change from good through acceptable into painful...


What we have in practice is that buffering covers up a lot of latency, as long 
as the jitter isn't bad. You can tolerate a lag between what you say and when 
someone on the other end interrupts you without much trouble (as long as echo 
cancellation takes it into account).

Remember transcontinental long distance calls? If the delay gets too long, communication suffers, especially in real-time applications like VoIP.

How much of that was due to echo cancellation issues compared to the raw latency? The speed of light across the country hasn't changed, and I'd actually bet that the signalling speed of a direct analog connection across the country was faster than the current mic-to-speaker path of

   analog -> digital -> many routers -> digital -> analog

but the echo cancellation is so much more sophisticated that we don't notice the delay as much


Which of the two is more useful is something we would need a bunch of people 
with different-speed lines to report on, so we can see which is affected less 
by line differences and distance to the target.

        Or make sure we always measure against the closest target (which with 
satellite might still be far away)?

It's desirable to test against the closest target to reduce the impact on the 
Internet overall, but ideally the quality measurement would not depend on how 
far away the target is.

No, the “quality” will be most affected by the bottleneck link, but the more hops we accumulate, the more variance we pick up and the more measurements we need to reach an acceptable confidence in our data...

true
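
A rough way to put a number on that variance point (a minimal sketch; it assumes 
independent, roughly normally distributed RTT samples, which real multi-hop paths 
only approximate, and the pilot values are made up):

import math, statistics

def samples_needed(pilot_rtts_ms, target_halfwidth_ms, z=1.96):
    # How many probes until the 95% confidence interval for the mean
    # RTT is within +/- target_halfwidth_ms, given the pilot's spread.
    sd = statistics.stdev(pilot_rtts_ms)
    return math.ceil((z * sd / target_halfwidth_ms) ** 2)

# Hypothetical pilot samples over a jittery multi-hop path:
pilot = [22.0, 25.5, 31.0, 24.0, 40.0, 23.5, 28.0, 26.5]
print(samples_needed(pilot, target_halfwidth_ms=2.0))   # ~32 probes

More hops means more jitter, a larger standard deviation, and a probe count that 
grows with the square of that spread.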

David Lang
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
