On 2018-08-28 1:07 p.m., Dave Taht wrote:
> In looking over the increasingly vast sqm-related deployment, there's
> a persistent data point that pops up regarding inbound shaping at high
> rates.
>
> We give users a choice - run out of cpu at those rates or do inbound
> sqm at a rate their cpu can afford.  A remarkable percentage are
> willing to give up tons of bandwidth in order to avoid latency
> excursions (oft measured, even in these higher speed 200+Mbit
> deployments, in the 100s of ms) -

Humans experience delays directly, and so perceive systems with high latency as "slow". The proverbial "man on the Clapham omnibus" therefore responds to high-latency systems with disgust.

A trained scientist, however, runs the risk of choosing a metric that requires complicated measurement schemes, and might well optimize for throughput, since it sounds like a desirable measure, one matching their intuition of what "fast" means.

Alas, in this case the scientist's intuition is a far poorer guide than the random person's direct experience.

> At least some users want low delay always. It's just the theorists
> that want high utilization right at the edge of capacity. Users are
> forgiving about running out of cpu - disgruntled, but forgiving.
>
> Certainly I'm back at the point of recommending tbf+fq_codel for
> inbound shaping at higher rates - and looking at restoring the high
> speed version of cake - and I keep thinking a better policer is
> feasible.
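
For anyone who wants to try that combination, here's a minimal sketch of
inbound shaping with tbf+fq_codel via an ifb redirect. The interface names
(eth0, ifb0), the 200mbit rate, and the tbf burst/latency figures are
illustrative assumptions - tune them to your link and your cpu:

  # Redirect ingress traffic from the WAN interface to an ifb device
  ip link add ifb0 type ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
      action mirred egress redirect dev ifb0

  # Shape on ifb0: tbf enforces the rate, fq_codel manages the queue
  tc qdisc add dev ifb0 root handle 1: tbf rate 200mbit burst 64kb latency 50ms
  tc qdisc add dev ifb0 parent 1:1 handle 10: fq_codel

(Cake folds the same idea into one qdisc - "tc qdisc add dev ifb0 root cake
bandwidth 200mbit ingress" - at a higher per-packet cpu cost, which is
exactly the tradeoff under discussion.)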

My advice to engineers? First, go for things you can both experience and measure, and only then things you have to measure.

--dave

--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
[email protected]           |                      -- Mark Twain
