On Thu, May 6, 2021 at 6:41 AM David Lang <[email protected]> wrote:
>
> it's sometimes worth reminding technical folks that if you look at a
> small enough time slice, a network is either 0% or 100% utilized, so if
> the output is 100% utilized the instant a packet arrives, the device
> either drops the data or buffers it.
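That drop-or-buffer choice can be sketched in a few lines. This is my own toy model, not anything from the thread; the buffer size and burst pattern are made up for illustration:

```python
# Toy model (my own sketch, not from the thread): one tick is the time
# the egress needs to serialize one packet, so the wire is 100% busy on
# any tick it sends and 0% busy otherwise. Arrivals beyond what the
# buffer can hold are dropped.

BUF_LIMIT = 3  # hypothetical buffer size, in packets

def run(arrivals_per_tick):
    """Return (sent, dropped, busy_ticks) for a list of per-tick bursts."""
    queue = sent = dropped = busy_ticks = 0
    for burst in arrivals_per_tick:
        queue += burst
        # one packet can occupy the wire; BUF_LIMIT more can wait
        overflow = max(0, queue - (BUF_LIMIT + 1))
        dropped += overflow
        queue -= overflow
        if queue:
            queue -= 1   # egress serializes one packet this tick
            sent += 1
            busy_ticks += 1
    return sent, dropped, busy_ticks

# A 6-packet burst into a 3-packet buffer: 1 leaves at once, 3 wait
# (and pay latency for it), 2 are dropped.
print(run([6, 0, 0, 0, 0]))  # -> (4, 2, 4)
```

Averaged over the five ticks the link looks "80% utilized", but at every instant it was either saturated or idle, which is David's point.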
+1. Humans tend to think in terms of Mbit/sec, when a saner interval to
think about is bits/ms or less. I tend to care about bits/20 ms as the
rightest number for human-perceptible latency. At the ms level, well, we
are so far from that.

I'd point over here at what the "bandwidth" is for a typical web
transaction with 50 ms of latency nowadays:

http://flent-fremont.bufferbloat.net/~d/broadcom_aug9.pdf

It's zero.

The mental image I have of the latest home routers is of one of these:

https://www.telegraph.co.uk/news/picturegalleries/howaboutthat/11164032/Jet-powered-VW-Beetle-that-goes-like-a-rocket.html

impacted into the side of a mountain.

>
> David Lang
>
> On Thu, 6 May 2021, Jason Iannone wrote:
>
> > It's not a short discussion, but I start with a comparison of circuit
> > and packet switching, usually with an accompanying drawing. There's a
> > physicist joke in here about assuming a frictionless environment, but
> > for the purpose of this explanation a circuit-switched path is
> > bufferless: circuit-switched networks are point to point, and bits are
> > transmitted at the same rate they are received. Packet switching
> > introduces nodes that support multiple-ingress, single-egress
> > transmission. In order to absorb transient bursts, a node holds onto
> > bits for a time while the egress interface works through the node's
> > ingress traffic. That hold time equates to additional latency. Every
> > node in a path may subject a flow's traffic to buffering, increasing
> > latency in transit based on its individual load.
> >
> > Jason
> >
> > On Tue, May 4, 2021 at 8:02 PM Livingood, Jason via Bloat <
> > [email protected]> wrote:
> >
> >> Like many of you, I have been immersed in bufferbloat discussions for
> >> many years, almost entirely within the technical community.
> >> Now that I am starting to explain latency & latency under load to
> >> internal non-technical folks, I have noticed some people don’t really
> >> understand “traditional” latency vs. latency under load (LUL).
> >>
> >> As a result, I am planning to experiment in some upcoming briefings
> >> and call traditional latency “idle latency” – a measure of latency
> >> conducted on an otherwise idle connection. And then try calling LUL
> >> either “active latency” or perhaps “working latency” (suggested by an
> >> external colleague – can’t take credit for that one) – to try to
> >> communicate that it is latency when the connection is experiencing
> >> normal usage.
> >>
> >> Have any of you here faced similar challenges explaining this to
> >> non-technical audiences? Have you had any success with alternative
> >> terms? What do you think of these?
> >>
> >> Thanks for any input,
> >>
> >> Jason
> >> _______________________________________________
> >> Bloat mailing list
> >> [email protected]
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>

--
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC
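To put numbers on Jason Iannone's per-hop hold-time point upthread: the delay a packet picks up at each node is the queue occupancy ahead of it divided by the egress rate, and those hold times add up along the path. A hedged sketch, with queue depths and link rates that are my own assumptions rather than anything measured in the thread:

```python
# Hedged worked example: per-hop queueing delay is queued bytes divided
# by egress rate; the hold times sum along the path. All figures below
# are assumed for illustration, not taken from the thread.

def queue_delay_ms(queued_bytes, egress_bps):
    """Time a packet waits behind queued_bytes at a given egress rate."""
    return queued_bytes * 8 / egress_bps * 1000

# Three hypothetical hops, each with 64 KB sitting in its buffer:
path = [
    (64_000, 1_000_000_000),  # 64 KB queued at a 1 Gbit/s core hop
    (64_000, 100_000_000),    # 64 KB queued at a 100 Mbit/s access hop
    (64_000, 10_000_000),     # 64 KB queued at a 10 Mbit/s uplink
]
total = sum(queue_delay_ms(q, r) for q, r in path)
# per-hop: 0.512 ms, 5.12 ms, 51.2 ms -> ~56.8 ms added end to end
```

The same 64 KB of standing queue is nearly free at 1 Gbit/s and ruinous at 10 Mbit/s, which is why the slowest hop's buffer dominates the "working latency" Livingood is trying to name.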
