It seems to me that a 2+ second delay is far too high. Even if it happens only occasionally, users may set up their systems to assume it can happen and compensate by adding their own buffering at the endpoints, thereby reducing embarrassing glitches. Maybe this explains those long awkward pauses you commonly see when TV interviewers try to have a conversation with someone at a remote site via Zoom, Skype, et al.
In the early Internet days we assumed there would be a need for multiple types of service, such as "bulk transfer" and "interactive", similar to analogs in non-electronic transport systems (e.g., Air Freight versus Container Ship). The "Type Of Service" field was put in the IP header as a placeholder for such mechanisms to be added to networks in the future.
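As a concrete aside: that old ToS byte is still there, now interpreted as the Differentiated Services field, and on most systems an application can set it per-socket. A minimal sketch in Python, assuming a Linux-style socket API (the value 46 is the conventional "Expedited Forwarding" DSCP used for low-latency traffic):

```python
import socket

# DSCP "Expedited Forwarding" (46) is a conventional low-latency marking.
# The ToS byte carries the 6-bit DSCP in its upper bits, so shift left by 2.
DSCP_EF = 46
tos = DSCP_EF << 2  # 0xB8

# Mark a UDP socket's outgoing packets with this DSCP value.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Read the value back to confirm the kernel accepted it.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
s.close()
```

Whether any network along the path actually honors the marking is, of course, exactly the open question above.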
Of course, if network capacity were truly unlimited, there would be no need now to provide different types of service. But these latency numbers suggest that users' traffic demands still sometimes exceed network capacities. Some network traffic is associated with interactive uses, while other traffic is doing tasks such as backups to some cloud. Treating them uniformly seems like bad engineering as well as bad policy.
I'm still not sure whether or not "network neutrality" regulations would preclude offering different types of service, assuming networks even implement the technical mechanisms for such functionality.
Jack

On 2/27/24 14:00, rjmcmahon via Nnagain wrote:
> Interesting blog post on the latency part at
> https://broadbandbreakfast.com/untitled-12/.
>
> Looking at the FCC draft report, page 73, Figure 24 – I find it sort of
> ridiculous that the table describes things as "Low Latency Service"
> available or not. That is because they seem to really misunderstand the
> notion of working latency. The table instead seems to classify any
> network with idle latency <100 ms as low latency – which, as Dave and
> others close to bufferbloat know, is silly. Lots of the networks
> classified in this report as low latency would in fact have working
> latencies of 100s to 1,000s of milliseconds – far from low latency.
>
> I looked at FCC MBA platform data from the last 6 months, and here are
> the latency-under-load stats, 99th percentile, for a selection of ten
> ISPs:
>
> ISP A  2470 ms
> ISP B  2296 ms
> ISP C  2281 ms
> ISP D  2203 ms
> ISP E  2070 ms
> ISP F  1716 ms
> ISP G  1468 ms
> ISP H   965 ms
> ISP I   909 ms
> ISP J   896 ms
>
> Jason

It does seem like there is a lot of confusion around idle latency vs working latency. Another common error is to conflate round trip time with two "one-way delays." OWD and RTT are different metrics, and both have utility. (All of this, including working loads, is supported in iperf 2 – https://iperf2.sourceforge.io/iperf-manpage.html – so there is free tooling out there that can help.)

Bob

_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
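To make the OWD/RTT distinction and the 99th-percentile reporting concrete, here is a small self-contained Python sketch. The sample delays are synthetic, invented for illustration (not the FCC MBA data): the loaded direction has a bloated queue, the reverse direction is nearly idle, so halving the RTT badly misestimates both one-way delays.

```python
import random

# Hypothetical per-packet one-way delays (ms) on an asymmetric path:
# the loaded direction sits behind a bloated buffer, the reverse is idle.
random.seed(1)
fwd_owd = [200 + random.uniform(0, 2000) for _ in range(1000)]  # loaded
rev_owd = [10 + random.uniform(0, 20) for _ in range(1000)]     # idle
rtt = [f + r for f, r in zip(fwd_owd, rev_owd)]                 # RTT = sum

def percentile(samples, p):
    """Nearest-rank percentile, as commonly used in latency reporting."""
    ranked = sorted(samples)
    k = max(0, round(p / 100 * len(ranked)) - 1)
    return ranked[k]

p99_rtt = percentile(rtt, 99)
print(f"99th-pct RTT:     {p99_rtt:.0f} ms")
print(f"99th-pct fwd OWD: {percentile(fwd_owd, 99):.0f} ms")
print(f"99th-pct rev OWD: {percentile(rev_owd, 99):.0f} ms")
print(f"RTT/2 'OWD':      {p99_rtt / 2:.0f} ms  (wrong for both directions)")
```

Measuring true OWD in practice requires synchronized clocks at both endpoints, which is part of why RTT is the more commonly reported metric even where OWD would be the more relevant one.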