> On 29 Sep, 2023, at 1:19 am, David Lang via Bloat 
> <bloat@lists.bufferbloat.net> wrote:
> 
> Dave T called out earlier that the rise of bittorrent was a large part of 
> the initial NN discussion here in the US. But a second large portion was a 
> money grab by ISPs, which thought they could hold up large paid websites 
> (Netflix, for example) for additional fees by threatening to make their 
> service less useful to their users (viewing their users as an asset to be 
> marketed to the websites, rather than as customers to be satisfied by 
> providing them access to the websites).
> 
> I don't know if a new round of "it's not fair that Netflix doesn't pay us for 
> the bandwidth to service them" would fall flat at this point or not.

I think there were three more-or-less separate concerns which have, over time, 
fallen under the same umbrella:


1:  Capacity-seeking flows tend to interfere with latency-sensitive flows, and 
the "induced demand" phenomenon means that increases in link rate do not in 
themselves solve this problem, even though they may be sold as doing so.

This is directly addressed by properly sized buffers and/or AQM, and even 
better by FQ and SQM.  It's a solved problem, so long as the solutions are 
actually deployed.  It's not usually necessary, for example, to specifically 
enhance service for latency-sensitive traffic if FQ does a sufficiently good 
job.  An increased link rate *does* enhance service quality for both 
latency-sensitive and capacity-seeking traffic, provided FQ is in use.
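
As an illustration of why FQ protects latency-sensitive traffic, here is a 
toy deficit-round-robin scheduler in Python.  The flow names and packet sizes 
are invented for the example, and real implementations such as fq_codel are 
rather more sophisticated, but the scheduling principle is the same:

    from collections import deque

    QUANTUM = 1500  # bytes of sending credit granted per round

    # Two illustrative flows sharing one link; packet sizes in bytes.
    flows = {
        "voip": deque([200, 200, 200]),   # sparse, latency-sensitive
        "bulk": deque([1500] * 20),       # capacity-seeking, deep queue
    }
    deficits = {name: 0 for name in flows}

    def drr_order():
        """Yield (flow, size) in the order a DRR scheduler transmits."""
        active = deque(flows)
        while active:
            name = active.popleft()
            queue = flows[name]
            deficits[name] += QUANTUM
            while queue and queue[0] <= deficits[name]:
                deficits[name] -= queue[0]
                yield name, queue.popleft()
            if queue:
                active.append(name)  # still backlogged: another round
            else:
                deficits[name] = 0   # idle flows accumulate no credit

    for i, (name, size) in enumerate(drr_order()):
        print(f"{i:2d}: {name} ({size} B)")
    # All three voip packets are sent at the front of the schedule, no
    # matter how deep the bulk queue is; the bulk flow still receives
    # all of the remaining capacity.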


2:  Swarm traffic tends to drown out conventional traffic, because congestion 
control algorithms try to be more-or-less fair on a per-flow basis, and swarm 
applications open a substantially larger number of parallel flows.  This also 
meant that subscribers running swarm applications impaired the service of 
subscribers who had nothing to do with them.
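
A toy calculation makes the imbalance concrete (the link rate and flow counts 
here are invented purely for illustration):

    # Assume an ideally per-flow-fair link shared by three subscribers.
    link_mbit = 100
    flow_counts = {"web user": 1, "video stream": 3, "swarm client": 60}

    total = sum(flow_counts.values())
    for user, n in flow_counts.items():
        share = link_mbit * n / total
        print(f"{user:12s}: {n:2d} flows -> {share:5.1f} Mbit/s")
    # The swarm client takes ~94 of the 100 Mbit/s, without any single
    # flow exceeding its "fair" per-flow share.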

FQ on a per-flow basis (see problem 1) actually amplifies this effect, and I 
think it was occasionally used as an argument for *not* deploying FQ.  ISPs' 
initial response was to outright block swarm traffic where they could identify 
it, which was then softened to merely throttling it heavily, before NN 
regulations intervened.  Usage quotas also showed up around this time, and were 
probably related to this problem.

This has since been addressed by several means.  ISPs may apply FQ on a 
per-subscriber basis (the same mechanism as per-flow FQ, but keyed on the 
subscriber) to prevent one subscriber's heavy traffic from degrading service 
for another.  Swarm applications nowadays tend to employ altruistic 
congestion control, which deliberately compensates for their large number of 
flows, and/or mark their flows with a scavenger-class DSCP such as CS1 or the 
Lower Effort (LE) codepoint.  Hence, swarm applications are no longer as 
damaging to service quality as they used to be.  Usage quotas, however, 
remain in use as a profit centre, to the point where an "unlimited" service 
is a rare and precious specimen in many jurisdictions.
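
As a minimal sketch, this is how an application might apply the LE mark 
(RFC 8622) to a socket in Python; whether the mark survives end-to-end 
depends on the path, so treat it as illustrative only:

    import socket

    LE_DSCP = 0x01               # RFC 8622 Lower Effort codepoint (000001)
    TOS_BYTE = LE_DSCP << 2      # DSCP occupies the upper six bits of TOS

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
    print("TOS byte now:", sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
    # Every packet the kernel sends on this socket now carries the LE
    # mark (path permitting).  For IPv6, the equivalent option is
    # IPV6_TCLASS on an AF_INET6 socket.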


3:  ISPs merged with media distribution companies, creating a conflict of 
interest in which the media side of the business wanted the internet side to 
actively favour "their own" media traffic at the expense of "the competition".  
Some ISPs began to actively degrade Netflix traffic, in particular by 
refusing to provision adequate peering capacity at the nodes through which 
Netflix traffic predominantly flowed, or by zero-rating (for the purposes of 
usage quotas) traffic from their own media empire while refusing to do the 
same for Netflix.

**THIS** was the true core of Net Neutrality.  NN regulations forced ISPs to 
carry Netflix traffic with reasonable levels of service, even though they 
didn't want to for purely selfish and greedy commercial reasons.  NN succeeded 
in curbing an anti-competitive and consumer-hostile practice, which I am 
perfectly sure would resume just as soon as NN regulations were repealed.

And this type of practice is just the sort of thing that technologies like L4S 
are designed to support.  The ISPs behind L4S actively do not want a technology 
that works end-to-end over the general Internet.  They want something that can 
provide preferential service within their own walled gardens.  That's why L4S 
is a NN hazard, and why they actively resisted all attempts to displace it with 
SCE.


All of the above were made more difficult to solve by the monopolistic nature 
of the Internet service industry.  It is actively difficult for Internet users 
to move to a truly different service, especially one based on a different link 
technology.  When attempts are made to increase competition, for example by 
deploying a publicly-funded network, the incumbents actively sabotage those 
attempts by any means they can.  Monopolies are inherently customer-hostile, 
and arguments based on market forces fail in their presence.

 - Jonathan Morton
