> On 10 Jul, 2021, at 2:01 am, Leonard Kleinrock <l...@cs.ucla.edu> wrote:
> 
> No question that non-stationarity and instability are what we often see in 
> networks.  And, non-stationarity and instability are both topics that lead to 
> very complex analytical problems in queueing theory.  You can find some 
> results on the transient analysis in the queueing theory literature 
> (including the second volume of my Queueing Systems book), but they are 
> limited and hard. Nevertheless, the literature does contain some works on 
> transient analysis of queueing systems as applied to network congestion 
> control - again limited. On the other hand, as you said, control theory 
> addresses stability head on and does offer some tools as well, but again, it 
> is hairy. 

I was just about to mention control theory.

One basic characteristic of Poisson traffic is that it is inelastic: the model 
assumes there is no control feedback whatsoever.  This means it can only be 
valid when both of the following are true:

1: The offered load is *below* the link capacity, for all links, averaged over 
time.

2: A high degree of statistical multiplexing exists.

If 1: is not true and the traffic is truly inelastic, then the queues will 
inevitably fill up and congestion collapse will result, as shown by ARPANET and 
early Internet experience in the 1980s; the solution was to introduce control 
feedback to the traffic, initially in the form of Van Jacobson's TCP congestion 
control (Tahoe, soon followed by Reno).  If 2: is not true, then the traffic 
cannot be approximated as Poisson arrivals, regardless of load relative to 
capacity, because the degree of correlation between arrivals is too high.
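
To illustrate criterion 1 with a toy model (all parameters invented, plain 
Python): an open-loop M/M/1 queue, via the Lindley recursion, keeps its waiting 
time bounded while the offered load stays below capacity, but the waiting time 
grows without limit once the load reaches or exceeds capacity, because nothing 
in the model pushes back.

    import random

    def mean_wait(arrival_rate, service_rate, n_packets=200_000, seed=1):
        """Open-loop M/M/1 queue (Poisson arrivals, no feedback), Lindley recursion."""
        rng = random.Random(seed)
        wait, total = 0.0, 0.0
        for _ in range(n_packets):
            service = rng.expovariate(service_rate)   # this packet's transmission time
            gap = rng.expovariate(arrival_rate)       # time until the next arrival
            wait = max(0.0, wait + service - gap)     # wait seen by that next arrival
            total += wait
        return total / n_packets

    # Offered load rho = 0.7, 0.95 and 1.05 against a link serving 1000 pkt/s:
    for rho in (0.7, 0.95, 1.05):
        print(f"rho={rho}: mean wait {mean_wait(rho * 1000, 1000) * 1000:.1f} ms")

At rho = 1.05 there is no steady state: the reported figure keeps climbing as 
n_packets is increased, which is exactly the collapse regime described above.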

Taking the iPhone introduction anecdote as an illustrative example, the 
measured utilisation of very close to 100% was a clear warning sign that the 
Poisson model was inappropriate, and that a control-theory approach was needed 
instead, to capture the feedback effects of congestion control.  The high 
degree of statistical multiplexing inherent in a major ISP's backhaul is 
irrelevant to that determination.
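
Even on the Poisson model's own terms, near-100% utilisation is where it 
predicts trouble: the M/M/1 mean time in system is T = 1/(mu - lambda), i.e. 
(1/mu)/(1 - rho), which diverges as rho approaches 1.  A quick sketch, using an 
invented service rate of 1000 pkt/s (roughly 12 Mbit/s at 1500-byte packets):

    # Mean time in system T = 1 / (mu - lambda) for an M/M/1 queue.
    mu = 1000.0                         # pkt/s (illustrative value only)
    for rho in (0.5, 0.9, 0.99, 0.999):
        lam = rho * mu
        print(f"rho={rho:5.3f}  mean delay = {1000.0 / (mu - lam):8.1f} ms")

That prints 2 ms, 10 ms, 100 ms and 1000 ms respectively: measuring utilisation 
at essentially 100% means the system has left the region where the model says 
anything useful at all.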

Such a model would have found that the primary source of control feedback was 
human users giving up in disgust.  However, different humans have different 
levels of tolerance and persistence, so this feedback was not enough to reduce 
the load to the point where the majority of users got a good service; instead, 
*all* users received a poor service and many received no usable service at 
all.  Introducing a technological control feedback, in the form of packet loss 
upon overflow of correctly-sized queues, improved service for everyone.
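
A minimal sketch of that technological feedback, assuming a plain tail-drop 
FIFO (the class name and the cap are mine, not any deployed implementation): 
the queue is capped at roughly a bandwidth-delay product's worth of packets, 
and an overflow becomes a dropped packet; a Reno-style sender treats that loss 
as the signal to halve its window.

    from collections import deque

    class TailDropQueue:
        """FIFO with a hard cap: overflow becomes packet loss, i.e. feedback."""
        def __init__(self, limit_pkts):
            self.q = deque()
            self.limit = limit_pkts     # roughly one bandwidth-delay product
            self.drops = 0

        def enqueue(self, pkt):
            if len(self.q) >= self.limit:
                self.drops += 1         # the loss tells the sender to back off
                return False
            self.q.append(pkt)
            return True

        def dequeue(self):
            return self.q.popleft() if self.q else None

The point of "correctly-sized" is that the drop arrives promptly; an oversized 
queue delays that signal, which is of course the problem this list is about.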

(BTW, DNS becomes significantly unreliable once the RTT reaches about 1-2 
seconds, due to protocol timeouts, and that unreliability is inherited by every 
application that relies on DNS lookups.  Merely reducing the delays 
consistently below that threshold would have improved perceived reliability 
markedly.)
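
For a sense of why that threshold exists, a crude sketch of a stub resolver's 
retry logic; the 2-second per-try timeout and three tries are assumed round 
numbers, since real resolvers vary:

    import random

    def dns_lookup(rtt_s, loss, tries=3, per_try_timeout_s=2.0, seed=0):
        """Each try waits up to the timeout for a reply, then retransmits."""
        rng = random.Random(seed)
        elapsed = 0.0
        for _ in range(tries):
            if rtt_s <= per_try_timeout_s and rng.random() > loss:
                return elapsed + rtt_s          # reply arrives inside this try's window
            elapsed += per_try_timeout_s        # give up on this try, send again
        return None                             # the application sees a hard failure

    for rtt in (0.1, 1.0, 1.5, 2.5):
        print(f"RTT {rtt}s ->", dns_lookup(rtt, loss=0.2))

Retries cope with packet loss, but once the RTT exceeds the per-try timeout 
every attempt is abandoned before the reply can arrive, so the lookup fails no 
matter how many retries are allowed.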

Conversely, when talking about the traffic on a single ISP subscriber's 
last-mile link, the Poisson model has to be discarded because criterion 2 is 
false.  The number of concurrent flows going to even a family household is 
probably in the low dozens at best.  A control-theory approach can also work 
here.
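
A back-of-the-envelope check on criterion 2, with all rates and on/off times 
invented: aggregate two dozen bursty on/off flows and look at the 
variance-to-mean ratio of packet counts per 100 ms bin.  Poisson arrivals would 
give a ratio near 1; a household-sized mix comes out far burstier.

    import random

    def dispersion(n_flows, t_end=2000.0, bin_s=0.1, seed=3):
        """Variance-to-mean ratio of per-bin packet counts; Poisson gives ~1."""
        rng = random.Random(seed)
        bins = [0] * int(t_end / bin_s)
        for _ in range(n_flows):
            t = rng.uniform(0.0, 10.0)
            while t < t_end:
                on_end = min(t + rng.expovariate(1.0), t_end)   # ~1 s bursts
                while t < on_end:
                    bins[min(int(t / bin_s), len(bins) - 1)] += 1
                    t += 0.01                                   # 100 pkt/s while sending
                t = on_end + rng.expovariate(0.1)               # ~10 s of think time
        mean = sum(bins) / len(bins)
        var = sum((b - mean) ** 2 for b in bins) / len(bins)
        return var / mean

    print("24 on/off flows:", round(dispersion(24), 1))   # well above 1 => not Poisson

With correlation that visible, delay predictions built on Poisson arrivals will 
tend to badly underestimate both the queueing delay and its variance on such a 
link.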

 - Jonathan Morton