I will not be gentle here. The authors deserve my typical peer-review
feedback as an expert in the field of wireless protocols and congestion.
(Many of you on the list are as well, I know, and may have different
reviews.) But I'm very troubled by this paper's claims. It's technically
interesting, but seriously flawed, enough that I would send it back for more
work before publication. (Not that my opinion matters these days.)

Here is a separate perspective from me on the paper.

1) There is a problem in the very wording of the paper's title. WiFi is not
at all a "time varying wireless link." Nor is it obvious that a time-varying
link is even a good approximate model of WiFi LANs. What do I mean here?

  a. WiFi is not a link. In its typical (non-peer-to-peer) deployment it is
a hub multiplexed among many wireless links that share the same spatial
channel but follow different paths.
  b. The temporal behavior of WiFi's shared spatial wireless channel is not
modeled by a single scalar variable called "speed" or "error rate" varying
over some range over time.
  c. Congestion is typically queueing delay on a shared FIFO queue. In the
AP-STA operation described, when delays happen they are not at all
characterized by a single shared FIFO queue. In fact each packet travels
twice through the air, each time with a highly correlated temporal
distribution, and each packet travels through two FIFO queues, plus a
strange CSMA exponential-backoff queue (a toy sketch follows below). This is
NOT congestion in any real sense.
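
To make (c) concrete, here is a minimal toy sketch, entirely my own
construction and not the paper's model: a packet sent station-to-station
waits in two FIFOs and passes through the CSMA/CA exponential-backoff
"queue" twice, once per hop through the air. The slot time and contention
window constants are rough 802.11 DCF values; the FIFO waits and the
channel-busy probability are illustrative assumptions, not measurements.

import random

SLOT_US = 9                 # 802.11 OFDM slot time, microseconds
CW_MIN, CW_MAX = 15, 1023   # DCF contention window bounds

def backoff_wait_us(busy_prob):
    # Slots spent in DCF backoff, doubling the contention window each
    # time the channel is sensed busy (probability busy_prob, assumed).
    cw, waited = CW_MIN, 0
    while True:
        waited += random.randint(0, cw) * SLOT_US
        if random.random() > busy_prob:      # won the channel, transmit
            return waited
        cw = min(2 * cw + 1, CW_MAX)         # deferred: double the window

def sta_to_sta_delay_us(fifo_sta_us, fifo_ap_us, busy_prob):
    # Two FIFO waits plus two air acquisitions; the same contention level
    # (busy_prob) drives both hops, so the two waits are correlated.
    return (fifo_sta_us + backoff_wait_us(busy_prob) +
            fifo_ap_us + backoff_wait_us(busy_prob))

samples = sorted(sta_to_sta_delay_us(500, 2000, 0.4) for _ in range(10000))
print("median %d us, p99 %d us" % (samples[5000], samples[9900]))

Nothing in that composite delay behaves like the single shared FIFO that a
congestion signal is supposed to be cleanly measuring.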

2) The paper doesn't present any data whatsoever regarding actual observed
channel behaviors, or even actual observed effects.
   a. Indoor propagation of OFDM signals is complicated. I've done actual
measurements, and continue to carry them out. But many others have as well.
The sources of variability over time, and the time constants of that
variability, are not well characterized in the literature at all. My dear
friend Ted Rappaport is an expert on *outdoor* microwave and mmWave
propagation, and has done lots of measurements there. But not indoor, where
such things as rotating fans, moving people, floor and ceiling elements,
etc. all affect the propagation of OFDM signals in ways that do vary, but
not according to any model that has been characterized sufficiently to
build, say, an ns2 simulation.
   b. The indoor behavior of signals at the MAC layer is highly variable due
to many effects, not all physical (for example, microwave noise that affects
the time spent waiting for a "clear" channel before a station can transmit;
this can vary a lot). Also, in a multi-unit dwelling or an enterprise
office/campus, other WiFi traffic causes delay at the MAC layer that is
non-trivial. The problem here is that this "interference" (not radio
interference at all, but MAC-layer variability) is not slowly varying in any
sense (see the sketch after this list). The idea that this is modelable by a
congestion control mechanism of any sort is not clear.
   c. Driving all of this is the mix of application traffic in a "local
area" (the physical region around the access point, and the upstream network
to which the access point connects). Not all of this traffic is anything
like a simple distribution. In fact, it's time varying across many time
scales. For example, Netflix video is typically TCP with controlled bursts
(buffer filling) separated by long, relatively quiet periods. These bursts
can use up all available airtime. In contrast, web traffic for one "page"
often involves many independent HTTP streams (or soon HTTP/3-over-UDP
streams being rolled out at scale by Google on all its services) involving
tens or even hundreds of distinct remote sites, where response time is
critical (lag under load is unacceptable).
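
Here is the kind of toy illustration I mean for (b), again my own
construction with made-up numbers: let a neighbor's buffer-filling burst
flip the channel-busy probability between roughly 10% and 80%, and watch the
per-packet channel-access delay jump by an order of magnitude within
fractions of a second. Nothing here is slowly varying.

import random

SLOT_US, CW_MIN, CW_MAX = 9, 15, 1023

def access_delay_us(busy_prob):
    # Time spent in CSMA/CA exponential backoff before the channel is won.
    cw, waited = CW_MIN, 0
    while True:
        waited += random.randint(0, cw) * SLOT_US
        if random.random() > busy_prob:
            return waited
        cw = min(2 * cw + 1, CW_MAX)

# Assumed neighbor pattern: a 200 ms burst (~80% busy) every second,
# otherwise ~10% busy. Sample one packet every 100 ms.
for t_ms in range(0, 2000, 100):
    busy = 0.8 if (t_ms % 1000) < 200 else 0.1
    print("t=%4d ms  busy=%.1f  access delay=%6d us"
          % (t_ms, busy, access_delay_us(busy)))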

3) The paper alludes to, but doesn't really characterize, the issue of
"fairness" very well. Fairness isn't Harrison Bergeron-style exact matching
of bits delivered among all pairs. Instead, it really amounts to the
allocation of latency degradation (due to excess queueing) among independent
applications sharing the medium. In other words, it is more like
"non-starvation", except that the applications themselves may actually back
off their load when resources are reduced, to be "friendly". (A toy
illustration of the distinction follows below.)

I am afraid that this pragmatic issue, the real goal of congestion control,
is poorly discussed in the paper, yet it is the crucial measure of a good
congestion control scheme. Throughput is entirely secondary to avoiding
starvation, unless the starvation can be proved to be inherent in the load
presented.
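
A toy numerical illustration of that distinction, with flow names, rates,
and latency budgets entirely made up by me: by a throughput-matching measure
(Jain's index) the allocation below looks terribly unfair, yet the real
failure is that the shared queueing delay starves the latency-sensitive
flows.

flows = [
    # (name, achieved Mbit/s, added queueing delay ms, tolerable delay ms)
    ("netflix-burst", 40.0, 150.0, 2000.0),  # buffer filling: delay-tolerant
    ("web-page",       1.5, 150.0,  100.0),  # lag under load: intolerable
    ("voip",           0.1, 150.0,   50.0),
]

rates = [mbps for _, mbps, _, _ in flows]
jain = sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))
print("Jain throughput-fairness index: %.2f (looks 'unfair')" % jain)

for name, _, delay_ms, budget_ms in flows:
    verdict = "OK" if delay_ms <= budget_ms else "STARVED (budget blown)"
    print("%-13s  +%.0f ms queueing -> %s" % (name, delay_ms, verdict))

Equalizing the bit counts would not rescue the web page or the VoIP call;
bounding the shared queueing delay would, even with wildly unequal
throughputs.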


Now, I will say the mechanism presented may well be quite useful, but I
think such mechanisms should not just be presented in the technical
literature *as if it were obvious that they are useful*, at least in some
typical real-world situations.

In other words, before launching into solving a problem, one needs to research 
and characterize the problem being solved. Preferably this research will 
produce good experimentally valid models.

We saw back in the early 1970's a huge volume of theoretical work from some
famous people (Bob Gallager of MIT is a good example) in which packet
networks were evaluated under Poisson arrival loads and asserted to be good.
It turns out that there are NO real-world networks that have anything like
Poisson arrival processes. The only reason Poisson arrival processes are
interesting is that they are mathematically trivial to analyze in closed
form, without simulation.
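
The point is easy to demonstrate for yourself. The sketch below, my own
construction, feeds one FIFO server the same average load twice, once with
Poisson arrivals and once with arrivals in back-to-back clumps, using
Lindley's recurrence for the waiting time. The M/M/1 closed form predicts a
mean wait of rho/(mu - lambda) = 0.8/(1 - 0.8) = 4 time units; the clumped
trace blows well past that at the identical average rate. The clump size of
20 is an arbitrary illustrative choice.

import random

def mean_wait(arrivals, service_mean=1.0):
    # Lindley's recurrence: W[n+1] = max(0, W[n] + S[n] - A[n+1]).
    wait, total, t_prev = 0.0, 0.0, 0.0
    for t in arrivals:
        svc = random.expovariate(1.0 / service_mean)
        wait = max(0.0, wait + svc - (t - t_prev))
        total += wait
        t_prev = t
    return total / len(arrivals)

def poisson_arrivals(n, rate):
    t, out = 0.0, []
    for _ in range(n):
        t += random.expovariate(rate)
        out.append(t)
    return out

def clumped_arrivals(n, rate, clump=20):
    # Same long-run rate, but packets arrive in back-to-back clumps.
    t, out = 0.0, []
    while len(out) < n:
        t += random.expovariate(rate / clump)   # gap between clumps
        for _ in range(clump):
            out.append(t)
            t += 0.001                          # nearly back-to-back
    return out[:n]

rate = 0.8   # utilization ~0.8 with unit mean service time
print("Poisson mean wait: %5.1f" % mean_wait(poisson_arrivals(50000, rate)))
print("Clumped mean wait: %5.1f" % mean_wait(clumped_arrivals(50000, rate)))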

But work on time-shared operating system schedulers in the 1960's (at MIT,
in the Multics project, but also at Berkeley and other places) had already
demonstrated that user requests are not at all Poisson. In fact, they are so
far from Poisson that any scheduler that assumed Poisson arrivals was
dreadful in practice. Adding epicycles to Poisson arrivals fixed nothing,
but produced even richer "closed form" solutions, and a vast literature of
research results in departments focused on scheduling theory around the US.

The same has been true of Gallager and his theory students. Poisson random
arrivals infest the literature, while measurement-driven, practical research
in networking has been despised.

It's time to focus on the science of actual, real networks, wireless ones in
the real world, and on simulations validated against real-world situations
(as scientists do when they have to model the real world).

I'm very, very sad to see this kind of publication, which is not science,
but just a mathematical game played on a hunch about wireless behavior, a
hunch not based on measurements or characteristic applications. In contrast,
the reality-centered work being done by people in the bufferbloat project,
while not so academically abstract, is the state of the art.

A proper title would be "A random congestion control method on an imaginary
artificial network that might, if we are lucky, be somewhat like a WiFi
network, but honestly we never actually looked at one in the wild".
