On 16 Mar, 2011, at 10:07 pm, Jim Gettys wrote:

> each cellular link is made more efficient through multi-user diversity gain 
> that exploits fading channel peaks of independent users while still utilizing 
> the link fully provided there are enough users - my point is, such gains 
> dilute when a user enjoying a channel peak doesn't have data waiting in his 
> buffer at that time... It helps to keep users' buffers non-empty from this 
> perspective..."
> 
> As I pointed out to them, it may (or may not) be that things just work
> out well; if the channel is busy, you'll have more time for
> aggregation of packets to naturally occur.  We don't need to run the
> channel efficiently when the channel isn't saturated.  Whenever the
> air isn't busy, it doesn't matter if we don't bother to aggregate.

So, what they're saying is that if I'm on a moving train (with varying 
relationships between steel cages, 25 kV pylons and granite rockfaces - no 
exaggeration in Helsinki), there are times when my reception drops out, and the 
tower will use those times to preferentially serve everyone else, returning to 
serve me when my personal reception conditions improve.  I can grok that.

What's not entirely clear is the timescale for this.  It is most likely
seconds or milliseconds, though I'm not sure which.  But it could
conceivably be sub-millisecond (which would look like random packet
loss), and the amount of buffering I've observed on 3G suggests that
provision is made for minutes of reduced connectivity - which is far
too much.

It is of course worth remembering that a few seconds' worth of buffering at the 
network's highest theoretical speed rating is most likely several minutes' 
worth of solid GPRS traffic.  If my train goes behind a rock and there's only a 
2G signal on the other side of it, that's a serious problem.
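
To put rough numbers on that (the rates here are illustrative, not
measurements from any particular network) - in Python:

# Illustrative figures only: a nominal 7.2 Mbit/s HSPA peak rating
# versus a typical ~40 kbit/s GPRS rate.
hspa_peak = 7.2e6 / 8          # bytes per second
gprs_rate = 40e3 / 8           # bytes per second

buffered = 5 * hspa_peak       # five seconds of buffer at the peak rating
drain_time = buffered / gprs_rate

print(f"{buffered/1e6:.1f} MB buffered; {drain_time/60:.0f} min over GPRS")
# -> 4.5 MB buffered; 15 min over GPRS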

There is another use-case worth considering: people who use "wireless broadband" while 
sitting still, whether that's in a cafe, a hotel room, or at home.  I have been 
known to use my tethered 3G phone as a primary Internet connection, when for 
whatever reason a wired link was not available.  In that case I could put the 
phone up on the windowsill, well above street level, and for the most part 
would expect a clean connection to the tower.  Under those circumstances I saw 
practically no random packet loss, but until I tweaked my setup in some 
non-standard ways the connection was often very difficult to use because the 
latency would grow out of control.

It is thus quite understandable that Apple disables updating or
downloading particularly large 'apps' over the air.

For the benefit of the 3G folks, here are some helpful axioms to discuss:

1) Buffering more than a couple of seconds of data (without employing
AQM) is unhelpful, and will actually increase network load without
increasing goodput.  Unless there is a compelling reason to do
otherwise, you should aim to buffer less than a second's worth.

This is because an oversized buffer delays congestion and packet-loss
feedback to existing flows, and makes new flows harder to start.  After
about 3 seconds without acknowledgements, most TCP implementations will
time out and retransmit - regardless of whether the packets were
physically lost, or are simply languishing in a multi-megabyte buffer
somewhere.
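
As a back-of-the-envelope illustration (the buffer and link figures are
hypothetical, and 3 seconds is just the classic initial retransmission
timeout):

# Hypothetical figures: a 2 MB buffer draining at 1 Mbit/s.
buffer_bytes = 2e6
link_rate = 1e6 / 8                      # bytes per second

queue_delay = buffer_bytes / link_rate   # ~16 s of queuing delay
rto = 3.0                                # rough initial TCP timeout

if queue_delay > rto:
    # The sender times out and retransmits packets that are still
    # sitting in the queue, so the same data crosses the link twice:
    # more load, no extra goodput.
    print(f"{queue_delay:.0f} s queue vs {rto:.0f} s RTO: spurious resends")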

2) Buffering less than some threshold of data causes link under-utilisation.  
The threshold depends on link data rate, the length of pauses due to 
propagation conditions, and the round-trip delay, and to a lesser extent on the 
congestion control algorithm employed by the sending host.
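
For a rough sense of where that threshold sits, the classic rule of
thumb is one bandwidth-delay product, plus headroom for the pauses (all
the numbers below are made up for illustration):

# Rule of thumb: buffer at least one bandwidth-delay product (BDP) so
# a single TCP flow doesn't run the queue dry during its cwnd dips.
link_rate = 2e6 / 8      # bytes per second, at a nominal 2 Mbit/s
rtt = 0.150              # 150 ms round-trip time, plausible for 3G
pause = 0.050            # a 50 ms outage due to propagation conditions

bdp = link_rate * rtt                 # one BDP
threshold = bdp + link_rate * pause   # headroom to ride out the pause

print(f"BDP ~{bdp/1e3:.1f} kB; with pause headroom ~{threshold/1e3:.1f} kB")
# -> BDP ~37.5 kB; with pause headroom ~50.0 kB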

2a) On a lightly loaded network, link under-utilisation does not matter,
as the buffer will often become empty anyway, so you should aim to
minimise latency.  On a heavily loaded one, link utilisation does
matter.

3) The number of packets that "a couple of seconds" represents will vary 
according to the link speed - which in a wireless network is strongly 
time-varying.

So even if you know the network, you cannot set the buffer length to a
fixed value.  This is the problem eBDP is designed to cope with; a rough
sketch follows.  Because the link bandwidth also differs on a per-user
basis, you'll need to vary the buffer size for each user.  But see
below...
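
Here is a loose sketch of the eBDP idea as I understand it - track the
mean per-packet service time and cap the queue at however many packets
fit inside a delay target.  The class name, parameters and smoothing
constants are my own guesses, not the published algorithm:

class EbdpQueueLimit:
    def __init__(self, target_delay=0.2, max_packets=400, alpha=0.1):
        self.target_delay = target_delay  # allow ~200 ms of queuing
        self.max_packets = max_packets    # hard upper bound
        self.alpha = alpha                # EWMA smoothing weight
        self.mean_service = 0.01          # initial guess: 10 ms/packet
        self.limit = max_packets

    def on_packet_sent(self, service_time):
        # Smooth the observed per-packet transmission time, which on a
        # wireless link rises and falls with the modulation rate...
        self.mean_service += self.alpha * (service_time - self.mean_service)
        # ...and recompute how many packets fit in the delay budget.
        self.limit = max(1, min(int(self.target_delay / self.mean_service),
                                self.max_packets))

A base station would presumably run one of these per subscriber queue,
since each user sees a different service rate.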

4) AQM can let you use oversized buffers without destroying the user experience.

The specific features required are a) communication of congestion state
to the client and server, e.g. via packet drop or (preferably) ECN; and
b) re-ordering of packets so that one flow does not starve another,
e.g. DNS packets can overtake HTTP packets, and packets belonging to a
light user can bypass those belonging to a heavy user.

RED is the traditional solution to requirement a), but SFB may be a
better one.  SFQ is a good way to implement b), and should dovetail
nicely with SFB.
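
To make that concrete, here is a toy flow-queueing scheduler in the SFQ
spirit, with a crude RED-flavoured early drop bolted on.  It bears no
internal resemblance to the real Linux qdiscs - it is just the shape of
the idea, and pkt.flow_id stands in for a hash of the usual 5-tuple:

import random
from collections import deque

NUM_BUCKETS = 16
MIN_TH, MAX_TH = 5, 20    # per-bucket thresholds in packets (made up)

queues = [deque() for _ in range(NUM_BUCKETS)]
next_bucket = 0

def enqueue(pkt):
    # Hash each flow into its own bucket, so a light flow (e.g. DNS)
    # never waits behind a heavy flow's backlog.
    q = queues[hash(pkt.flow_id) % NUM_BUCKETS]
    depth = len(q)
    if depth >= MAX_TH:
        return False      # hard drop: this flow's bucket is full
    if depth > MIN_TH and \
       random.random() < (depth - MIN_TH) / (MAX_TH - MIN_TH):
        return False      # early drop (or ECN mark) as a congestion signal
    q.append(pkt)
    return True

def dequeue():
    # Round-robin over non-empty buckets: every active flow gets a
    # turn, so heavy users share spare capacity instead of hogging it.
    global next_bucket
    for i in range(NUM_BUCKETS):
        q = queues[(next_bucket + i) % NUM_BUCKETS]
        if q:
            next_bucket = (next_bucket + i + 1) % NUM_BUCKETS
            return q.popleft()
    return None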

Axiom 4 is also particularly helpful to wired broadband providers, who have 
been known to complain about a relatively small number of heavy users who 
(allegedly) manage to starve lighter users of bandwidth.  Their existing 
solution seems to be to "charge and discourage" the heavy users.  The more 
intelligent solution is to make the network pay attention to the lighter users, 
while allowing the heavy users to occupy a fair share of the spare capacity.  
It also makes for better PR.

 - Jonathan
