On May 13, 2011, at 1:03 PM, Kevin Gross wrote:

> Do we think that bufferbloat is just a WAN problem? I work on live media 
> applications for LANs and campus networks. I'm seeing what I think could be 
> characterized as bufferbloat in LAN equipment. The timescales on 1 Gb 
> Ethernet are orders of magnitude shorter and the performance problems caused 
> are in many cases a bit different but root cause and potential solutions are, 
> I'm hoping, very similar.

Bufferbloat is most noticeable on WANs, because they have longer delays, but 
yes, LAN equipment does the same thing. It shows up as extended delay or as an 
increased loss rate. A lot of LAN equipment has very shallow buffers due to 
cost (LAN markets are very cost-sensitive). One myth about bufferbloat is that 
a reasonable solution is to make the buffer shallow; it isn't, because when the 
queue fills you now have an increased loss rate, which shows up as 
timeout-driven retransmissions. You really want a deep buffer (for bursts and 
temporary surges) that you keep shallow using AQM techniques.
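To make that concrete, here is a toy RED-style sketch (Python; the thresholds, 
EWMA weight, and class name are all made-up illustrative values, not tuning 
advice): the physical buffer stays deep so bursts are absorbed, but 
probabilistic early drops keep the *average* standing queue shallow.

```python
import random
from collections import deque

class REDQueue:
    """Toy RED-style AQM: a deep buffer kept shallow by early, probabilistic drops."""

    def __init__(self, capacity=1000, min_th=50, max_th=150, max_p=0.1, w=0.002):
        self.q = deque()
        self.capacity = capacity  # deep physical buffer (absorbs bursts)
        self.min_th = min_th      # below this average depth: never drop early
        self.max_th = max_th      # above this average depth: always drop
        self.max_p = max_p        # early-drop probability as avg nears max_th
        self.w = w                # EWMA weight for the average queue depth
        self.avg = 0.0

    def enqueue(self, pkt):
        # Track a smoothed queue depth, so short bursts are tolerated
        # while a persistent standing queue draws drops.
        self.avg = (1 - self.w) * self.avg + self.w * len(self.q)
        if len(self.q) >= self.capacity or self.avg >= self.max_th:
            return False  # forced drop: buffer or average exhausted
        if self.avg >= self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False  # early drop: signals senders to slow down
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

The point of the sketch is the separation of concerns: `capacity` is sized for 
bursts, while `min_th`/`max_th` police the long-term standing queue.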

> Keeping the frame byte size small while the frame time has shrunk maintains 
> the overhead at the same level. Again, this has been a conscious decision not 
> a stubborn relic. Ethernet improvements have increased bandwidth by orders of 
> magnitude. Do we really need to increase it by a couple percentage points 
> more by reducing overhead for large payloads?

You might talk with the folks who chase LAN speed records. They generally view 
end-to-end jumbo frames as material to the achievement. It's not about changing 
the serialization delay, it's about reducing the amount of per-packet 
processing at the endpoints.
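On the serialization-delay side, the arithmetic is simple enough to check (a 
quick Python sketch; the helper name is mine, and the figures are just 
frame_bits / link_rate):

```python
def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# 1500-byte vs 9000-byte frames:
print(serialization_delay_us(1500, 1e9))    # ~12 us at 1 GbE
print(serialization_delay_us(9000, 1e9))    # ~72 us at 1 GbE
print(serialization_delay_us(9000, 10e9))   # ~7.2 us at 10 GbE
```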

> The cost of that improved marginal bandwidth efficiency is a 6x increase in 
> latency. Many applications would not notice an increase from 12 us to 72 us 
> for a Gigabit switch hop. But on a large network it adds up, some 
> applications are absolutely that sensitive (transaction processing, cluster 
> computing, SANs) and (I thought I'd be preaching to the choir here) there's 
> no way to ever recover the lost performance.

Well, the extra delay is solvable in the transport. The question isn't really 
what the impact on the network is; it's what the requirements of the 
application are. For voice, if a voice sample is delayed 50 ms, the jitter 
buffer in the codec absorbs it - microseconds are irrelevant. Video codecs 
generally keep at least three video frames in their jitter buffer; at 30 fps, 
that's about 100 milliseconds of acceptable variation in delay. 
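As a sketch of that arithmetic (Python; the class and its names are mine, not 
any real codec's API): holding N frames lets the decoder ride out delay 
variation up to N frame intervals before playout would stall.

```python
from collections import deque

class JitterBuffer:
    """Toy playout buffer: hold a few frames so delay variation up to
    frames_held * frame_interval_ms goes unnoticed by the decoder."""

    def __init__(self, frames_held=3, frame_interval_ms=1000 / 30):
        self.buf = deque()
        self.frames_held = frames_held
        self.frame_interval_ms = frame_interval_ms

    @property
    def absorbable_jitter_ms(self):
        # e.g. three frames at 30 fps is roughly 100 ms of tolerated variation
        return self.frames_held * self.frame_interval_ms

    def push(self, frame):
        self.buf.append(frame)

    def pop_for_playout(self):
        # Only release frames once the buffer is primed, so late arrivals
        # within the jitter budget never stall playout.
        if len(self.buf) >= self.frames_held:
            return self.buf.popleft()
        return None
```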

Where it gets dicey is in elastic applications (applications using transports 
with the characteristics of TCP) that are retransmitting or otherwise reacting 
in timeframes comparable to the RTT and the RTT is small, or in elastic 
applications in which the timeout-retransmission interval is on the order of 
hundreds of milliseconds to seconds (true of most TCPs) but the RTT is on the 
order of microseconds to milliseconds. In the former, a deep queue can build up 
and trigger transmissions that further build the queue; in the latter, a hiccup 
can have dramatic side effects. There is ongoing research on how best to do 
such things in data centers. My suspicion is that the right approach is 
something akin to 802.2 at the link layer, but with NACK retransmission - 
system A enumerates the data it sends to system B, and if system B sees a 
number skip it asks A to retransmit the indicated datagram. You might take a 
look at RFC 5401/5740/5776 for implementation suggestions. 
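To sketch what I mean by NACK retransmission (a toy in Python - this is my own 
structure and naming, not what those RFCs specify): the sender numbers and 
retains datagrams, and the receiver, on seeing a sequence-number skip, 
immediately asks for the missing one instead of waiting out a timeout.

```python
class NackSender:
    """Sender numbers and retains datagrams so it can retransmit on request."""

    def __init__(self):
        self.seq = 0
        self.sent = {}  # seq -> payload, retained for possible retransmission

    def send(self, payload):
        pkt = (self.seq, payload)
        self.sent[self.seq] = payload
        self.seq += 1
        return pkt

    def retransmit(self, seq):
        return (seq, self.sent[seq])


class NackReceiver:
    """Receiver watches sequence numbers; a skip triggers an immediate NACK."""

    def __init__(self):
        self.expected = 0
        self.delivered = []

    def receive(self, pkt):
        seq, payload = pkt
        nacks = list(range(self.expected, seq))  # any skipped numbers
        # (A real implementation would reorder-buffer and deduplicate;
        # this toy just delivers in arrival order.)
        self.delivered.append((seq, payload))
        self.expected = max(self.expected, seq + 1)
        return nacks
```

The win over timeout-driven recovery is that the loss is detected as soon as 
the next datagram arrives - on the order of a LAN RTT, not a retransmission 
timer of hundreds of milliseconds.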

> Kevin Gross
>  
> From: Dave Taht [mailto:[email protected]] 
> Sent: Friday, May 13, 2011 8:54 AM
> To: [email protected]
> Cc: Kevin Gross; [email protected]
> Subject: Re: [Bloat] Burst Loss
>  
>  
> 
> On Fri, May 13, 2011 at 8:35 AM, Rick Jones <[email protected]> wrote:
> On Thu, 2011-05-12 at 23:00 -0600, Kevin Gross wrote:
> > One of the principal reasons jumbo frames have not been standardized
> > is due to latency concerns. I assume this group can appreciate the
> > IEEE holding ground on this.
> 
> Thus far at least, bloaters are fighting to eliminate 10s of milliseconds
> of queuing delay.  I don't think this list is worrying about the tens of
> microseconds difference between the transmission time of a 9000 byte
> frame at 1 GbE vs a 1500 byte frame, or the single digit microseconds
> difference at 10 GbE.
> 
> Heh.  With the first iteration of the bismark project I'm trying to get to 
> where I have less than 30ms latency under load and have far larger problems 
> to worry about than jumbo frames. I'll be lucky to manage 10x that (300ms) 
> at this point. 
> 
> Not, incidentally that I mind the idea of jumbo frames. It seems silly to be 
> saddled with default frame sizes that made sense in the 70s, and in an age 
> where we will be seeing ever more packet encapsulation, reducing the header 
> size as a ratio to data size strikes me as a very worthy goal.
> 
> _______________________________________________
> Bloat mailing list
> [email protected]
> https://lists.bufferbloat.net/listinfo/bloat

