On 5 May 2011, at 17:49, Neil Davies <[email protected]> wrote:

> On the issue of loss - we did a study of the UK's ADSL access network back in 
> 2006 over several weeks, looking at the loss and delay that was introduced 
> into the bi-directional traffic.
> 
> We found that the delay variability (the bit left over after you've taken out 
> the effects of geography and line sync rates) was broadly the same over the 
> half dozen locations we studied - it was present all the time, at the same 
> level of variance - and that what did vary by time of day was the loss rate.
> 
> We also found - much to our surprise at the time, though we understand why 
> now - that loss was broadly independent of the offered load; we used a 
> constant data rate (with either fixed or variable packet sizes).
> 
> We found that loss rates were in the range 1% to 3% (which is what would be 
> expected from a large number of TCP streams contending for a limiting 
> resource).
> 
> As for burst loss, yes it does occur - but it could be argued that this is 
> more the fault of the sending TCP stack than of the network.
> 
> This phenomenon was well covered in the academic literature in the '90s (if I 
> remember correctly, folks at INRIA led the way) - it all comes down to the 
> nature of random processes and how you observe them.
> 
> Back-to-back packets see higher loss rates than packets more spread out in 
> time. Consider a pair of packets, back to back, arriving over a 1 Gbit/sec 
> link into a queue being serviced at 34 Mbit/sec. The first packet being 'lost' 
> is equivalent to saying that the first packet 'observed' the queue to be full - 
> the system's state is no longer a random variable - it is known to be full. The 
> second packet (let's assume it is also a full-sized one) 'makes an observation' 
> of the state of that queue about 12us later - but that is only about 3% of the 
> time it takes to service such a large packet at 34 Mbit/sec. The system has 
> not had any time to 'relax' back to anywhere near its steady state; it is 
> highly likely that it is still full.
> 
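A few lines of Python make the arithmetic in Neil's example concrete (the 1500-byte packet size is an assumption; the link rates are his):

```python
# Timescale sketch for the back-to-back example above.
# Assumed: full-sized 1500-byte packets; link rates as in the example.

PKT_BITS = 1500 * 8            # one full-sized packet, in bits

gap_s = PKT_BITS / 1e9         # spacing of back-to-back arrivals on a 1 Gbit/s link
service_s = PKT_BITS / 34e6    # time to service one such packet at 34 Mbit/s

print(f"inter-arrival gap: {gap_s * 1e6:.0f} us")     # 12 us
print(f"service time:      {service_s * 1e6:.0f} us") # 353 us
print(f"ratio:             {gap_s / service_s:.1%}")  # 3.4%
```

So the second packet re-samples the queue after only ~3% of one service time has elapsed - far too soon for the state to have changed.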
> Fixing this makes a phenomenal difference to the goodput (with the usual 
> delay effects that implies). We've even built and deployed systems with this 
> sort of engineering embedded (deployed as a network 'wrap') that mean that 
> end users can sustainably (days on end) achieve effective throughput better 
> than 98% of the (transmission-media-imposed) maximum. What we had done is 
> make the network behave closer to the underlying statistical assumptions 
> made in TCP's design.
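Neil's correlated-loss argument can also be checked with a toy simulation: a single tail-drop FIFO fed by Poisson background traffic, with probe pairs injected 12us apart. Every parameter here is assumed for illustration (roughly 95% background load, a 20-packet buffer, 1500-byte packets at 34 Mbit/s); it is a sketch of the statistical effect, not the system he describes:

```python
import random

random.seed(7)

SERVICE = 353e-6            # service time of one 1500 B packet at 34 Mbit/s
CAP = 20                    # tail-drop queue capacity, in packets (assumed)
BG_RATE = 0.95 / SERVICE    # Poisson background traffic at ~95% load (assumed)
FULL = CAP * SERVICE        # backlog (seconds of work) at which arrivals drop

def advance(t, backlog, until):
    """Play background Poisson arrivals, and queue draining, up to `until`."""
    while True:
        dt = random.expovariate(BG_RATE)
        if t + dt >= until:
            return until, max(0.0, backlog - (until - t))
        t += dt
        backlog = max(0.0, backlog - dt)
        if backlog < FULL:          # room in the queue: accept the packet
            backlog += SERVICE
        # else: background packet is tail-dropped

def probe_pairs(gap, n_pairs=3000):
    """Inject probe pairs `gap` apart; return (unconditional loss rate of
    the first probe, P(second probe lost | first probe lost))."""
    t = backlog = 0.0
    first = both = 0
    for _ in range(n_pairs):
        t, backlog = advance(t, backlog, t + 0.05)   # space the pairs out
        lost1 = backlog >= FULL
        if not lost1:
            backlog += SERVICE
        t, backlog = advance(t, backlog, t + gap)
        lost2 = backlog >= FULL
        if not lost2:
            backlog += SERVICE
        first += lost1
        both += lost1 and lost2
    return first / n_pairs, both / max(first, 1)

p_loss, p_cond = probe_pairs(12e-6)   # second packet 12 us behind the first
print(f"unconditional probe loss     : {p_loss:.1%}")
print(f"loss given previous one lost : {p_cond:.1%}")
```

With these assumed parameters the conditional loss of the second probe should come out far above the unconditional rate - which is exactly the point: the first drop tells you the queue is full, and 12us is nowhere near long enough for that knowledge to decay.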

How did you fix this? What alters the packet spacing? The network or the host?

Sam
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat
