On Apr 7, 2012, at 4:54 AM, Neil Davies wrote:

> The answer was rather simple - calculate the amount of buffering needed
> to achieve say 99% of the "theoretical" throughput (this took some
> measurement as to exactly what that was) and limit the sender to that.

So what I think I hear you saying is that we need some form of ioctl
interface in the sockets library that will allow the sender to state the
rate it associates with the data (e.g., the video codec rate), and let TCP
calculate

                           f(rate in bits per second, pmtu)
     cwnd_limit = ceiling (--------------------------------)  + C
                                g(rtt in microseconds)

Where C is a fudge factor, probably a single-digit number, and f and g are
appropriate conversion functions.
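
As a minimal sketch of what that calculation might look like in C - taking
f and g to be the unit conversions that make the quotient the
bandwidth-delay product in packets, and assuming some hypothetical socket
option lets the application hand the stated rate to the stack:

    #include <math.h>
    #include <stdint.h>

    /* Cap on cwnd, in segments, for an application-stated rate.
     * rate_bps  - rate the sender associates with the data
     * pmtu      - path MTU in bytes
     * rtt_usec  - smoothed RTT in microseconds
     * fudge_c   - the small constant C from the formula above       */
    static uint32_t cwnd_limit(uint64_t rate_bps, uint32_t pmtu,
                               uint32_t rtt_usec, uint32_t fudge_c)
    {
        /* bytes that must be in flight to sustain rate_bps for one RTT */
        double bdp_bytes = ((double)rate_bps / 8.0)
                         * ((double)rtt_usec / 1e6);
        return (uint32_t)ceil(bdp_bytes / (double)pmtu) + fudge_c;
    }

For illustration only: a 5 Mb/s codec over a 60 ms path with a 1500-byte
PMTU would cap cwnd at ceil(37500/1500) + C = 25 + C segments.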

I suspect there may also be value in considering Jain's "Packet Trains"
paper. Something you can observe in a simple trace is that the doubling
behavior in slow start has the effect of bunching a TCP session's data
together. If I have two 5 Mb/s data exchanges sharing a 10 Mb/s pipe, it's
not unusual to observe one of the sessions dominating the pipe for a while
and then the other, for quite a long time. One of the benefits of per-flow
WFQ in the network is that it consciously breaks that up - it forces the
TCPs to interleave packets instead of bursts, which means that a downstream
device on a lower-bandwidth link sees packets arrive at what it considers a
more rational rate. It might be nice if, in its initial burst, TCP
consciously broke the initial window into two, three, four, or ten
individual packet trains, spaced some number of milliseconds apart, so that
their acknowledgements were similarly spaced and the resulting packet
trains in subsequent RTTs were relatively small.
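
A sketch of that spacing at the sender, again in C; send_segment() is a
hypothetical stand-in for the stack's transmit routine, and the train
count and gap are illustrative parameters, not anything TCP specifies
today:

    #include <unistd.h>   /* usleep() */

    extern void send_segment(int seqno);   /* hypothetical transmit hook */

    /* Break an initial window of iw segments into ntrains trains,
     * gap_ms apart, so the ACK clock comes back similarly spread out. */
    static void paced_initial_window(int iw, int ntrains, int gap_ms)
    {
        int per_train = (iw + ntrains - 1) / ntrains;  /* ceiling divide */
        int sent = 0;
        while (sent < iw) {
            for (int i = 0; i < per_train && sent < iw; i++)
                send_segment(sent++);              /* one train, back to back */
            if (sent < iw)
                usleep((useconds_t)gap_ms * 1000); /* gap before next train */
        }
    }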