The manufacturers would benefit from specific guidance -- if we could
say that the right value were 10 packets or 10kB or 100ms worth of
serialization delay, rounded up to an integer number of packets or
some such, they could implement it.
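As a sketch of what such guidance might look like if we had it (the 100 ms budget and 1500-byte MTU below are purely illustrative, not recommendations -- the whole point of this message is that we don't know the right numbers):

```python
# Hypothetical sizing rule: take a serialization-delay budget and round
# up to a whole number of packets.  All constants are illustrative.
import math

def buffer_packets(link_bps: float, delay_budget_s: float = 0.100,
                   mtu_bytes: int = 1500) -> int:
    """Packets that fit in delay_budget_s of serialization time, rounded up."""
    bytes_in_budget = link_bps / 8 * delay_budget_s
    return max(1, math.ceil(bytes_in_budget / mtu_bytes))

# On a 128-kb/s link, 100 ms of serialization is 1600 bytes -- 2 packets.
print(buffer_packets(128_000))
```

Note the tension already visible here: on slow links a time-based budget yields fewer packets than TCP is usually said to need.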

Trouble is, we don't know any such right value.

What we do know are things like:

* Even a single TCP flow on a very slow link will need a few packets
worth of buffer so that it can keep something in flight and so that
fast retransmit works.  Different values have been argued to be right
from this perspective, from 6 packets to 10.

* Multiple TCP flows likely need more buffer, but probably not much
more, because it becomes increasingly acceptable for fast retransmit
to fail and timeout to fire when there are more connections -- they're
probably going to keep the link busy anyway.

* Any of that is way too much from the POV of delay experienced by
interactive apps (interactive = user waiting, so includes web).
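The conflict between the first bullet and the last is plain arithmetic: a full droptail buffer adds delay equal to its size divided by the link rate, so even the "small" 6-10 packet buffers are painful on a slow link (numbers below assume 1500-byte packets on 128 kb/s, for illustration):

```python
# Worst-case queuing delay added by a full buffer: bytes / (bits-per-sec / 8).
def queue_delay_ms(buffer_bytes: int, link_bps: float) -> float:
    return buffer_bytes / (link_bps / 8) * 1000

# 6 packets of 1500 bytes on 128 kb/s -> 562.5 ms; 10 packets -> 937.5 ms.
for pkts in (6, 10):
    print(pkts, queue_delay_ms(pkts * 1500, 128_000))
```

Over half a second of added delay at the low end of the "TCP needs this much" range is already far outside what interactive apps tolerate.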

Taken together, these points imply that no single right value exists.  In
the absence of one, the best we could hope for is to give negative
guidance.  We do know that a 64-kB buffer on a 128-kb/s link serves no
purpose, and we could write that up and explain why.  We'll end up with
excessively conservative recommendations this way, but maybe that's OK.
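The 64-kB example checks out directly: a full 64-kB queue on a 128-kb/s link takes roughly four seconds to drain, far beyond any plausible RTT (taking 64 kB as 65536 bytes here):

```python
# Drain time of a full 64-kB buffer on a 128-kb/s link.
buffer_bytes = 64 * 1024        # 65536 bytes
link_Bps = 128_000 / 8          # 16000 bytes/sec
print(buffer_bytes / link_Bps)  # seconds to drain when full: ~4.1 s
```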

If we can't agree on specific guidance that people can use to size the
buffers and instead can only agree on general considerations like the ones
I listed above, we might be better off not issuing any guidance at all
in the spirit of doing no harm.  After all, if we can't translate that
into bytes (which a manufacturer ultimately must), why should we
expect others to do so?




On Wed, Jul 29, 2009 at 1:59 AM, james woodyatt <[email protected]> wrote:
> On Jul 29, 2009, at 10:34, Iljitsch van Beijnum wrote:
>>
>> The question is, what do we want to tell home gateway builders?
>
> We should tell them that forwarding IP flows from higher bandwidth links
> into much lower bandwidth links generally produces better results when
> queues are only large enough to handle common burst sizes without loss and
> no larger.  Unnecessarily large transmission queues may introduce
> persistently and perversely high path latency on fully loaded low-bandwidth
> links, producing severe negative effects on application throughput.
>
> Beyond that, I don't know what else we should say.  Maybe it would help to
> give some guidance for various common sub-IP links.
>
>
> --
> james woodyatt <[email protected]>
> member of technical staff, communications engineering
>
>
>



-- 
Stanislav Shalunov
BitTorrent Inc
[email protected]

personal: http://shlang.com
