Hi Sebastian,
see comments at the end.
On 06.04.21 at 08:31 Sebastian Moeller wrote:
Hi Eric,
thanks for your thoughts.
On Apr 6, 2021, at 02:47, Erik Auerswald <[email protected]> wrote:
Hi,
On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
all good questions, and interesting responses so far.
I'll add some details below, I mostly concur with your responses.
On Apr 5, 2021, at 14:46, Rich Brown <[email protected]> wrote:
Dave Täht has put me up to revising the current Bufferbloat article
on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
[...]
[...] while too-large buffers cause an undesirable increase in latency
under load (but decent throughput), [...]
With too-large buffers, even throughput degrades when TCP considers
a delayed segment lost (or DNS gives up because the answers arrive
too late). I do think there is such a thing as _too_ large for buffers, period.
Fair enough, timeouts could be changed if required ;) but I
fully concur that largish buffers require management to become useful ;)
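(Back-of-the-envelope, with made-up numbers: the worst-case queueing
delay of a drop-tail buffer is just its size divided by the drain rate,
easy to check in Python:

    # Hypothetical example: worst-case queueing delay of a full
    # drop-tail buffer; all numbers invented for illustration.
    buffer_bytes = 256 * 1024    # 256 KiB of buffering
    link_rate_bps = 10_000_000   # 10 Mbit/s uplink
    delay_s = buffer_bytes * 8 / link_rate_bps
    print(f"full-buffer delay: {delay_s * 1000:.0f} ms")  # ~210 ms

200+ ms of standing queue is already territory where interactive
traffic suffers badly.)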
The solution basically is large buffers with adaptive management that
works hard to keep the latency increase under load, and the throughput,
inside an acceptable "corridor".
I would prefer the word "sufficient" instead of "large."
If properly managed there is no upper end for the size; it might not be
used, though, no?
I concur that there is quite a usable range of buffer capacity when
considering the latency/throughput trade-off, and AQM seems like a good
solution for managing that.
I fear it is the only network-side mitigation technique?
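(For the archives, a minimal sketch of the CoDel idea, the canonical
AQM answer here: drop once the per-packet sojourn time has stayed above
a small target for a whole interval. This is a simplification of the
core heuristic from RFC 8289, not the full algorithm:

    # Simplified CoDel-style drop decision, loosely after RFC 8289.
    # A sketch of the core idea only: no sqrt control law, no state
    # machine for re-entering the dropping mode.
    TARGET = 0.005    # 5 ms of acceptable standing-queue delay
    INTERVAL = 0.100  # 100 ms window to let bursts drain

    class CodelSketch:
        def __init__(self):
            self.first_above_time = None  # when sojourn first exceeded TARGET

        def should_drop(self, sojourn_s, now_s):
            if sojourn_s < TARGET:
                self.first_above_time = None  # queue is fine, reset
                return False
            if self.first_above_time is None:
                self.first_above_time = now_s + INTERVAL
                return False
            return now_s >= self.first_above_time  # above TARGET for an INTERVAL

The point is that it reacts to queueing delay, not queue length, so the
buffer's raw size indeed stops mattering once such management is in
place.)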
My preference is to sacrifice throughput for better latency, but then
I have been bitten by too much latency quite often, and never by too
little throughput caused by small buffers. YMMV.
Yep, with speedtests being the killer application for fast end-user
links (still, which is sad in itself), manufacturers and ISPs are
incentivized to err on the side of buffers that are too large, so the
default buffering typically will not cause noticeable under-utilisation,
as long as nobody wants to run single-flow speedtests over a
geostationary satellite link ;). (I note that many/most speedtests
silently default to testing with multiple flows nowadays, with
single-stream tests being at least optional in some, which reduces the
expected buffering need.)
[...]
But, e.g., for traditional TCPs the expected amount of buffering
increases with the RTT of a flow
Does it? Does the propagation delay provide automatic "buffering" in the
network? Does the receiver need to advertise sufficient buffer capacity
(receive window) to allow the sender to fill the pipe? Does the sender
need to provide sufficient buffer capacity to retransmit lost segments?
Where are buffers actually needed?
At all those places ;) In the extreme, a single packet buffer should be
sufficient, but that places unrealistically high demands on the processing
capabilities at all nodes of a network and does not account for anything
unexpected (like another flow starting). And in all cases doing things smarter
can help: pacing is better at the sender's side (with "better" meaning
easier on the network), competent AQM is better at the bottleneck link, and at
the receiver something like TCP SACK (and the buffers required to make that
work) can help; all those cases work better with buffers. The catch is that
buffers solve important issues while introducing new issues that need fixing.
I am sure you know all this, but spelling it out helps me to clarify my
thoughts on the matter, so please just ignore if boring/old news.
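(To make the pacing point concrete, a toy calculation with invented
numbers: instead of bursting a full window, a paced sender spaces
packets at its rate estimate:

    # Toy sender-side pacing: space packets at the estimated bottleneck
    # rate instead of bursting a whole cwnd at once. Invented numbers.
    packet_bytes = 1500
    pacing_rate_bps = 50_000_000                # estimated rate, 50 Mbit/s
    gap_s = packet_bytes * 8 / pacing_rate_bps  # 0.24 ms between packets
    # One packet every gap_s seconds means the bottleneck queue never
    # has to absorb a burst, which is what "easier on the network" means.

)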
I am not convinced that large buffers in the network are needed for high
throughput of high RTT TCP flows.
See, e.g., https://people.ucsc.edu/~warner/Bufs/buffer-requirements for
some information and links to a few papers.
Thanks for the link, Erik, but BBR is not properly described there
("When the RTT creeps upward -- this taken as a signal of buffer
occupancy congestion"), and Sebastian also mentioned: "measures the
induced latency increase from those, interpreting to much latency as
sign that the capacity was reached/exceeded". BBR does not use
delay or its gradient as a congestion signal.
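(To spell out what BBRv1 does instead, a rough sketch after the
Cardwell et al. 2016 paper, with invented sample values, not the Linux
implementation:

    # BBRv1 keys off a measured bandwidth/RTT model, not off delay
    # increases. Sample values below are invented for illustration.
    delivery_rate_samples = [48e6, 50e6, 49e6]  # bits/s, from ACKed data
    rtt_samples = [0.022, 0.020, 0.025]         # seconds

    btl_bw = max(delivery_rate_samples)  # windowed-max bottleneck bandwidth
    rt_prop = min(rtt_samples)           # windowed-min propagation RTT
    bdp_bits = btl_bw * rt_prop          # estimated bandwidth-delay product

    pacing_gain = 1.0  # BBR cycles this around 1.0 to probe for bandwidth
    cwnd_gain = 2.0
    pacing_rate = pacing_gain * btl_bw     # send rate in bits/s
    cwnd_bytes = cwnd_gain * bdp_bits / 8  # cap on data in flight

Delay only enters indirectly, via the windowed-min RTT used to size the
BDP estimate.)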
Thanks, I think the bandwidth-delay product is still the worst-case
buffering required to allow 100% utilization with a single flow (a use case
that at least for home links seems legit; for a backbone link, probably not).
But in any case, if the buffers are properly managed, their maximum size will
not really matter, as long as it exceeds the required minimum ;)
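(The classic numbers behind that worst case, purely illustrative:
B = C * RTT for a single flow, and the Appenzeller et al. 2004 result
B = C * RTT / sqrt(n) once n flows share the link:

    # Rule-of-thumb buffer sizing: B = C * RTT for one flow, and
    # B = C * RTT / sqrt(n) for n flows (Appenzeller et al., 2004).
    # Illustrative numbers only.
    import math

    link_rate_bps = 100_000_000  # 100 Mbit/s home link
    rtt_s = 0.040                # 40 ms round-trip time

    bdp_bytes = link_rate_bps * rtt_s / 8
    print(f"single flow worst case: {bdp_bytes / 1024:.0f} KiB")  # ~488 KiB

    n_flows = 100
    print(f"{n_flows} flows: {bdp_bytes / math.sqrt(n_flows) / 1024:.0f} KiB")

)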
Nope, a BDP-sized buffer is not required to allow 100% utilization with
a single flow, because that depends on the congestion control in use.
For loss-based congestion control like Reno or Cubic this may be true,
but not necessarily for other congestion controls.
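One way to see the dependence on the congestion control, as a sketch
with invented numbers: Reno halves cwnd on loss, so it needs roughly one
BDP of buffer to keep the link busy while the queue drains; a
model-based sender does not:

    # Why loss-based CC wants ~1 BDP of buffer: Reno halves cwnd on loss.
    # With buffer == BDP, cwnd drops from 2*BDP (pipe + full queue) to
    # 1*BDP, which still exactly fills the pipe. Invented numbers.
    bdp = 500_000                    # bytes that fit "in the pipe"
    buffer = bdp                     # drop-tail buffer sized to 1 BDP
    cwnd_at_loss = bdp + buffer      # 2*BDP in flight when the queue fills
    cwnd_after_loss = cwnd_at_loss // 2
    assert cwnd_after_loss >= bdp    # link stays fully utilized
    # A smaller buffer leaves cwnd_after_loss < BDP -> idle link time.
    # A model-based CC (e.g. BBR) targets ~1 BDP in flight and gets full
    # utilization without relying on the buffer at all.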
Regards,
Roland
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat