On Wed, 3 Jan 2024 at 01:05, Mike Hammett <na...@ics-il.net> wrote:

> It suggests that 60 meg is what you need at 10G. Is that per interface? Would 
> it be linear in that I would need 600 meg at 100G?

Not at all.

You need to understand WHY buffering is needed to determine how much
buffering you want to offer.

Big buffering is needed when:
   - The sender is faster than the receiver
   - The receiver wants to receive a single flow at maximum rate
   - The sender sends window growth at sender-rate instead of at the
estimated receiver-rate (the common case, but easy to change, as Linux
already estimates the receiver-rate and the 'tc' command can change
this behaviour; a sketch follows this list)
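
For example, on Linux a socket's pacing rate can be capped so window
growth is clocked out near receiver-rate rather than line rate. A
rough sketch (Python's socket module doesn't export
SO_MAX_PACING_RATE; 47 is its value in the Linux UAPI headers on most
architectures):

    import socket

    # Linux-specific socket option, not exposed as a constant by Python.
    SO_MAX_PACING_RATE = 47

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Cap pacing at ~10Gbps; the option value is in bytes per second.
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, int(10e9 / 8))

The cap is enforced by the fq qdisc ('tc qdisc replace dev eth0 root
fq'), which also has its own 'maxrate' option if you'd rather not
touch the application; reasonably recent kernels can also enforce it
via TCP's internal pacing.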

The amount of big buffering needed depends on:
    - How much the window can grow when it grows. Windows grow
exponentially (the window doubles once per RTT), so you need
(RTT*receiver-rate)/2; the /2 is because when the window doubles, the
first half is already in flight and dropping in at receiver-rate as
the ACKs come by, so only the second half can queue.
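
As a rough sketch of that arithmetic in Python (the function name is
mine):

    def worst_case_burst(rtt_s, receiver_bps):
        """Bytes the window can grow past the receiver's rate in one RTT.

        The window doubles once per RTT while growing exponentially;
        the first half of the doubled window is already draining at
        receiver-rate, so only the second half can queue:
        (RTT * receiver-rate) / 2, converted from bits to bytes.
        """
        return rtt_s * receiver_bps / 8 / 2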


Let's imagine your sender is 100GE-connected and your receiver is
10GE-connected, and you want to achieve a 10Gbps single-flow rate.

10ms RTT - 12.5MB window size; worst case the window grows by 6.25MB,
and you can take ~10% off, because some of the growth drains to the
receiver instead of all of it being buffered, so you'd need 5.5-6MB.
100ms RTT would be ~60MB
1s RTT would be ~600MB
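
Checking those numbers with the sketch above (the 0.9 factor is the
~10% discount for growth that drains to the receiver meanwhile):

    RECEIVER_BPS = 10e9  # 10GE-connected receiver

    for rtt_ms in (10, 100, 1000):
        burst = worst_case_burst(rtt_ms / 1e3, RECEIVER_BPS)
        needed = 0.9 * burst  # ~10% drains to the receiver
        print("%5d ms RTT: growth %.2f MB, buffer ~%.0f MB"
              % (rtt_ms, burst / 1e6, needed / 1e6))
    # ->   10 ms RTT: growth 6.25 MB, buffer ~6 MB
    # ->  100 ms RTT: growth 62.50 MB, buffer ~56 MB
    # -> 1000 ms RTT: growth 625.00 MB, buffer ~562 MB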


Now decide the answer you want to give in your products for these: at
what RTT do you want to guarantee what single-flow maximum rate?

I do believe many of the CDNs already use the estimated receiver-rate
to grow windows, which basically removes the need for buffering. But
any standard CUBIC without tuning (i.e. every stock OS) will burst
window growth at line rate, creating the need for buffering.

-- 
  ++ytti
