Hi Aaron,

On Fri, Nov 03, 2017 at 07:23:20PM +0000, Aaron West wrote:
> I think I understand that with faster networks giving shorter RTT you
> need less buffer space and then as either RTT or throughput
> increases(Maybe 40G+) then you will need more, am I right?

no :-) It's the opposite: the BDP (bandwidth*delay product) increases
because the bandwidth increases much faster than the delay shrinks, thus
you have more (unacked) bytes in flight. In fact, the vast majority of
the delay comes from cable length and switch ports these days. The
serialization time (which depends on the bit rate) is minimal. It's not
zero on short links due to the coding in use, but it's still small.
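To make the point concrete, here is a small sketch computing the BDP for two links (the link speeds and RTTs below are hypothetical illustrative values, not figures from this thread):

```python
# Bandwidth-delay product: the amount of unacked data that can be in
# flight on the link, hence the minimum buffer space needed to keep
# the pipe full.
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """BDP in bytes = bandwidth (bits/s) * RTT (s) / 8."""
    return bandwidth_bps * rtt_seconds / 8

# Hypothetical example: bandwidth rises 40x while the RTT only shrinks
# 2x, so the BDP still grows 20x -- more bytes in flight, not fewer.
for name, bw, rtt in [("1G link", 1e9, 200e-6), ("40G link", 40e9, 100e-6)]:
    print(f"{name}: {bdp_bytes(bw, rtt):.0f} bytes in flight")
```

Running this shows 25,000 bytes in flight for the 1G case versus 500,000 for the 40G case, even though the RTT was halved.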

Also, on the receive side you need to take into account the time needed
to deliver data to the application. When it takes one millisecond to
wake up an application to deliver Rx data, because it was computing an
RSA key for example, you quickly realize that within that short
millisecond you've received 5 MB of data (at 40 Gbps) that has to be
buffered somewhere... And a 1 ms pause is very short compared to what
can be observed on an application server performing disk accesses.
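The 5 MB figure follows directly from the line rate: one millisecond of traffic at 40 Gbps is 40e9 * 1e-3 / 8 bytes. A quick sketch of the arithmetic (the 40 Gbps rate is an assumption matching the figure above):

```python
# How much data arrives while the application is not reading from the
# socket; all of it has to sit in a buffer somewhere in the meantime.
def bytes_during_pause(bandwidth_bps, pause_seconds):
    return bandwidth_bps * pause_seconds / 8

# Assumed 40 Gbps link, 1 ms application stall.
mb = bytes_during_pause(40e9, 1e-3) / 1e6
print(f"{mb:.0f} MB must be buffered during a 1 ms stall")  # prints "5 MB ..."
```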

> So maybe it was changed to take into account modern internet links,
> however, that doesn't explain the observed throughput issue as yet...
> I wonder what else might have changed.

Something else has changed recently: TCP pacing was deployed at some
point, though I don't remember in which exact kernel; I'm almost sure it
was after 3.10 (or at least became more aggressive after 3.10). You
should notice a slightly lower bandwidth with a single stream but a
higher sustained bandwidth with many streams (due to fewer losses on
switch ports and routers).

BTW, you really need to hurry up switching your kernel, as I'm going to
emit the last 3.10 release (3.10.108) tomorrow, and after that the
3.10.x branch is dead!

> Aaron West
> 
> Loadbalancer.org Ltd.
> 
> www.loadbalancer.org
> 
> +1 888 867 9504 / +44 (0)330 380 1064
> [email protected]

Cheers,
Willy
