On 9/6/23 11:20, Benny Lyne Amorsen wrote:

TCP looks quite different in 2023 than it did in 1998. It should handle
packet reordering quite gracefully; in the best case the NIC will
reassemble the out-of-order TCP packets into a 64k packet and the OS
will never even know they were reordered. Unfortunately current
equipment does not seem to offer per-packet load balancing, so we cannot
test how well it works.

I ran per-packet load balancing on a Juniper LAG between 2015 and 2016. Let's just say I won't be doing that again.

It balanced beautifully, but OoO packets made customers' lives impossible. So we went back to adaptive load balancing.
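
For anyone who wants to gauge how much reordering their own hosts are actually absorbing, Linux exposes reordering-related counters in /proc/net/netstat. A rough sketch follows (Linux-specific; exact counter names vary by kernel version, so it simply filters for reordering- and out-of-order-queue-related entries rather than hard-coding any):

#!/usr/bin/env python3
"""Rough look at how much TCP reordering a Linux host is absorbing."""

def tcpext_counters(path="/proc/net/netstat"):
    with open(path) as f:
        lines = f.read().splitlines()
    counters = {}
    # /proc/net/netstat comes in header/value line pairs per protocol group.
    for header, values in zip(lines[::2], lines[1::2]):
        if header.startswith("TcpExt:"):
            names = header.split()[1:]
            nums = [int(v) for v in values.split()[1:]]
            counters.update(zip(names, nums))
    return counters

for name, value in sorted(tcpext_counters().items()):
    # Keep only counters that mention reordering or the out-of-order (OFO) queue.
    if "Reorder" in name or "OFO" in name:
        print(f"{name:24} {value}")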


It is possible that per-packet load balancing will work a lot better
today than it did in 1998, especially if the equipment does buffering
before load balancing and the links happen to be fairly short and not
very diverse.

Switching back to per-packet would solve quite a lot of problems,
including elephant flows and bad hashing.

I would love to hear about recent studies.

2016 is not 1998, and certainly not 2023... but I've not heard of any improvements that make Internet-based applications better at handling OoO packets.

Open to new info.
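
To illustrate the hashing problem: with per-flow balancing, every packet of a given 5-tuple hashes to the same member link, so a single elephant flow stays pinned to one link no matter how many links are in the bundle, while mice flows spread out. A toy sketch below (the CRC32 hash and 4-link bundle are illustrative only, not any vendor's actual algorithm):

import zlib
from collections import Counter

LAG_MEMBERS = 4  # illustrative 4-link bundle

def member_for_flow(src_ip, dst_ip, proto, src_port, dst_port):
    """Per-flow balancing: hash the 5-tuple, pick one member link.

    Real ASICs use their own hash functions and seeds; CRC32 here is
    just a stand-in to show the behaviour."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % LAG_MEMBERS

# One elephant flow: all of its traffic lands on a single link,
# however large the flow gets.
elephant = ("10.0.0.1", "192.0.2.10", "tcp", 49152, 443)
print("elephant flow always on link", member_for_flow(*elephant))

# Many small (mice) flows spread across the bundle.
mice = Counter(
    member_for_flow("10.0.0.1", "192.0.2.10", "tcp", 49152 + i, 443)
    for i in range(1000)
)
print("mice flows per link:", dict(sorted(mice.items())))

Per-packet balancing avoids the pinning entirely, at the cost of the reordering described above.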

100Gbps ports have given us some breathing room, as have larger buffers on Arista switches, which let us move bandwidth management down to the user-facing port rather than the upstream router. Clever Trio + Express chips have also enabled reasonably even traffic distribution with per-flow load balancing.

We shall revisit the per-flow vs. per-packet problem when 100Gbps becomes as rampant as 10Gbps once was.

Mark.
