> I really liked your initial idea to base the threshold for segmenting a 
> superpacket on the duration that packet would hog the wire/shaper, as 
> that gives an intuitive feel for the worst-case inter-flow latency induced. 
> On many links this would allow intermediate-sized superpackets to 
> survive fine while turning 64K "oil-tankers" into a fleet of speedboats ;). 
> This temporal threshold would also handle the higher-bandwidth cases 
> elegantly and automatically. What was the reason to rip that out again?

It probably had something to do with needing to scale that threshold with the 
number of active flows in order to guarantee a given peak inter-flow induced 
latency; the flow count isn't necessarily known at enqueue time with enough 
accuracy to still be relevant at dequeue.  Unconditional segmentation made the 
question easy and didn't seem to have very much effect on the CPU (at the link 
rates we were targeting).

It might still be reasonable to assume the number of active flows won't change 
*much* between enqueue and dequeue.  So then we could set the threshold at, 
say, half the Codel target divided by the number of active flows plus one.
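That rule of thumb can be sketched as follows. This is an illustrative 
calculation, not code from cake itself; the 5 ms Codel default target is real, 
but the function name and the "+1" safety margin for flows arriving between 
enqueue and dequeue are just restatements of the proposal above.

```python
def max_unsegmented_bytes(link_rate_bps, codel_target_s=0.005, active_flows=1):
    """Largest superpacket allowed through without segmentation.

    Budget per packet: half the Codel target, shared among the
    active flows plus one (margin for a flow count that changes
    a little between enqueue and dequeue).
    """
    budget_s = (codel_target_s / 2) / (active_flows + 1)
    # bytes transmittable on the wire within that serialization budget
    return int(link_rate_bps / 8 * budget_s)
```

At 100 Mbit/s with 4 active flows the budget works out to 0.5 ms, i.e. about 
6250 bytes, so a 64K aggregate gets segmented while a ~6 KB one survives; at 
1 Gbit/s the same settings allow ~62500 bytes, so even near-64K superpackets 
pass through, which is the "higher bandwidth cases solved automatically" 
property noted in the quote.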

 - Jonathan Morton

_______________________________________________
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake
