> On Apr 26, 2018, at 16:42, Jonathan Morton <[email protected]> wrote:
> 
>> I really liked your initial idea of basing the threshold for when to segment
>> a superpacket on the duration that packet would hog the wire/shaper, as that
>> gives an intuitive feel for the worst-case inter-flow latency induced. In
>> particular, on many links this would let intermediate-sized superpackets
>> survive intact while still turning 64K "oil-tankers" into a fleet of
>> speedboats ;). Such a temporal threshold would also handle the higher
>> bandwidth cases elegantly and automatically. What was the reason to rip that
>> out again?
> 
> It probably had something to do with needing to scale that threshold with the
> number of active flows, which isn't necessarily known at enqueue time with
> enough accuracy to still be relevant at dequeue, if we want to guarantee a
> given peak inter-flow induced latency.  Unconditional segmentation made the
> question easy and didn't seem to have much effect on CPU load (at the link
> rates we were targeting).
> 
> It might still be reasonable to assume the number of active flows won't
> change *much* between enqueue and dequeue.  So then we could set the
> threshold at, say, half the Codel target divided by (the number of active
> flows plus one).
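
        Just to put numbers on that (purely illustrative, if I read the
formula right): with the default 5 ms Codel target and four active bulk flows
the budget would be 2.5 ms / 5 = 0.5 ms of serialization time; at a 100 Mbit/s
shaper that is roughly 6 KB, so a full 64K GSO aggregate would get split while
an MTU-sized packet would pass untouched.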

        Still, I am too simple-minded for the adaptive version; I find it more
intuitive (albeit less clever) to simply let the user specify a limit on
acceptable superpacket serialization delay, independent of the number of bulk
flows. That is effectively a configurable bandwidth threshold; once this works
reasonably well, optional cleverness might be nice to add ;)
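
        To make that concrete, here is a rough user-space C sketch of what I
have in mind (all names are made up for illustration; this is not actual
sch_cake code, just the idea):

/*
 * Sketch only: decide whether a GSO superpacket should be split before
 * queueing, based on how long it would occupy the shaped link.
 * Assumes shaper_bps > 0; all names are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

static bool should_segment(uint32_t pkt_bytes,    /* superpacket size, bytes */
                           uint64_t shaper_bps,   /* shaper rate in bit/s    */
                           uint64_t max_delay_us) /* user-set delay limit    */
{
    /* serialization time of this packet on the shaped link, in microseconds */
    uint64_t tx_time_us = (uint64_t)pkt_bytes * 8 * 1000000 / shaper_bps;

    return tx_time_us > max_delay_us;
}

Equivalently one could pre-compute a byte threshold (shaper_bps * max_delay_us
/ 8 / 1000000) whenever the shaper rate changes, and Jonathan's adaptive
variant would simply replace max_delay_us at enqueue with half the Codel
target divided by (the number of active flows plus one).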

Best Regards
        Sebastian


> 
> - Jonathan Morton
> 

