Hi, I'm finally getting to diffserv testing and am wondering about the tin 2
parameters of the default 'diffserv3':

             Tin 0       Tin 1       Tin 2  
  thresh    3125Kbit      50Mbit   12500Kbit
  target       5.8ms       5.0ms       5.0ms
interval     100.8ms     100.0ms      10.0ms
Pk-delay         0us         0us         0us
Av-delay         0us         0us         0us
Sp-delay         0us         0us         0us
…

My understanding from the latest CoDel Internet-Draft:
https://www.ietf.org/id/draft-ietf-aqm-codel-06.txt

and notes in the Overview section of this document from K. Nichols: 
http://www.pollere.net/Pdfdocs/noteburstymacs.pdf

is that 'interval' should generally remain at 100ms and that 'target' should
be set to around 5-10% of interval, preferably closer to 5%.
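Applying that guideline numerically (my own quick sketch, nothing
cake-specific), a 10ms interval would imply a target of roughly 0.5-1.0ms, so
tin 2's 5ms target is actually 50% of its interval:

```shell
# Target guideline: ~5-10% of interval, per the CoDel draft.
for interval_ms in 100 10; do
  awk -v i="$interval_ms" \
    'BEGIN { printf "interval %3dms -> guideline target %.1f-%.1fms\n",
             i, 0.05*i, 0.10*i }'
done
```

With interval=100ms the default 5ms target sits right at the 5% mark; with
interval=10ms it does not.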

In my testing, from which I'll release more results soon, I've seen no
significant benefit to changing target or interval from their defaults on
either Wi-Fi or 100Mbit wired links. I question whether lowering tin 2's
interval to 10ms provides any benefit; in fact, it could very well cause
undesired behavior for those using the defaults.

I could provide a spread of tests comparing 'diffserv3' to 'diffserv4' with
rrul at various bandwidths, but since all of 'diffserv4's tins use the
standard 5ms/100ms target and interval parameters, it already seems preferable
to the 'diffserv3' default. Is there a justification for setting the interval
outside the guidelines suggested by CoDel's authors?
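For reference, the comparison I have in mind would just swap the mode keyword
(a sketch assuming a 100Mbit link on eth0; adjust device and bandwidth to
taste, and note these commands need root):

```shell
# Same shaper both times; only the diffserv mode differs.
tc qdisc replace dev eth0 root cake bandwidth 100Mbit diffserv3
# ... run rrul, record results, then:
tc qdisc replace dev eth0 root cake bandwidth 100Mbit diffserv4
# ... run rrul again, and compare the per-tin stats with:
tc -s qdisc show dev eth0
```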

Pete

_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
