> On 9 Jun, 2015, at 21:29, Steven Blake <[email protected]> wrote:
> 
> That's great, but I'm talking about how to do AQM on Nx100 Gbps packet
> processing ASICs, not Linux boxes.  And I agree that FQ is desirable,
> but not always cost effective or feasible to retrofit.

Ah, you are talking about core networks.  That’s a little out of my area of 
expertise, but…

As I understand it, core networks are generally supposed to be 
over-provisioned.  However, even a nominally over-provisioned link can be 
saturated in the short-term and/or at peak load.  A straightforward AQM system 
is useful for coping with that.

There is a well-known result (the “Sizing Router Buffers” analysis) which states 
that, for a dumb FIFO, the buffer can be reduced from the “BDP rule of thumb” to 
BDP / sqrt(flows) in the case where a large number of flows is normally 
expected.  This is the case for core networks, but not the edge (where the link 
is routinely saturated by a single flow).
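As a sanity check on that rule, here is a minimal sketch of the arithmetic.  The 
100 Gbps link speed and 100 ms RTT are illustrative assumptions of mine, not 
figures from this thread:

```python
import math

def buffer_bytes(link_bps: float, rtt_s: float, n_flows: int) -> float:
    """Buffer needed to keep the link busy: BDP / sqrt(N) bytes.

    With n_flows == 1 this degenerates to the classic full-BDP
    rule of thumb; with many flows the requirement shrinks.
    """
    bdp_bytes = link_bps * rtt_s / 8
    return bdp_bytes / math.sqrt(n_flows)

# Illustrative: 100 Gbps link, 100 ms RTT.
print(buffer_bytes(100e9, 0.1, 1))      # single flow: full BDP (1.25 GB)
print(buffer_bytes(100e9, 0.1, 10000))  # 10000 flows: 12.5 MB
```

With 10000 flows the required buffer drops by a factor of sqrt(10000) = 100, 
from 1.25 GB to 12.5 MB — a much more plausible amount of on-chip memory.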

I think it’s reasonable to suppose that the same sqrt(flows) scaling might apply 
to the Codel parameters (defaults: interval=100ms, target=5ms).  If so, taking a 
nice round number of 10000 flows (since you can’t predict it very precisely, an 
order of magnitude or two is sufficient), sqrt(10000) = 100, and the scaled 
parameters would be interval=1ms and target=50us.  If you re-run your analysis 
using those parameters, do you get more reasonable behaviour?  Intuitively, I 
think you should get similar per-packet behaviour at 100G using those parameters 
as at 1G using the defaults.
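The proposed scaling can be written down directly.  This is a sketch of the 
conjecture above, not an established tuning rule — whether sqrt(flows) scaling 
is actually valid for Codel is precisely the open question:

```python
import math

# Standard Codel defaults (RFC 8289 / Linux tc-codel).
DEFAULT_INTERVAL_MS = 100.0
DEFAULT_TARGET_MS = 5.0

def scaled_codel_params(n_flows: int) -> tuple[float, float]:
    """Conjectured scaling: divide both Codel defaults by sqrt(N).

    Mirrors the BDP / sqrt(N) buffer-sizing rule; returns
    (interval_ms, target_ms).
    """
    scale = math.sqrt(n_flows)
    return DEFAULT_INTERVAL_MS / scale, DEFAULT_TARGET_MS / scale

interval_ms, target_ms = scaled_codel_params(10000)
print(interval_ms, target_ms)  # 1 ms interval, 0.05 ms (50 us) target
```

Since the flow count only enters through a square root, even a factor-of-ten 
error in the estimate moves the parameters by only about 3x, which is why a 
round-number guess is good enough.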

 - Jonathan Morton

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm