Hi,

I sent this to tsvwg, where we're discussing the LE codepoint. Since I am now talking about queue settings, I thought it might be interesting to get feedback from this group as well on what advice we should give operators.

Please take into account that I am aiming for what is possible on currently deployed platforms seen in the field, not what might be possible on future hardware/software. What is generally available is (W)RED per queue and a few queues per customer.

I am also going to test a 3-queue setup, where each of these groups of DSCP values would go into a different queue; LE would perhaps be assured 5% of bandwidth and the rest split evenly between a BE and an "everything else" queue. If I did that, I would probably not start dropping LE traffic until 10-20 ms of buffer fill.
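
For concreteness, here is a rough sketch of what such a 3-queue policy could look like, in the same IOS XR style as the RED example in the forwarded mail below. The class-map/policy-map names, the exact percentages and the RED thresholds are my own illustrative assumptions, not tested configuration:

 class-map match-any LE-SCAV
  match dscp 1 8
 class-map match-any BE
  match dscp 0 2 3 4 5 6 7
 !
 policy-map CHILD-3Q
  ! LE/CS1: assured 5%, RED starting at 10 ms of buffer fill
  class LE-SCAV
   bandwidth percent 5
   random-detect 10 ms 20 ms
  ! BE and "everything else" (class-default) split the rest evenly
  class BE
   bandwidth percent 47
  class class-default
   bandwidth percent 48
 !
 policy-map PARENT-800M
  class class-default
   shape average 800 mbps
   service-policy CHILD-3Q

Note that bandwidth percent is a guarantee under contention, not a cap, so LE could still fill the link when nothing else is running.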

---------- Forwarded message ----------
Date: Thu, 12 Apr 2018 08:39:25 +0200 (CEST)
From: Mikael Abrahamsson <swm...@swm.pp.se>
To: Brian E Carpenter <brian.e.carpen...@gmail.com>
Cc: ts...@ietf.org
Subject: thoughts on operational queue settings (Re: [tsvwg] CC/bleaching
    thoughts for draft-ietf-tsvwg-le-phb-04)

On Thu, 12 Apr 2018, Brian E Carpenter wrote:

> BE and LE PHBs should talk about queueing and dropping behaviour, not about capacity share, in any case. It's clear that on a congested link, LE is sacrificed first - peak hour LE throughput might be precisely zero, which is not acceptable for BE.

I have received questions from operational people for configuration examples for how to handle LE/BE etc. So I did some work in our lab to give some kind of example.

So my first goal was to figure out a configuration that does something reasonable on a platform that will only do DSCP-based RED (as this is typically available on platforms going back 15 years). This is not optimal, but at least it would be deployable on lots of platforms currently installed and moving packets for customers.

The test was performed with 30 ms of RTT, 10 parallel TCP sessions per diffserv RED curve, and 800 megabit/s access speed. (The access is really gig, but my lab setup has some constraints: at gig I might get uncontrolled packet loss due to other equipment sitting on the same shared link, so I opted for 800 megabit/s as "close enough".)
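
For reference, the load generation looks roughly like this. The iperf3 invocations are illustrative (host name, ports and durations are my assumptions, not the exact lab harness); -S sets the TOS byte, i.e. the DSCP value shifted left two bits:

 # one server instance per traffic class, on separate ports
 iperf3 -s -p 5201 &
 iperf3 -s -p 5202 &
 iperf3 -s -p 5203 &

 # 10 flows per RED curve, all running concurrently across the 30 ms path
 iperf3 -c server.example -p 5201 -P 10 -t 60 -S 0x00 &   # BE, DSCP 0
 iperf3 -c server.example -p 5202 -P 10 -t 60 -S 0x04 &   # LE, DSCP 1
 iperf3 -c server.example -p 5203 -P 10 -t 60 -S 0x28 &   # AF11, DSCP 10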

What I came up with that would give LE ~10% of access bandwidth compared to BE, and a slight advantage for anything that is not BE/LE (goal was to give this traffic a lossless experience) was this:

This is a Cisco ASR9k that, without this RED configuration, will buffer packets up to ~90 milliseconds, resulting in 120 ms RTT (30 ms path RTT plus 90 ms of bufferbloat).

 class class-default
  shape average 800 mbps
  random-detect dscp 1,8 1 ms 500 ms
  random-detect dscp 0,2-7 5 ms 1000 ms
  random-detect dscp 9-63 10 ms 1000 ms

This basically says: for LE and CS1, start dropping packets at 1 ms of buffer fill. Since some applications use CS1 for scavenger traffic, it made sense to me to treat CS1 and LE the same.

For BE (which I made to be DSCP 0 and 2-7), start dropping packets at 5 ms of buffer fill, less aggressively than LE.

For the rest, don't start dropping packets until 10 ms of buffer fill, giving it a slight advantage (the thought here being that gaming traffic etc. should not see many drops, even though it will see some induced RTT because of BE traffic).

This typically results in LE using approximately 30-50 megabit/s when there are 10 LE TCP sessions and 10 BE TCP sessions, all trying to go full out. The BE sessions then get ~750 megabit/s. The added buffer delay is around 5-10 ms, as that's where the BE sessions settle their bandwidth usage. The platform unfortunately doesn't support ECN marking.
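
On platforms where WRED does support ECN (classic IOS MQC, for example), the same curves can mark ECN-capable flows instead of dropping them. This is an assumption about other boxes in the field, not something tested on this ASR9k:

 class class-default
  ! mark instead of drop for ECT traffic between the thresholds
  random-detect dscp-based
  random-detect ecn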

If I were to spend queues on this traffic instead of using RED, I would do this differently. I will do more tests with lower speeds etc.; this was just initial testing for one use case, but also to give an example of what can be done on currently shipping platforms. I know there are much better ways of doing this, but I want this in networks NOW, not in 5-10 years. So the simpler the advice, the better the chance we get this into production networks.

I don't think it's a good idea to give CS1/LE no bandwidth at all; that might cause failure cases we can't predict. I prefer to give LE traffic a big disadvantage, so that it might only get 5-10% or so of the bandwidth when there is competing traffic.

I will do more testing, I have several typical platforms available to me that are in wide use.

--
Mikael Abrahamsson    email: swm...@swm.pp.se
