Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-25 Thread Jonathan Morton
>> You state that the new codecs “would have delivered twice as many videos 
>> with the same quality over the same capacity”, and video “was the dominant 
>> traffic”, *and* the network was the bottleneck while running the new codecs.
>> 
>> The logical conclusion must be either that the network was severely 
>> under-capacity

> Nope. The SVLAN buffer (Service VLAN shared by all users on the same DSLAM) 
> at the Broadband Network Gateway (BNG) became the bottleneck during peak 
> hour, while at other times each user's CVLAN (Customer VLAN) at the BNG was 
> the bottleneck.

In other words, yes the network *was* congested…

> The proposition was to halve the SVLAN capacity serving the same CVLANs by 
> exploiting the multiplexing gain of equitable quality video... explained 
> below.

…and this is one of the key facts that helps me understand the use-case.

I’d be a lot more sympathetic if, as your original description strongly 
implied, it was all about getting better quality or more capacity from a given 
network, rather than attempting to *halve the capacity* of a part of the 
network that was *already congested* at peak hour.  The latter strategy is 
wrong-headed, especially from the “customer experience” point of view you 
espouse.

> A typical (not contrived) example bit-rate trace of constant quality video is 
> on slide 20 of a talk I gave for the ICCRG in May 2009, when I first found 
> out about this research: 
> http://www.bobbriscoe.net/presents/0905iccrg-pfldnet/0905iccrg_briscoe.pdf

I understand CQR versus VBR versus CBR.  Now I also understand that “equitable 
quality” means that the codec adapts the quality to the available bandwidth - 
which is not the same as CQR, and is more akin to VBR.  This is the other key 
fact that was omitted from your previous description.

It’s also implicitly clear that the video sources are under the ISP’s control, 
hence relatively centralised, otherwise they wouldn’t be able to dictate which 
codec was used.  In theory they could therefore share information explicitly 
about the network conditions that each individual stream experiences.

Instead, it seems information was shared only implicitly, by relying on the 
specific behaviour of a single queue at each bottleneck, which is to give the 
same congestion signal (whether that be loss, delay, or ECN) to all flows 
through it.  WFQ broke that assumption, as would any other flow-isolation 
scheme which avoided impeding lightweight flows when controlling bulk flows, 
regardless of the precise definition of a “flow”.
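
A toy max-min calculation makes that distinction concrete (made-up demands and 
capacity, not measurements): under a shared congestion signal every source 
sacrifices the same proportion of its demand, while a per-flow fair share caps 
the complex scenes regardless of what they currently need.

    # Toy comparison of the two sharing disciplines (illustrative numbers only):
    # three constant-quality videos momentarily need very different bit-rates.
    capacity = 12.0              # bottleneck rate, Mbit/s (made up)
    demands = [3.0, 6.0, 9.0]    # complex scenes need more bits for the same quality

    # Single shared queue: every flow sees the same congestion signal, so each
    # source backs off by the same proportion until the aggregate fits
    # (the "equitable quality" response described in this thread).
    scale = min(1.0, capacity / sum(demands))
    shared_signal = [d * scale for d in demands]          # [2.0, 4.0, 6.0]

    # Per-flow fair queueing approximates max-min fairness: flows wanting less
    # than an equal share keep their demand, the surplus goes to the rest.
    def max_min_fair(demands, capacity):
        alloc = [0.0] * len(demands)
        left = sorted(range(len(demands)), key=lambda i: demands[i])
        remaining = capacity
        while left:
            share = remaining / len(left)
            i = left[0]
            if demands[i] <= share:
                alloc[i] = demands[i]
                remaining -= demands[i]
                left.pop(0)
            else:
                for j in left:
                    alloc[j] = share
                break
        return alloc

    fair_queue = max_min_fair(demands, capacity)          # [3.0, 4.5, 4.5]
    print(shared_signal, fair_queue)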

But I do have to ask: what is the *qualitative* difference between the action 
of WFQ to equalise capacity per flow, and the off-peak scenario where each flow 
is bottlenecked at the CVLANs?  I don’t see it.

 - Jonathan Morton



Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-25 Thread Bob Briscoe

Jonathan,

It does make sense.
Inline...

On 24/03/16 20:08, Jonathan Morton wrote:

> On 21 Mar, 2016, at 20:04, Bob Briscoe  wrote:
>
>> The experience that led me to understand this problem was when a bunch of
>> colleagues tried to set up a start-up (a few years ago now) to sell a range
>> of "equitable quality" video codecs (ie constant quality variable bit-rate
>> instead of constant bit-rate variable quality). Then, the first ISP they
>> tried to sell to had WFQ in its Broadband remote access servers. Even tho
>> this was between users, not flows, when video was the dominant traffic, this
>> overrode the benefits of their cool codecs (which would have delivered twice
>> as many videos with the same quality over the same capacity).
>
> This result makes no sense.
>
> You state that the new codecs “would have delivered twice as many videos with
> the same quality over the same capacity”, and video “was the dominant traffic”,
> *and* the network was the bottleneck while running the new codecs.
>
> The logical conclusion must be either that the network was severely
> under-capacity
Nope. The SVLAN buffer (Service VLAN shared by all users on the same 
DSLAM) at the Broadband Network Gateway (BNG) became the bottleneck 
during peak hour, while at other times each user's CVLAN (Customer VLAN) 
at the BNG was the bottleneck. The proposition was to halve the SVLAN 
capacity serving the same CVLANs by exploiting the multiplexing gain of 
equitable quality video... explained below.

> and was *also* the bottleneck, only twice as hard, under the old codecs; or
> that there was insufficient buffering at the video clients to cope with
> temporary shortfalls in link bandwidth;
I think you are imagining that the bit-rate of a constant quality video 
varies around a constant mean over the timescale that a client buffer 
can absorb. It doesn't. The guys who developed constant quality video 
analysed a wide range of commercial videos including feature films, 
cartoons, documentaries etc, and found that, at whatever timescale you 
average over, you get a significantly different mean. This is because, 
to get the same quality, complex passages like a scene in a forest in 
the wind or splashing water require much higher bit-rate than simpler 
passages, e.g. a talking head with a fixed background. A passage of 
roughly the same visual complexity can last for many minutes within one 
video before moving on to a passage of completely different complexity.


Also, I hope you are aware of earlier research from around 2003 that 
found that humans judge the quality of a video by the worst quality 
passages, so there's no point increasing the quality if you can't 
maintain it and have to degrade it again. That's where the idea of 
constant quality encoding came from.


The point these researchers made is that the variable bit-rate model of 
video we have all been taught was derived from the media industry's need 
to package videos in constant size media (whether DVDs or TV channels). 
The information rate that the human brain prefers is very different.


A typical (not contrived) example bit-rate trace of constant quality 
video is on slide 20 of a talk I gave for the ICCRG in May 2009, when I 
first found out about this research: 
http://www.bobbriscoe.net/presents/0905iccrg-pfldnet/0905iccrg_briscoe.pdf
As it says, the blue plot is averaged over 3 frames (0.12s) and red over 
192 frames (7.68s). If FQ gave everyone roughly constant bit-rate, you 
can see that even 7s of client buffer would not be able to absorb the 
difference between what they wanted and what they were given.
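
A back-of-the-envelope drain calculation illustrates the point (the 7.68 s 
window is from the slide; the two rates are assumptions, not data):

    # A complex passage needs rate r_p, fair queueing delivers a steady r_f < r_p,
    # and the client starts with B seconds of video already buffered.
    B = 7.68     # seconds of pre-buffered playout (the red trace's averaging window)
    r_p = 2.0    # Mbit/s needed by the complex passage (assumed)
    r_f = 1.0    # Mbit/s actually delivered as the per-flow fair share (assumed)

    # Playable seconds drain at (1 - r_f/r_p) per wall-clock second, so the
    # buffer empties after B / (1 - r_f/r_p) seconds: about 15 s here, far less
    # than the minutes-long complex passages described above.
    print(B / (1.0 - r_f / r_p))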


Constant quality videos multiplex together nicely in a FIFO. The rest of 
slide 20 quantifies the multiplexing gain:
* If you keep it strictly constant quality, you get 25% multiplexing 
gain compared to CBR.
* If all the videos respond to congestion a little (ie when many peaks 
coincide causing loss or ECN), so they all sacrifice the same proportion 
of quality (called equitable quality video), you get over 200% 
multiplexing gain relative to CBR. That's the x2 gain I quoted originally.


Anyway, even if client buffering did absorb the variations, you wouldn't 
want to rely on it. Constant quality video ought to be applicable to 
conversational and interactive video, not just streamed. Then you would 
want to keep client buffers below a few milliseconds.



> or that demand for videos doubled due to the new codecs providing a step-change
> in the user experience (which feeds back into the network capacity conclusion).

Nope, this was a controlled experiment (see below).

> In short, it was not WFQ that caused the problem.
Once they worked out that the problem might be the WFQ in the Broadband 
Network Gateway, they simulated the network with and without WFQ and 
proved that WFQ was the problem.


References

The papers below describe Equitable Quality Video, but I'm afraid there 
is no published write-up of the problems they encountered with FQ - an 
unfortunate 

Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Toke Høiland-Jørgensen
Dave Cridland  writes:

> If this isn't standards track because there's no WG consensus for a single
> algorithm (and we'll argue over whether a queueing algorithm is a protocol or
> not some other time), then I think this WG document should reflect that
> consensus and hold back on the recommendations, then, unless you really have 
> WG
> consensus for that position.
>
> If this were an individual submission, it'd be different, but a WG document 
> must
> reflect the Working Group as a whole and not just the authors.

Yes, well, ensuring that it does is what the WG last call and review
process is for, isn't it? Which the draft has been through without
anyone taking issue with it. Not even sure what (if any) the proper
process for handling this is at this time (the tracker lists the status
as "Submitted to IESG for Publication")...?

I explained the reasoning behind the current language in a previous
email. The only proposal for alternative language has been from
Grenville, and as I said I can live with that. However, I'm not terribly
inclined to spend more time editing this until I'm sure that it will
actually put the issue to rest.

-Toke



Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Wesley Eddy

On 3/24/2016 9:01 AM, Toke Høiland-Jørgensen wrote:

> Dave Cridland  writes:
>
>
>> Well, I have to ask why, in this case, it's Experimental and not
>> Standards-Track?
>
> Heh. Well, I guess the short answer is "because there wasn't WG
> consensus to do that". Basically, the working group decided that all the
> algorithms we are describing will be experimental rather than standards
> track, at least for now. Because they are queueing algorithms and not
> protocols (and so do not have the same interoperability requirements),
> this was deemed an acceptable way forward, and a way to get it "out
> there" without having to agree to push for The One True AQM(tm).
>
> (This is my understanding; I'm sure someone will chime in and correct me
> if I'm wrong).
>
>
> Personally, I would have no problem with this being standards track :)





I am one of the WG chairs and document shepherd.  The AQM charter does 
allow for publication on the Standards Track, but at this point in time 
there did not seem to be a consensus that this was necessary, plus given 
some of the open research questions, it seemed like a prudent choice.  
We can always go stronger and make a standard later on.


I think Bob's concerns here, and the disagreement about what happens in 
reality, make it very obvious that Experimental is the right choice!  
The indications so far are that this has a lot of promise to help, but 
there are questions, and it could benefit from even more experience 
deploying in the wild, and watching what happens.





Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Dave Cridland
On 24 March 2016 at 13:01, Toke Høiland-Jørgensen  wrote:

> Dave Cridland  writes:
>
> > What we meant to say was something along the lines of "You want to
> > turn this on; it'll do you good, so get on with it! You won't regret it!
> > Now go fix the next 100 million devices!". The current formulation in the
> > draft is an attempt to be slightly less colloquial about it... ;)
> >
> > Well, I have to ask why, in this case, it's Experimental and not
> > Standards-Track?
>
> Heh. Well, I guess the short answer is "because there wasn't WG
> consensus to do that". Basically, the working group decided that all the
> algorithms we are describing will be experimental rather than standards
> track, at least for now. Because they are queueing algorithms and not
> protocols (and so do not have the same interoperability requirements),
> this was deemed an acceptable way forward, and a way to get it "out
> there" without having to have to agree to push for The One True AQM(tm).
>
>
If this isn't standards track because there's no WG consensus for a single
algorithm (and we'll argue over whether a queueing algorithm is a protocol
or not some other time), then I think this WG document should reflect that
consensus and hold back on the recommendations, then, unless you really
have WG consensus for that position.

If this were an individual submission, it'd be different, but a WG document
must reflect the Working Group as a whole and not just the authors.

Of course, this isn't even my biscuit to dunk, let alone my hill to die on.


> (This is my understanding; I'm sure someone will chime in and correct me
> if I'm wrong).
>
>
> Personally, I would have no problem with this being standards track :)
>
> -Toke
>


Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Dave Cridland
On 24 Mar 2016 3:02 am, "grenville armitage"  wrote:
>
>
>
> On 03/18/2016 21:35, Bob Briscoe wrote:
>>
>> IESG, authors,
>>
>> 1. Safe?
>>
>> My main concern is with applicability. In particular, the sentence in
>> section 7 on Deployment Status: "We believe it to be a safe default and
>> encourage people running Linux to turn it on: ...". and a similar sentiment
>> repeated in the conclusions. "and we believe it to be safe to turn on by
>> default, as has already happened in a number of Linux distributions."
>
>
> At the risk of incurring further wrath, and noting that the IESG did
> request "final comments on this action" (hence all the CCs), I think
> there's something to Bob's observation about the word "safe".
>
> What about:
>
> Section 1: "...and we believe it to be safe to turn on by default, ..."
> -> "...and we believe it to be significantly beneficial to turn on by
> default, ..."
> Section 7: "We believe it to be a safe default and ..." -> "We believe it
> to be a significantly beneficial default and ..."
>

Actually I'd read that as more of a recommendation than merely safe. I
think by safe, the authors mean that no significant harm has been found to
occur. Simply restating that the protocol is experimental should be enough,
I'd have thought, though if you really want:

Although Experimental, this is believed to do no harm as a default in
practice, and ...

> (Yes, this is going to be an Experimental RFC. And yes, turning on
> FQ_CoDel generally results in awesome improvements wrt pfifo. But the two
> instances of "safe" in draft-ietf-aqm-fq-codel-05.txt do imply to me a
> wider degree of applicability than is probably warranted at this juncture.
> I just hadn't noticed until Bob mentioned it.)
>
> cheers,
> gja
>
>
>
>
>
>
>


Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Toke Høiland-Jørgensen
Dave Cridland  writes:

> What we meant to say was something along the lines of "You want to turn
> this on; it'll do you good, so get on with it! You won't regret it! Now
> go fix the next 100 million devices!". The current formulation in the
> draft is an attempt to be slightly less colloquial about it... ;)
>
> Well, I have to ask why, in this case, it's Experimental and not
> Standards-Track?

Heh. Well, I guess the short answer is "because there wasn't WG
consensus to do that". Basically, the working group decided that all the
algorithms we are describing will be experimental rather than standards
track, at least for now. Because they are queueing algorithms and not
protocols (and so do not have the same interoperability requirements),
this was deemed an acceptable way forward, and a way to get it "out
there" without having to have to agree to push for The One True AQM(tm).

(This is my understanding; I'm sure someone will chime in and correct me
if I'm wrong).


Personally, I would have no problem with this being standards track :)

-Toke



Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Toke Høiland-Jørgensen
Dave Cridland  writes:

> Actually I'd read that as more of a recommendation than merely safe. I
> think by safe, the authors mean that no significant harm has been
> found to occur.

What we meant to say was something along the lines of "You want to turn
this on; it'll do you good, so get on with it! You won't regret it! Now
go fix the next 100 million devices!". The current formulation in the
draft is an attempt to be slightly less colloquial about it... ;)

-Toke



Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-24 Thread Toke Høiland-Jørgensen
grenville armitage  writes:

> What about:
>
> Section 1: "...and we believe it to be safe to turn on by default, ..." ->
> "...and we believe it to be significantly beneficial to turn on by default, 
> ..."
> Section 7: "We believe it to be a safe default and ..." -> "We believe it to 
> be
> a significantly beneficial default and ..."

Aha! Finally someone is being constructive! Thank you!

> (Yes, this is going to be an Experimental RFC. And yes, turning on FQ_CoDel
> generally results in awesome improvements wrt pfifo. But the two instances of
> "safe" in draft-ietf-aqm-fq-codel-05.txt do imply to me a wider degree of
> applicability than is probably warranted at this juncture. I just hadn't 
> noticed
> until Bob mentioned it.)

Still not sure I agree that having the word 'safe' in there is such a
big deal, but, well, if multiple people think it's an issue, that in
itself might be reason enough to change it. And I can live with your
alternative formulation. :)

-Toke



Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-23 Thread grenville armitage



On 03/18/2016 21:35, Bob Briscoe wrote:

> IESG, authors,
>
> 1. Safe?
>
> My main concern is with applicability. In particular, the sentence in section 7
> on Deployment Status: "We believe it to be a safe default and encourage people
> running Linux to turn it on: ...". and a similar sentiment repeated in the
> conclusions. "and we believe it to be safe to turn on by default, as has
> already happened in a number of Linux distributions."


At the risk of incurring further wrath, and noting that the IESG did request "final comments 
on this action" (hence all the CCs), I think there's something to Bob's observation about the 
word "safe".

What about:

Section 1: "...and we believe it to be safe to turn on by default, ..." -> "...and 
we believe it to be significantly beneficial to turn on by default, ..."
Section 7: "We believe it to be a safe default and ..." -> "We believe it to be a 
significantly beneficial default and ..."

(Yes, this is going to be an Experimental RFC. And yes, turning on FQ_CoDel generally 
results in awesome improvements wrt pfifo. But the two instances of "safe" in 
draft-ietf-aqm-fq-codel-05.txt do imply to me a wider degree of applicability than is 
probably warranted at this juncture. I just hadn't noticed until Bob mentioned it.)

cheers,
gja









Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-23 Thread Dave Taht
Bob:

Look. To me the time for discussing this was on the AQM list, months
ago. I see no reason to drag all these cc's in. There is an awful lot
of context and old business here, and you've said two things that
really ticked me off.

On Wed, Mar 23, 2016 at 12:37 PM, Bob Briscoe  wrote:
> David,
>
> Quick reply for now...
>
> DualQ and DCTCP is a separate issue to this question, because it has only
> just started development.

This is true and I'm sorry I dragged it into it.

"cake", on my side is pretty far along, and we are making serious
headway on fixing wifi of late. Perhaps I'll be able to give a talk
about the issues in wifi we've almost fixed by ietf berlin. I have
most of a blog written around todays' wonderful dataset.

> That's why I only mentioned PIE & FQ_CoDel, which have been maturing for
> some time.

I do not know of any substantial deployments of pie as yet. I'm
looking forward to seeing DOCSIS-pie shipped and tested sometime in
the next year or so.

>
> Safe means "without unintended side-effects". The word doesn't alter its
> meaning when the thing in question is really cool in other respects.
>
> If a teleworker is using a VPN for their everyday work from home, and every
> time their kids upload or download an elephant they squeeze all the VPN
> traffic to nearly nothing, that is enough to require a support call, and
> explanation of how to switch FQ_CoDel to a safer (but less performant) AQM
> (such as PIE, say).

Your assertion that in this case performance would be squeezed to
nearly nothing is incorrect. In the case of a single upload vs a vpn,
life converges to half for each, rapidly. In pie, less rapidly. In the
overbuffered field of today's non-buffer managed ISP environments,
they converge in 10s of seconds, or more.

In the case of that kid torrenting madly, both pie and fq_codel share
the link in roughly the same proportions as the number of extant
flows, with only fq_codel having a tighter definition of a flow and
converging faster.

In either case, today, without some form of queue management, that kid
doing an upload is generally going to hose the link entirely and make
dad mad in the first place.
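
For concreteness, the proportions being argued about reduce to simple 
arithmetic once every queue is assumed fully backlogged (a toy model that 
deliberately ignores TCP dynamics and convergence time, which is what the 
disagreement above is really about):

    # Under flow queueing an encrypted VPN hashes to a single queue, so with N
    # other active flows it gets roughly 1/(N+1) of the link.
    def fq_share(other_flows):
        return 1.0 / (other_flows + 1)

    print(fq_share(1))    # VPN vs one bulk upload: ~0.50 of the link each
    print(fq_share(20))   # VPN vs a 20-flow torrent: the whole VPN gets ~0.048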

...

You and I share a focus in opposite ways: you'd like to selectively
gain priority (without diffserv) via some means; I'd like to
selectively lose priority (without diffserv). To me your focus is a
game theory fail - if someone wins, someone has to lose, and in the
latter case... well... see section 6.3 of the draft for the cite
there.

I felt that research was important (which is why it is cited in the
draft, and I would not mind if it made pie's and yours), but at least
in my case I'm pretty sure how to go about getting less priority than
a fq+aqm'd system will want to give you, just haven't got around to
resuming the work... (after proving to my satisfaction that uTP was
mostly doing the right things anyway)... and still... for those that
like knobs, we suggest a 3 tier diffserv system in the draft and cake
has it built in.

And I HATE diffserv.

...

As for considering one technology more "safe" than the other...

We've already shown in detail how badly the single queue aqms respond
to bursts of mixed traffic in particular, and the damage GRO is
causing.

The current state of seeing 2+ seconds of bufferbloat on so many link
types is decidedly *unsafe*. But the internet still seems to keep
working despite all that...

> This is not an academic argument. There are a few billion people using the
> Internet. Even if 0.01% are hit by one of the listed limitations, that's a

I look forward to pleasing the other 99.99 percent and reducing their
support calls.

Hell, I'd settle for just 10 or 20 million more this year, that's
still well behind the growth curve of the internet itself.

Doesn't seeing a billion new devices deploy in the last few years
without sane queue management bother you?

> huge volume of support calls. This is why it is important to use the word
> "safe" precisely. Yes, FQ_CoDel has awesome performance. But it is awesome
> and unsafe in certain circumstances.

I wish you had raised your objections to this part of the language in
the draft long before it hit last call. There are plenty of drafts extant
(ecn benefits being the one that still sticks in my craw) where I
raised my objections during the process, then gave up.

> If that muddies the marketing message, well then seriously consider
> marketing PIE instead.

It's just one of several competing experimental RFCs at this point.
Near as I can tell, we're going to ship them all at the same time.
Can't we get some hardware built and let the market decide?

> There is no room for "not invented here" in this
> decision.

I resent this. I have as even-handedly as possible treated every new
advancement in this field with as much scientific integrity as I could
muster... and went and tested the hell out of it, whenever I could.

as for bufferbloat.net's efforts: We've published all 

Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-20 Thread Bob Briscoe

IESG, authors,

1. Safe?

My main concern is with applicability. In particular, the sentence in 
section 7 on Deployment Status: "We believe it to be a safe default and 
encourage people running Linux to turn it on: ...". and a similar 
sentiment repeated in the conclusions. "and we believe it to be safe to 
turn on by default, as has already happened in a number of Linux 
distributions."


Can one of the authors explain why a solution with the limitations in 
section 6 can still be described as "safe"? Doesn't "safe" mean "no 
unintended side-effects"? For instance, the limitations section says 
FQ_CoDel will schedule all the flows within an IPsec VPN as one entity - 
equivalent to a single microflow outside the VPN. Also, it says FQ_CoDel 
overrides the attempt of a scavenger flow to use less capacity than 
other flows (consequently causing foreground flows to get less capacity 
than intended). Why do the authors feel the need to say that these 
behaviours are safe?
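
The VPN case follows directly from the default flow classification. A 
simplified sketch of the 5-tuple hashing the draft describes (the hash 
function and bucket count here are illustrative, not the Linux 
implementation):

    import hashlib

    NUM_QUEUES = 1024

    def queue_index(src_ip, dst_ip, proto, src_port=0, dst_port=0):
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_QUEUES

    # Two plain TCP flows between the same hosts almost certainly land in
    # different queues and are scheduled independently...
    print(queue_index("192.0.2.1", "198.51.100.7", 6, 40001, 443))
    print(queue_index("192.0.2.1", "198.51.100.7", 6, 40002, 443))

    # ...but every packet of an ESP tunnel (protocol 50, no ports visible)
    # maps to the same queue, so all the flows inside the VPN share a single
    # per-flow allocation at the bottleneck.
    print(queue_index("192.0.2.1", "203.0.113.9", 50))
    print(queue_index("192.0.2.1", "203.0.113.9", 50))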


Indeed, these sentences seem rather Orwellian. They assert the current 
group-think as fact, even though it is the opposite of the facts stated 
earlier.


Would it not be correct instead to say that FQ_CoDel has been made the 
default in a number of Linux distributions despite not being safe in 
some circumstances?


2. Default?

If a draft saying "We believe it to be a safe default..." is published 
as an RFC, it means "The IETF/IESG/etc believes..."
Only one solution can be default, so if the IETF says that FQ_CoDel is a 
safe default, and no other AQM RFC makes any claim to being a safe 
default (which they do not at the moment), it could be read as the IETF 
recommending FQ_CoDel for default status and, by implication, other AQMs 
(like PIE, say) are not recommended for default status.


As far as I know, unlike the listed FQ_CoDel limitations, no limitations 
of PIE have been identified. I don't think anyone is claiming that the 
performance of FQ_CoDel is
awesomely better than PIE. May be a bit better, may be a bit worse, 
depending on circumstances, and depending on which you value most out of 
low queuing delay, high utilization, or low loss.


So, if the authors want the IETF to recommend a default AQM on the basis 
of safety (and I agree safety is the most important factor when choosing 
a default), the most likely candidate would be PIE, wouldn't it?  
FQ_CoDel has unintended side-effects, which implies it is not a good 
candidate for default; it should only be configured deliberately by 
those who can live with the side-effects.


I believe these unintended side-effects were the main reason PIE rather 
than FQ_CoDel was defined as the minimum requirement for a DOCSIS 3.1 
cable modem.


For those reading this who haven't been following the AQM WG, I can 
assure you I'm not associated with the authors of PIE (or FQ_CoDel). 
Indeed, I provided an extensive critique of PIE during the WG phase.


3. A Detail

I also have a concern about the way the limitations are written 
(typically, each limitation is stated, followed by an arm-waving 
qualification attempting to create an impression that there is not 
really a limitation). To keep the thread clean, I'll send that in a 
follow-up email.


Indeed, rather than downplay each limitation, it would be more 
appropriate (and it is common IETF practice) to flag applicability 
limitations up front, in the abstract.


Regards



Bob

On 03/03/16 17:20, The IESG wrote:

The IESG has received a request from the Active Queue Management and
Packet Scheduling WG (aqm) to consider the following document:
- 'FlowQueue-Codel'
as Experimental RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org  mailing lists by 2016-03-17. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract


This memo presents the FQ-CoDel hybrid packet scheduler/AQM
algorithm, a powerful tool for fighting bufferbloat and reducing
latency.

FQ-CoDel mixes packets from multiple flows and reduces the impact of
head of line blocking from bursty traffic.  It provides isolation for
low-rate traffic such as DNS, web, and videoconferencing traffic.  It
improves utilisation across the networking fabric, especially for
bidirectional traffic, by keeping queue lengths short; and it can be
implemented in a memory- and CPU-efficient fashion across a wide
range of hardware.




The file can be obtained via
https://datatracker.ietf.org/doc/draft-ietf-aqm-fq-codel/

IESG discussion can be tracked via
https://datatracker.ietf.org/doc/draft-ietf-aqm-fq-codel/ballot/


No IPR declarations have been submitted directly on this I-D.





--

Bob Briscoe  http://bobbriscoe.net/


Re: [aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-19 Thread Toke Høiland-Jørgensen
Hi Bob

Thank you for your timely and constructive comments. Please see the
inline responses below.

> My main concern is with applicability. In particular, the sentence in
> section 7 on Deployment Status: "We believe it to be a safe default
> and encourage people running Linux to turn it on: ...". and a similar
> sentiment repeated in the conclusions. "and we believe it to be safe
> to turn on by default, as has already happened in a number of Linux
> distributions."
>
> Can one of the authors explain why a solution with the limitations in
> section 6 can still be described as "safe"?

"We believe it to be a safe default" means that we have not seen any of
the theoretical limitations we have documented in section 6 be a concern
*in practice* in any of the extensive number of deployments FQ-CoDel has
seen already. And that the benefits of turning on FQ-CoDel are
sufficient that nudging people in that direction is a good idea.

> Indeed, these sentences seem rather Orwellian.

I can assure you that we are not attempting to exert "draconian control
by propaganda, surveillance, misinformation, denial of truth, and
manipulation of the past" (quoting
https://en.wikipedia.org/wiki/Orwellian here). But thank you for
implying it :)

> Would it not be correct instead to say that FQ_CoDel has been made the
> default in a number of Linux distributions despite not being safe in
> some circumstances?

At the time it was made the default in OpenWrt (several years ago now,
if memory serves me right), there was not a whole lot of real-world
deployment experience, due to the chicken-and-egg problem of not wanting
to change the default before we have gathered more experience. However,
today the situation is quite different, thanks in part to the boldness
of the OpenWrt devs. So no, I do not believe that to be the case any
longer.

> 2. Default?
>
> If a draft saying "We believe it to be a safe default..." is published as an
> RFC, it means "The IETF/IESG/etc believes..."
> Only one solution can be default, so if the IETF says that FQ_CoDel is a safe
> default, and no other AQM RFC makes any claim to being a safe default (which
> they do not at the moment), it could be read as the IETF recommending FQ_CoDel
> for default status and, by implication, other AQMs (like PIE, say) are not
> recommended for default status.

This is certainly not my reading. This is an experimental RFC saying "we
believe it to be safe as a default" not a standards track RFC saying
"this should be the default". This is an important difference; we are
not mandating anything, but rather expressing our honest opinion on
the applicability of FQ-CoDel as a default, should anyone wish to make
it one in their domain.

> As far as I know, unlike the listed FQ_CoDel limitations, no
> limitations of PIE have been identified. I don't think anyone is
> claiming that the performance of FQ_CoDel is awesomely better than
> PIE. May be a bit better, may be a bit worse, depending on
> circumstances, and depending on which you value most out of low
> queuing delay, high utilization, or low loss.

Well, for CoDel and PIE that is certainly true. But FQ-CoDel in many
cases reduces latency under load by an order of magnitude compared to
both of them, while improving throughput.

> So, if the authors want the IETF to recommend a default AQM on the
> basis of safety (and I agree safety is the most important factor when
> choosing a default), the most likely candidate would be PIE, wouldn't
> it? FQ_CoDel has unintended side-effects, which implies it is not a
> good candidate for default; it should only be configured deliberately
> by those who can live with the side-effects.

I'm not sure it would be possible for the AQM group to agree on a
recommendation for a default. But I suppose it might be a good
bikeshedding exercise. And as noted above, this is not what we intend to
do in this case.

> 3. A Detail
>
> I also have a concern about the way the limitations are written
> (typically, each limitation is stated, followed by an arm-waving
> qualification attempting to create an impression that there is not
> really a limitation). To keep the thread clean, I'll send that in a
> follow-up email.

It is certainly not our intention to "create an impression that there is
not really a limitation". Rather, we are trying to suggest ways in which
each limitation can be mitigated by people who are concerned about it,
but still want to realise the benefits of deploying FQ-CoDel. Sure, some
of those proposals are not exactly at the "running code" stage, but
dismissing them as arm-waving is hardly fair.

I'll add, as I noted initially, that many of the limitations we have
noted are of a theoretical nature (in the sense that we are not aware of
any deployments where they have caused issue in practice). This does not
make it any less important to document them, of course, and we have been
grateful for the feedback from the working group that the section grew
out of (you yourself were 

[aqm] Last Call: (FlowQueue-Codel) to Experimental RFC

2016-03-03 Thread The IESG

The IESG has received a request from the Active Queue Management and
Packet Scheduling WG (aqm) to consider the following document:
- 'FlowQueue-Codel'
   as Experimental RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing lists by 2016-03-17. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract


   This memo presents the FQ-CoDel hybrid packet scheduler/AQM
   algorithm, a powerful tool for fighting bufferbloat and reducing
   latency.

   FQ-CoDel mixes packets from multiple flows and reduces the impact of
   head of line blocking from bursty traffic.  It provides isolation for
   low-rate traffic such as DNS, web, and videoconferencing traffic.  It
   improves utilisation across the networking fabric, especially for
   bidirectional traffic, by keeping queue lengths short; and it can be
   implemented in a memory- and CPU-efficient fashion across a wide
   range of hardware.
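
For orientation, the scheduler structure behind that abstract (flows hashed to 
queues, new flows briefly prioritised, deficit round-robin across queues, 
CoDel applied per queue) condenses to roughly the following sketch; the 
constants and the drop decision are placeholders, and the draft and the Linux 
qdisc remain the authoritative descriptions:

    from collections import deque

    QUANTUM = 1514          # bytes of credit per scheduling opportunity (placeholder)

    class Flow:
        def __init__(self):
            self.pkts = deque()
            self.deficit = 0

    class FqCodelSketch:
        def __init__(self, n_queues=1024):
            self.flows = [Flow() for _ in range(n_queues)]
            self.new_flows, self.old_flows = deque(), deque()

        def enqueue(self, flow_hash, packet):
            f = self.flows[flow_hash % len(self.flows)]
            f.pkts.append(packet)
            if f not in self.new_flows and f not in self.old_flows:
                f.deficit = QUANTUM        # fresh flows start with a full quantum
                self.new_flows.append(f)   # ...and are served before old ones

        def codel_should_drop(self, packet):
            return False                   # stand-in for the per-queue CoDel decision

        def dequeue(self):
            while self.new_flows or self.old_flows:
                lst = self.new_flows if self.new_flows else self.old_flows
                f = lst[0]
                if f.deficit <= 0:         # out of credit: recharge, move to old list
                    f.deficit += QUANTUM
                    lst.popleft()
                    self.old_flows.append(f)
                    continue
                if not f.pkts:             # empty: new flows demote, old flows leave
                    lst.popleft()
                    if lst is self.new_flows:
                        self.old_flows.append(f)
                    continue
                pkt = f.pkts.popleft()
                if self.codel_should_drop(pkt):
                    continue               # CoDel dropped it; keep looking
                f.deficit -= len(pkt)
                return pkt
            return None

    q = FqCodelSketch()
    q.enqueue(hash(("192.0.2.1", 40001)), b"x" * 100)
    print(q.dequeue())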




The file can be obtained via
https://datatracker.ietf.org/doc/draft-ietf-aqm-fq-codel/

IESG discussion can be tracked via
https://datatracker.ietf.org/doc/draft-ietf-aqm-fq-codel/ballot/


No IPR declarations have been submitted directly on this I-D.

