Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-08-11 Thread Dave Taht
In revisiting this old thread, in light of this,

https://github.com/systemd/systemd/issues/9725

and my test results of cake with and without ecn under big loads... I
feel as though I'm becoming a pariah in favor of queue length
management by dropping packets! On bufferbloat.net! cake used to drop
ecn-marked packets at overload; I'm seeing enormous differences in
queue depth with and without ecn. (On one test at 100mbit, 10ms queues
vs 30ms.) More details later.

Now, some of this is that cubic tcp is just way too aggressive and
perhaps some mods to it have arrived in the last 5 years that make it
even worse... so I'm going to go do a bit of testing with osx's
implementation
in particular. The ecn responses laid out in the original rfc were
against reno's sawtooth, with iw2, and I also think that cwnd is not
decreasing enough nowadays in the first place.
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-12 Thread Dave Taht
as for the tail loss/rto problem: it doesn't happen unless we are
already in a drop state for a queue, and it doesn't happen very often;
when it does, it seems like a good idea to me to back off that
thoroughly in the face of so much congestion.

fq_codel originally never dropped the last packet in the queue, which
led to a worst-case latency of 1024 * MTU at the bandwidth. That got
fixed and I'm happy with the result. I honestly don't know what cake
does anymore, except that jonathan rarely tests at real rtts, where
the amount of data in the pipe is a lot more than what's sane to have
queued, whereas I almost always use realistic path delays.

It would be good to resolve this debate in some direction one day,
perhaps by measuring utilization > 0 on a wide range of tests.

On Mon, Jun 11, 2018 at 11:39 PM, Dave Taht  wrote:
> "So there is no effect on other flows' latency, only subsequent
> packets in the same flow - and the flow is always hurt by dropping
> packets, rather than marking them."
>
> Disagree. The flow being dropped from will reduce its rate in an rtt,
> reducing the latency impact on other flows.
>
> I regard an ideal queue length as 1 packet or aggregate, as "showing"
> all flows the closest thing to the real path rtt. You want to store
> packets in the path, not buffers.
>
> ecn has mass. It is trivial to demonstrate an ecn marked flow starving
> out a non-ecn flow, at low rates.
>
> On Wed, Jun 6, 2018 at 6:04 AM, Jonathan Morton  wrote:
>>>>> The rationale for that decision still is valid, at low bandwidth every 
>>>>> opportunity to send a packet matters…

>>>> Yes, which is why the DRR++ algorithm is used to carefully choose which 
>>>> flow to send a packet from.
>>>
>>> Well, but look at it that way, the longer the traversal path after the cake 
>>> instance the higher the probability that the packet gets dropped by a later 
>>> hop.
>>
>> That's only true in case Cake is not at the bottleneck, in which case it 
>> will only have a transient queue and AQM will disengage anyway.  (This 
>> assumes you're using an ack-clocked protocol, which TCP is.)
>>
>>>>> …and every packet being transferred will increase the queued packets 
>>>>> delay by its serialization delay.

>>>> This is trivially true, but has no effect whatsoever on inter-flow induced 
>>>> latency, only intra-flow delay, which is already managed adequately well 
>>>> by an ECN-aware sender.
>>>
>>> I am not sure that I am getting your point…
>>
>> Evidently.  You've been following Cake development for how long, now?  This 
>> is basic stuff.
>>
>>> …at 0.5Mbps every full-MTU packet will hog the line for 20+ milliseconds, 
>>> so all other flows will incur at least that 20+ ms additional latency; this 
>>> is independent of the inter- or intra-flow perspective, no?
>>
>> At the point where the AQM drop decision is made, Cake (and fq_codel) has 
>> already decided which flow to service. On a bulk flow, most packets are the 
>> same size (a full MTU), and even if the packet delivered is the last one 
>> presently in the queue, probably another one will arrive by the time it is 
>> next serviced - so the effect of the *flow's* presence remains even into the 
>> foreseeable future.
>>
>> So there is no effect on other flows' latency, only subsequent packets in 
>> the same flow - and the flow is always hurt by dropping packets, rather than 
>> marking them.
>>
>>  - Jonathan Morton
>>
>
>
>
> --
>
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619



-- 

Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619


Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-12 Thread Dave Taht
"So there is no effect on other flows' latency, only subsequent
packets in the same flow - and the flow is always hurt by dropping
packets, rather than marking them."

Disagree. The flow being dropped from will reduce its rate in an rtt,
reducing the latency impact on other flows.

I regard an ideal queue length as 1 packet or aggregate, as "showing"
all flows the closest thing to the real path rtt. You want to store
packets in the path, not buffers.
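The "packets in the path, not buffers" target is just the bandwidth-delay product; a rough sketch of the arithmetic (the function name and the 1514-byte MTU are my own illustration, not anything from cake or fq_codel):

```python
def bdp_packets(rate_bps: float, rtt_s: float, mtu_bytes: int = 1514) -> float:
    """Full-size packets 'stored in the path' at a given rate and RTT."""
    return rate_bps * rtt_s / (8 * mtu_bytes)

# e.g. a 100 Mbit/s link with a 20 ms path holds roughly 165 full-MTU packets
print(round(bdp_packets(100e6, 0.020)))
```

Anything queued beyond that count is buffering, not useful in-flight data.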

ecn has mass. It is trivial to demonstrate an ecn marked flow starving
out a non-ecn flow, at low rates.

On Wed, Jun 6, 2018 at 6:04 AM, Jonathan Morton  wrote:
>>>> The rationale for that decision still is valid, at low bandwidth every 
>>>> opportunity to send a packet matters…
>>>
>>> Yes, which is why the DRR++ algorithm is used to carefully choose which 
>>> flow to send a packet from.
>>
>> Well, but look at it that way, the longer the traversal path after the cake 
>> instance the higher the probability that the packet gets dropped by a later 
>> hop.
>
> That's only true in case Cake is not at the bottleneck, in which case it will 
> only have a transient queue and AQM will disengage anyway.  (This assumes 
> you're using an ack-clocked protocol, which TCP is.)
>
>>>> …and every packet being transferred will increase the queued packets delay 
>>>> by its serialization delay.
>>>
>>> This is trivially true, but has no effect whatsoever on inter-flow induced 
>>> latency, only intra-flow delay, which is already managed adequately well by 
>>> an ECN-aware sender.
>>
>> I am not sure that I am getting your point…
>
> Evidently.  You've been following Cake development for how long, now?  This 
> is basic stuff.
>
>> …at 0.5Mbps every full-MTU packet will hog the line for 20+ milliseconds, so 
>> all other flows will incur at least that 20+ ms additional latency; this is 
>> independent of the inter- or intra-flow perspective, no?
>
> At the point where the AQM drop decision is made, Cake (and fq_codel) has 
> already decided which flow to service. On a bulk flow, most packets are the 
> same size (a full MTU), and even if the packet delivered is the last one 
> presently in the queue, probably another one will arrive by the time it is 
> next serviced - so the effect of the *flow's* presence remains even into the 
> foreseeable future.
>
> So there is no effect on other flows' latency, only subsequent packets in the 
> same flow - and the flow is always hurt by dropping packets, rather than 
> marking them.
>
>  - Jonathan Morton
>



-- 

Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619


Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-06 Thread Mario Hock

Am 06.06.2018 um 10:15 schrieb Sebastian Moeller:

Well, sending a packet incurs serialization delay for all queued-up packets, so 
not sending a packet reduces the delay for all packets that are sent by exactly the 
serialization delay. If egress bandwidth is precious (so when it is congested and low in 
comparison with the amount of data that should be sent), resorting to congestion signaling 
by dropping seems okay to me, as that immediately frees up a "TX-slot" for 
another flow.


If the packet is dropped and the "TX-slot" is freed up, two things can 
happen:


1. The next packet belongs to the same flow. In this case, a TCP flow 
has no benefit because head-of-line blocking occurs until the packet is 
retransmitted. (This might be different for loss-tolerant, 
latency-sensitive UDP traffic, though.)


2. The next packet belongs to another flow. Obviously, this flow would 
benefit. However, the decision which flow should be served next should 
be made by the scheduler, not by the dropper (in the case of 
scheduler/dropper combinations such as fq_codel).
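That scheduler/dropper separation can be sketched roughly as follows. This is only an illustrative DRR-style scheduler (names and structure are mine, not fq_codel's actual code, which also differs in many details); the point is that flow selection happens before, and independently of, any drop decision:

```python
from collections import deque

QUANTUM = 1514  # bytes of credit granted to a flow per scheduling round

class Flow:
    def __init__(self, packets):
        self.queue = deque(packets)  # packet sizes in bytes
        self.deficit = 0

def dequeue(active):
    """Pick the next packet by deficit round robin.  Which *flow* is served
    is purely the scheduler's decision; a per-flow dropper (e.g. codel)
    would then inspect only the chosen flow's head packet."""
    while active:
        flow = active[0]
        if not flow.queue:            # flow drained: remove from the rotation
            active.popleft()
            continue
        if flow.deficit < flow.queue[0]:
            flow.deficit += QUANTUM   # not enough credit: recharge, move on
            active.rotate(-1)
            continue
        flow.deficit -= flow.queue[0]
        return flow.queue.popleft()   # <-- dropper would act on this packet
    return None
```

With two backlogged flows, service alternates byte-fairly regardless of what the dropper later decides about individual packets.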


Best, Mario Hock


Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-06 Thread Sebastian Moeller


> On Jun 6, 2018, at 09:44, Bless, Roland (TM)  wrote:
> 
> Hi,
> 
> Am 05.06.2018 um 20:34 schrieb Sebastian Moeller:
>> The rationale for that decision still is valid, at low bandwidth every 
>> opportunity to send a packet matters and every packet being transferred will 
>> increase the queued packets delay by its serialization delay. The question 
>> IMHO is more whether 4 Mbps is a reasonable threshold for disabling ECN.
> 
> ECN should be enabled irrespective of the current bottleneck bandwidth.
> I don't see any relationship between serialization delay and ECN.
> Congestion control is about determining the right amount of inflight
> data. ECN just provides an explicit congestion signal as feedback
> and helps anyway. The main problem is IMHO that most routers have
> no AQM in place in order to set the CE codepoint appropriately...

Well, sending a packet incurs serialization delay for all queued-up 
packets, so not sending a packet reduces the delay for all packets that are 
sent by exactly the serialization delay. If egress bandwidth is precious (so 
when it is congested and low in comparison with the amount of data that should 
be sent), resorting to congestion signaling by dropping seems okay to me, as 
that immediately frees up a "TX-slot" for another flow.
Now, I do agree that for the affected flow itself ECN should be better, 
as signaling is going to be faster than waiting for 3 DupACKs. But as always 
the proof is in the data, so I will refrain from making up more hypotheses and 
rather try to look into acquiring data.


> 
> Regards,
> Roland
> 



Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-06 Thread Bless, Roland (TM)
Hi,

Am 05.06.2018 um 20:34 schrieb Sebastian Moeller:
> The rationale for that decision still is valid, at low bandwidth every 
> opportunity to send a packet matters and every packet being transferred will 
> increase the queued packets delay by its serialization delay. The question 
> IMHO is more whether 4 Mbps is a reasonable threshold for disabling ECN.

ECN should be enabled irrespective of the current bottleneck bandwidth.
I don't see any relationship between serialization delay and ECN.
Congestion control is about determining the right amount of inflight
data. ECN just provides an explicit congestion signal as feedback
and helps anyway. The main problem is IMHO that most routers have
no AQM in place in order to set the CE codepoint appropriately...

Regards,
 Roland



Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-06 Thread Sebastian Moeller
Hi Jonathan,



> On Jun 5, 2018, at 21:31, Jonathan Morton  wrote:
> 
>> On 5 Jun, 2018, at 9:34 pm, Sebastian Moeller  wrote:
>> 
>> The rationale for that decision still is valid, at low bandwidth every 
>> opportunity to send a packet matters…
> 
> Yes, which is why the DRR++ algorithm is used to carefully choose which flow 
> to send a packet from.

Well, but look at it that way: the longer the traversal path after the cake 
instance, the higher the probability that the packet gets dropped by a later 
hop. So on ingress we have in all likelihood already passed the main bottleneck 
(but beware of the local WLAN quality), while on egress most of the path is 
still ahead of us.

> 
>> …and every packet being transferred will increase the queued packets delay 
>> by its serialization delay.
> 
> This is trivially true, but has no effect whatsoever on inter-flow induced 
> latency, only intra-flow delay, which is already managed adequately well by 
> an ECN-aware sender.

I am not sure that I am getting your point; at 0.5Mbps every full-MTU 
packet will hog the line for 20+ milliseconds, so all other flows will incur at 
least that 20+ ms additional latency. This is independent of the inter- or 
intra-flow perspective, no?

> 
> May I remind you that Cake never drops the last packet in a flow subqueue due 
> to AQM action, but may still apply an ECN mark to it.  

I believe this refusal to drop the last packet is close to codel's behavior? 

> That's because dropping a tail packet carries a risk of incurring an RTO 
> before retransmission occurs, rather than "only" an RTT delay.  Both RTO and 
> RTT are always greater than the serialisation delay of a single packet.

Thanks for the elaboration; clever! But dropping a packet will 
instantaneously free up bandwidth for other flows, independent of whether the 
sender has already realized that fact; sure, the flow with the dropped packet 
will not recover from the loss as smoothly as it would respond to ECN 
signaling, but that is not the vantage point from which I am looking at the 
issue here.

> 
> Which is why ECN remains valuable even on very low-bandwidth links.

Well, I guess I should revisit that and try to get some data at low 
bandwidths, but my hunch still is that 
> 
> - Jonathan Morton
> 



Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-05 Thread Jonathan Morton
> On 5 Jun, 2018, at 9:34 pm, Sebastian Moeller  wrote:
> 
> The rationale for that decision still is valid, at low bandwidth every 
> opportunity to send a packet matters…

Yes, which is why the DRR++ algorithm is used to carefully choose which flow to 
send a packet from.

> …and every packet being transferred will increase the queued packets delay by 
> its serialization delay.

This is trivially true, but has no effect whatsoever on inter-flow induced 
latency, only intra-flow delay, which is already managed adequately well by an 
ECN-aware sender.

May I remind you that Cake never drops the last packet in a flow subqueue due 
to AQM action, but may still apply an ECN mark to it.  That's because dropping 
a tail packet carries a risk of incurring an RTO before retransmission occurs, 
rather than "only" an RTT delay.  Both RTO and RTT are always greater than the 
serialisation delay of a single packet.

Which is why ECN remains valuable even on very low-bandwidth links.
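The rule described above can be condensed into a sketch like this (hypothetical function and names of my own; Cake's real dequeue path in sch_cake.c is far more involved):

```python
def signal_congestion(queue_len: int, ecn_capable: bool) -> str:
    """What happens to a flow's head packet when the AQM wants to signal.

    Sketch of the 'never tail-drop, but still mark' rule: a CE mark is
    safe even on the last packet of a flow, while dropping that packet
    risks a retransmission timeout (RTO) instead of a mere RTT of delay.
    """
    if ecn_capable:
        return "mark"       # CE-mark and deliver; sender backs off in one RTT
    if queue_len <= 1:
        return "deliver"    # last packet: dropping it risks an RTO
    return "drop"           # safe to drop; dupACKs trigger fast retransmit
```

The asymmetry between the last two branches is exactly why ECN pays off on slow links: the congestion signal costs nothing in delivery.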

 - Jonathan Morton



Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-05 Thread Sebastian Moeller
Hi Jonathan,


> On Jun 5, 2018, at 17:10, Jonathan Foulkes  wrote:
> 
> Jonathan, in the past the recommendation was for NOECN on egress if capacity 
> <4Mbps. Is that still the case in light of this?
> 
> Thanks,
> 
> Jonathan Foulkes
> 
>> On Jun 4, 2018, at 5:36 PM, Jonathan Morton  wrote:
>> 
>>> On 4 Jun, 2018, at 9:22 pm, Jonas Mårtensson  
>>> wrote:
>>> 
>>> Speaking about systemd defaults, they just enabled ecn for outgoing 
>>> connections:
>> 
>> That is also good news.  With Apple *and* Ubuntu using it by default, we 
>> should finally get critical mass of ECN traffic and any remaining blackholes 
>> fixed, making it easy for everyone else to justify turning it on as well.

The rationale for that decision still is valid: at low bandwidth every 
opportunity to send a packet matters, and every packet being transferred will 
increase the queued packets' delay by its serialization delay. The question IMHO 
is more whether 4 Mbps is a reasonable threshold for disabling ECN.
Here are the serialization delays for a few selected bandwidths:

1000*(1538*8)/(500*1000)   = 24.61 ms
1000*(1538*8)/(1000*1000)  = 12.30 ms
1000*(1538*8)/(2000*1000)  =  6.15 ms
1000*(1538*8)/(4000*1000)  =  3.08 ms
1000*(1538*8)/(8000*1000)  =  1.54 ms
1000*(1538*8)/(10000*1000) =  1.23 ms
1000*(1538*8)/(12000*1000) =  1.03 ms
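The same arithmetic as code (the 1538-byte figure is the on-wire Ethernet frame size used in the table above):

```python
FRAME_BYTES = 1538  # full Ethernet frame on the wire, as in the table above

def serialization_delay_ms(rate_bps: float) -> float:
    """Time to clock one full-size frame onto a link of the given rate."""
    return FRAME_BYTES * 8 / rate_bps * 1000

for kbps in (500, 1000, 2000, 4000, 8000, 10000, 12000):
    print(f"{kbps:5d} kbps: {serialization_delay_ms(kbps * 1000):5.2f} ms")
```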

Personally, I guess I sort of agree with the <= 4 Mbps threshold, maybe 2 Mbps, 
but at <= 1 Mbps the serialization delay gets painful.

In sqm-scripts we currently unconditionally default to egress ECN off, which 
might be too pessimistic about the usual egress bandwidths.


Best Regards

>> 
>> - Jonathan Morton
>> 
> 



Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

2018-06-05 Thread Jonathan Morton
> On 5 Jun, 2018, at 6:10 pm, Jonathan Foulkes  wrote:
> 
> Jonathan, in the past the recommendation was for NOECN on egress if capacity 
> <4Mbps. Is that still the case in light of this?

I would always use ECN, no exceptions - unless the sender is using a TCP 
congestion control algorithm that doesn't support it (eg. BBR currently).  
That's true for both fq_codel and Cake.

With ECN, codel's action doesn't drop packets; it marks them, and the sender 
resizes its congestion window in response.
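The sender-side response to a CE mark is, per RFC 3168, the same multiplicative decrease as for a detected loss, but with no retransmission and no head-of-line blocking; a toy sketch (function name and AIMD constants are illustrative only):

```python
def on_ack(cwnd: float, ece: bool) -> float:
    """Toy RFC 3168-style congestion-avoidance step, cwnd in segments.

    On ECN-Echo the window is halved, exactly as for a loss, but the
    marked packet was still delivered, so nothing is retransmitted.
    """
    if ece:
        return max(2.0, cwnd / 2)   # multiplicative decrease, floor of 2 MSS
    return cwnd + 1.0 / cwnd        # additive increase per ACK
```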

 - Jonathan Morton
