On 29/06/16 16:22, Kevin Darbyshire-Bryant wrote:
Ok, so the above is done: cobalt merged into mainline cake/master.
cobalt branch rebased/fast-forwarded to the same place, so don't
forget to 'git pull' (or fetch & merge) your own repos.
Patches for LEDE to point at those latest updates…
On 28/06/16 18:37, Kevin Darbyshire-Bryant wrote:
Ok, so I've pushed the 'split u32 last_len into 2 u16' tweaks to
tc-adv & sch_cake.
I will push a corresponding change into LEDE for the iproute2 (tc)
package. The push of sch_cake itself will happen after I've merged
cobalt into master…
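For readers wondering why the last_len split is ABI-safe: two u16s occupy the same four bytes as one u32, so the stats structure's size and layout are unchanged. A quick sketch (the field names here are purely hypothetical, not taken from sch_cake):

```python
import struct

# One u32 field vs. two u16 fields: same 4 bytes on the wire, so the
# stats struct (and anything sized against it) is unaffected.
old = struct.calcsize("<I")   # single u32, e.g. last_len
new = struct.calcsize("<HH")  # two u16s, hypothetical names last_a / last_b

assert old == new == 4
```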
On 28/06/16 16:33, Jonathan Morton wrote:
On 28 Jun, 2016, at 11:40, Kevin Darbyshire-Bryant
wrote:
Would you like me to split out 'sparse_flows' and 'decaying_flows'?
No. A flow with BLUE active won’t be in “decaying flows” continuously until
traffic ceases on it, but will likely jump rapidly between “decaying”, “sparse”…
On 28/06/16 03:51, Jonathan Morton wrote:
On 27 Jun, 2016, at 18:18, Kevin Darbyshire-Bryant
wrote:
How do you feel about switching that package to the cobalt variant for wider
stress testing?
I think the best way to do that would be to merge the cobalt branch to master,
but retaining it for further development. It’s stable…
On 27/06/16 04:56, Jonathan Morton wrote:
On 4 Jun, 2016, at 22:55, Jonathan Morton wrote:
COBALT should turn out to be a reasonable antidote to sender-side cheating, due
to the way BLUE works; the drop probability remains steady until the queue has
completely emptied, and then decays slowly. Assuming the congestion-control…
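As a toy illustration of the BLUE behaviour described above (a sketch under assumed step sizes, not the sch_cake implementation):

```python
# Toy model of BLUE: p_drop rises on queue overflow, holds steady while
# the queue stays non-empty, and decays only once the queue has
# completely emptied.  Step sizes are illustrative assumptions.

class ToyBlue:
    def __init__(self, increment=0.0025, decrement=0.00025):
        self.p_drop = 0.0
        self.increment = increment   # applied on overflow
        self.decrement = decrement   # applied only while the queue is empty

    def on_overflow(self):
        # Queue overflowed: raise the drop/mark probability.
        self.p_drop = min(1.0, self.p_drop + self.increment)

    def on_dequeue(self, queue_len):
        # While packets remain queued, p_drop holds steady;
        # decay begins only when the queue has drained completely.
        if queue_len == 0:
            self.p_drop = max(0.0, self.p_drop - self.decrement)

blue = ToyBlue()
for _ in range(10):
    blue.on_overflow()
steady = blue.p_drop
blue.on_dequeue(queue_len=5)   # queue still busy: no decay
assert blue.p_drop == steady
blue.on_dequeue(queue_len=0)   # queue empty: slow decay begins
assert blue.p_drop < steady
```

A sender that keeps the queue perpetually non-empty therefore keeps facing the full, undecayed probability, which is what makes the scheme resistant to cheating.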
Hi Jonathan,
all of this sounds great! One question inlined below…
> On Jun 27, 2016, at 05:56 , Jonathan Morton wrote:
>
>
>> On 4 Jun, 2016, at 22:55, Jonathan Morton wrote:
>>
>> COBALT should turn out to be a reasonable antidote to sender-side cheating,
>> due to the way BLUE works…
> On 4 Jun, 2016, at 20:49, Eric Dumazet wrote:
>
> ECN (as in RFC 3168) is well known to be trivially exploited by peers
> pretending to be ECN ready, but not reacting to feedbacks, only to let
> their packets traverse congested hops with a lower drop probability.
In this case it is the sender…
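A toy simulation (my own sketch, with an assumed 20% signalling rate, nothing from the thread's code) of why the exploit Eric describes pays off only when the bottleneck marks rather than drops:

```python
import random

# A peer that negotiates ECN but ignores CE marks loses nothing when the
# congested hop *marks* packets; ignoring congestion under *dropping*
# costs it delivered packets.  Probability and packet count are assumptions.

random.seed(42)
P_SIGNAL = 0.2      # AQM mark/drop probability at the congested hop
PACKETS = 10_000

def delivered(signal, packets=PACKETS, p=P_SIGNAL):
    """Packets reaching the receiver of a sender that never backs off."""
    got = 0
    for _ in range(packets):
        congested = random.random() < p
        if congested and signal == "drop":
            continue        # dropped: the cheater actually loses the packet
        got += 1            # CE-marked (or clean): delivered anyway
    return got

marked = delivered("ecn")    # marked packets still arrive
dropped = delivered("drop")  # drops remove packets outright
assert marked == PACKETS
assert dropped < PACKETS
```

This is the asymmetry BLUE's steady drop probability is meant to blunt: a non-responsive marked flow keeps the queue full, so it keeps accumulating signal instead of escaping it.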
I notice that issue with Steam. Steam uses lots of ECN, which can be
nice for saving bandwidth with large games. The issue I notice is that
Steam is the one application that can cause me to have ping spikes of
over 100ms, even though I have thoroughly tested my network using both
flent and ds…
Hi Jonathan,
> On Jun 4, 2016, at 16:16 , Jonathan Morton wrote:
>
>
>> On 4 Jun, 2016, at 17:01, moeller0 wrote:
>>
>> Maybe cake should allow to switch from the default mark by ECN policy to
>> mark by drop per command line argument? At least that would allow much
easier in the field testing…
> On 4 Jun, 2016, at 17:01, moeller0 wrote:
>
> Maybe cake should allow to switch from the default mark by ECN policy to mark
> by drop per command line argument? At least that would allow much easier in
> the field testing… As is there is only the option of disabling ECN at the
> endpoint(s)
+1 for this.
I was always running sqm scripts with NOECN and found the perfs somewhat
better. Would be good to have the same option with cake.
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake
Hi Jonathan,
> On Jun 4, 2016, at 15:55 , Jonathan Morton wrote:
>
>
>> On 4 Jun, 2016, at 04:01, Andrew McGregor wrote:
>>
>> ...servers with ECN response turned off even though they negotiate ECN.
>
> It appears that I’m looking at precisely that scenario.
>
> A random selection of connections…
> On 4 Jun, 2016, at 04:01, Andrew McGregor wrote:
>
> ...servers with ECN response turned off even though they negotiate ECN.
It appears that I’m looking at precisely that scenario.
A random selection of connections from a packet dump shows very high marking
rates, which are apparently acknowledged…
> On 4 Jun, 2016, at 04:01, Andrew McGregor wrote:
>
> There are undoubtedly DCTCP-like ECN responses widely deployed, since
> that is the default behaviour in Windows Server (gated on RTT in some
> versions). But also, ECN bleaching exists, as do servers with ECN
> response turned off even though they negotiate ECN…
There are undoubtedly DCTCP-like ECN responses widely deployed, since
that is the default behaviour in Windows Server (gated on RTT in some
versions). But also, ECN bleaching exists, as do servers with ECN
response turned off even though they negotiate ECN. It would be good
to know some specifics…
> On 3 Jun, 2016, at 22:09, Noah Causin wrote:
>
> Was the issue, where the drops and marks did not seem to occur, resolved?
Examination of packet dumps obtained under controlled conditions showed that
marking and dropping *did* occur as normal, and I got a normal response from a
local machine…
> On 24 May, 2016, at 18:52, Dave Täht wrote:
>
> My last attempts with cake the way it was had it performing miserably at
> longer RTTs (try 50ms) vs codel or fq-codel - as in half the throughput
> achieved by codel, at that RTT.
There’s definitely something weird going on - as if the marks…
> On 24 May, 2016, at 18:52, Dave Täht wrote:
>
> My last attempts with cake the way it was had it performing miserably at
> longer RTTs (try 50ms) vs codel or fq-codel - as in half the throughput
> achieved by codel, at that RTT.
Was that before or after I found and fixed the invsqrt cache bug?
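For context, the invsqrt in question is the cached reciprocal square root Codel uses to schedule drops at interval/sqrt(count). A floating-point sketch of the Newton-iteration caching (the kernel code is fixed point, and the names here are illustrative):

```python
import math

# Codel schedules its next drop at interval / sqrt(count).  Rather than
# recomputing sqrt each time, 1/sqrt(count) is cached and refined by one
# Newton-Raphson iteration whenever count changes; since count moves in
# small steps, a single iteration keeps the estimate close.

INTERVAL = 0.1  # seconds; illustrative value

class InvSqrtCache:
    def __init__(self):
        self.count = 1
        self.rec_inv_sqrt = 1.0   # cached estimate of 1/sqrt(count)

    def set_count(self, count):
        self.count = count
        # One Newton iteration for f(x) = 1/x^2 - count:
        #   x <- x * (3 - count * x^2) / 2
        x = self.rec_inv_sqrt
        self.rec_inv_sqrt = x * (3.0 - count * x * x) / 2.0

    def next_drop_delay(self):
        return INTERVAL * self.rec_inv_sqrt

c = InvSqrtCache()
for n in range(2, 50):
    c.set_count(n)           # count climbs one step at a time

# The cached estimate tracks the true value closely.
err = abs(c.rec_inv_sqrt - 1.0 / math.sqrt(c.count))
```

A stale or wrongly-indexed cache here directly distorts the drop schedule, which is why a bug in it shows up as throughput loss at longer RTTs.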
1) I am all in favor of continued experimentation and coding in these areas.
2) However I strongly advise the first thing you attempt to do when
futzing with an aqm, is to try it at various RTTs, and then do it at
high bandwidths and low.
Some of the discussion below makes me nervous, in that a…
> On 24 May, 2016, at 16:47, Jeff Weeks wrote:
>
>> In COBALT, I keep the drop-scheduler running in this phase, but without
>> actually dropping packets, and *decrementing* count instead of incrementing
>> it; the backoff phase then naturally ends when count returns to zero,
>> instead of after an arbitrary hard timeout. The loop simply ensures…
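The decrement-instead-of-increment backoff can be sketched like this (a minimal illustration of the idea, not the sch_cake source):

```python
# COBALT backoff as described above: when the queue goes slack, the Codel
# drop scheduler keeps ticking, but each scheduled "drop" slot decrements
# count instead of dropping a packet, so the backoff phase ends naturally
# when count reaches zero rather than after an arbitrary hard timeout.

def codel_backoff(count, slack_ticks):
    """Run `slack_ticks` scheduler slots during which nothing is dropped."""
    for _ in range(slack_ticks):
        if count == 0:
            break            # backoff over: state fully relaxed
        count -= 1           # decrement instead of dropping a packet
    return count

# A flow that had escalated to count == 8 relaxes over 8 slack slots:
assert codel_backoff(8, 5) == 3    # partway through backoff
assert codel_backoff(8, 20) == 0   # fully relaxed, and it stays at zero
```

The appeal is symmetry: the same schedule that escalated the drop rate also de-escalates it, so recovery time is proportional to how aggressive the controller had become.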
On 05/20/2016 09:35 AM, Jonathan Morton wrote:
On 20 May, 2016, at 19:20, Rick Jones wrote:
I suppose if said software were to dive below the socket interface
it could find-out, though that will tend to lack portability.
I’m a little fuzzy on UDP socket semantics.
Could the sender set DF on a…
> On 20 May, 2016, at 20:26, David Lang wrote:
>
> iperf3 defaults to a mss of 1200 bytes, well below the MTU
That’s not what was implied by the test run earlier. It turned out to be
producing large, heavily fragmented packets by default.
Unless I’ve somehow completely got the wrong end of the stick…
> On 20 May, 2016, at 20:01, Rick Jones wrote:
>
> But I haven't seen that EMSGSIZE happen with netperf UDP tests - could be
> though I've never run them in an environment which triggered PTMUD.
It’s entirely possible that netperf and/or iperf3 are (ab)using the
IP_MTU_DISCOVERY socket option…
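For anyone wanting to experiment: the Linux option is spelled IP_MTU_DISCOVER, and a minimal sketch of setting it (constant fallbacks taken from <linux/in.h>; Linux-specific) looks like:

```python
import socket

# Linux per-socket PMTU discovery mode.  Python exposes these constants on
# Linux; the numeric fallbacks are the <linux/in.h> values.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DONT = getattr(socket, "IP_PMTUDISC_DONT", 0)  # DF off: fragment
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)      # DF on

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# With IP_PMTUDISC_DO, a datagram larger than the path MTU is rejected
# locally with EMSGSIZE instead of being fragmented -- which is how a test
# tool could notice it is about to send fragmented traffic.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
mode = s.getsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER)
s.close()
```

Whether netperf or iperf3 actually touch this option is exactly the open question in the thread; the snippet only shows the mechanism being speculated about.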
On 05/20/2016 08:12 AM, Jonathan Morton wrote:
On 20 May, 2016, at 17:04, David Lang wrote:
Is it possible to get speed testing software to detect that it's receiving
fragments and warn about that?
Do iperf3’s maintainers accept patches?
Netperf's maintainer has been known to accept patches…