On Fri, 2019-03-01 at 22:02 -0500, George Amanakis wrote:
>
> I will setup a vlan and try again.
>
I replicated Pete's VLAN setup, and I am getting fairness:
IP1,2 <---> router enp1s0 / router enp1s0.100 <---> server
IP1, 1 up: 46.73 mbit/s
IP2, 32 up: 46.91
I ran some tests with shaping ingress (ifb) and egress on the same
physical interface, albeit without vlans:
IP1, 1 up: 45.07
IP2, 32 up: 45.06
IP1, 32 down: 45.64 mbit/s
IP2, 1 down: 44.50
This is on x86_64, kernel 4.20.12, iproute 4.20.
qdisc cake 8006: dev enp1s0 root refcnt 2
In a hash collision the host_bulk_flow_count values must
be decremented on the old hosts and incremented on the new ones *if* the
queue is in the bulk set.
Reported-by: Pete Heist
Signed-off-by: George Amanakis
---
sch_cake.c | 92 +-
1 file changed
Reported-by: Pete Heist
Signed-off-by: George Amanakis
---
sch_cake.c | 84 +++---
1 file changed, 55 insertions(+), 29 deletions(-)
diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..ed3fbd9 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -146,8 +146,8
I tried Toke's and Jonathan's suggestion, dropping the
sparse_flow_count. The results are the same (fairness).
With this patch the host_bulk_flow_count is not updated in a hash
collision; does this make sense?
---8<---
Client A:
Data file written to
Updated version with Jonathan's suggestion. Fairness is preserved.
Client A/B <--> [enp4s0] router [enp1s0] <--> server
tc qdisc add dev enp1s0 root cake bandwidth 100mbit dual-srchost
besteffort
tc qdisc add dev enp4s0 root cake bandwidth 100mbit dual-dsthost
I recently rewrote the patch (out-of-tree cake) so as to keep track of the
bulk/sparse flow-count per host. I have been testing it for about a month
on a WRT1900ACS and it runs fine.
Would love to hear if Jonathan or anybody else has thought of
implementing something different.
Best,
George
A better version of the patch for testing.
Setup:
IP{1,2}(flent) <> Router <> Server(netserver)
Router:
tc qdisc add dev enp1s0 root cake bandwidth 100mbit dual-srchost besteffort
tc qdisc add dev enp4s0 root cake bandwidth 100mbit dual-dsthost besteffort
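The per-IP results below look like flent runs against the netserver box. A sketch of invocations that would produce them; the test names and the upload_streams parameter are flent's, but the exact runs used here are an assumption:

```shell
# Hypothetical flent runs matching the setup above (server runs netserver).
# One host uploads on a single flow while the other uses 32 flows.
flent tcp_nup -H server -l 60 --test-parameter upload_streams=1  -t IP1-1up
flent tcp_nup -H server -l 60 --test-parameter upload_streams=32 -t IP2-32up
```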
IP1:
Data file written to
I think what is happening here is that if a client has flows such as "a
(bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
compete with the incoming bulk traffic of flow "b". By "compete" I mean
in terms of flow selection.
So if we adjust the host_load to be the same with the
With Toke's patch I can see these warnings using veth, too.
This is on a 4.14.53 kernel, using sch_cake/master branch.
However, I can see them only if the counter is set to throw a warning at
a lower limit, e.g. 1k instead of 100k.
This is what I get:
cake in unlimited mode --> aborts in loop
Did you try to update tc to the latest commit in github.com/dtaht/tc-adv?
Your output suggests that the tc you are using doesn't understand some
of the options.
Could you update tc to the latest commit from the above repo and try again?
George
On 6/10/2018 8:20 PM, Kristjan Onu wrote:
>
---
sch_cake.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/sch_cake.c b/sch_cake.c
index 3eb743d..52ba3d7 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -56,7 +56,6 @@
#include
#include
#include
-#include "pkt_sched.h"
#include
#include
#include
--
2.17.0
From: gamanakis <gamana...@gmail.com>
Signed-off-by: George Amanakis <gamana...@gmail.com>
---
sch_cake.c | 18 +-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/sch_cake.c b/sch_cake.c
index 9f2acb5..22197e0 100644
--- a/sch_cake.c
+++ b/sch_cake.
It seems the change was introduced here:
https://patchwork.kernel.org/patch/9671147/
I drafted the following very simplistic patch, could somebody take a
look at it?
From 0c3c135cc65fa1fdd2521490c8f1edee41edcea2 Mon Sep 17 00:00:00 2001
From: gamanakis
Date: Sun, 18 Mar
Dear All,
Merry Christmas to the list members!
I was doing some real-world testing using dslreports.com/speedtest with
my 10/2mbps comcast cable line, and found out by bisecting that
commit 0d8f30faa3d4bb2bc87a382f18d8e0f3e4e56ea (Dynamically adjust
sojourn target according to flow count and
I will try to get a run using the same setup.
On Thu, 2017-12-07 at 04:27 +0200, Jonathan Morton wrote:
> The latest push now enforces 4 x MTU x flows on the target intra-flow
> latency. When lightly loaded, the normal target still applies.
> This gives a noticeable improvement on the goodput
Whatever your primary use case is? My biggest concern is that it
simply not crash: 300-second tests, 1200-second tests, all night
long, over and over, pounding it flat.
My home router runs x86_64 Archlinux on net-next with cake and
nf_conntrack compiled as integrals. TSO, GSO and GRO
branch, with Kevin's latest.
Trying another build here, with "m", takes hours. Thanks for trying 'y'!
As for what's going wrong... is nf_conntrack being built? As a module?
As integral? We've always built cake and nf_conntrack as modules
before.
On Sat, Nov 25, 2017 at 4:51 PM, George Amanak
grep'ing in net-next for nf_ct_get_tuplepr reveals these are still in use.
On 11/25/2017 7:49 PM, George Amanakis wrote:
I tried Kevin's latest commit, now it fails with:
CHK include/config/kernel.release
CHK include/generated/uapi/linux/version.h
DESCEND objtool
CHK
] Error 1
Everything is selected (Y or M) under "Core Netfilter Configuration".
Are these functions deprecated in net-next?
George
On 11/25/2017 4:42 PM, Dave Taht wrote:
bad merge. darn it.
On Sat, Nov 25, 2017 at 11:57 AM, George Amanakis <gamana...@gmail.com> wrote:
I think we missed an "allocate_host" in cake_hash(), line ~810:
if (allocate_host) {
	srchost_idx = srchost_hash % CAKE_QUEUES;
	inner_hash = srchost_idx % CAKE_SET_WAYS;
	outer_hash = srchost_idx - inner_hash;
	for (i = 0, k = inner_hash; i <
FYI, I am testing the latest cobalt branch, and commit d187cd1 breaks
compilation on 4.9.64.
make -C /lib/modules/4.9.64-1-lts/build SUBDIRS=/home/scy/src2/sch_cake
modules
LDFLAGS_MODULE="--build-id=0xf3fb15754862299786d7e6cf1bc26e289a96dff9"
-- Forwarded message --
From: "G. Amanakis" <g_amana...@yahoo.com>
To: Dave Taht <d...@taht.net>, George Amanakis via Cake
<cake@lists.bufferbloat.net>
Cc:
Bcc:
Date: Tue, 14 Nov 2017 17:13:16 -0500
Subject: Re: [Cake] total
--- Begin Message ---
Dear David,
I agree. My point is that currently ingress mode seems to be dropping
more packets than necessary to keep senders from bottlenecking the
connection (when there is a large number of concurrent flows, >8). And
right now, ingress mode is the only mode that
--- Begin Message ---
I meant proportionally to (1-1/sqrt(x)).
On 11/13/2017 8:51 PM, George Amanakis wrote:
I am exploring this idea further. If q->time_next_packet is
incremented for dropped packets proportionally to (1-1/x), where x is
the count of all flows in the tin that is being ser
	INIT_LIST_HEAD(&flow->flowchain);
	cobalt_vars_init(&flow->cvars);
+	cobalt_vars_init(&flow->cvars2);
	q->overflow_heap[k].t = i;
	q->overflow_heap[k].b = j;
---8<---
On 11/11/2017 1
--- Begin Message ---
Dear All,
i would like to make a small donation for the development of cake. Is
"https://www.bufferbloat.net/projects/cerowrt/wiki/Donate/" up-to-date?
Thank you,
George Amanakis
--- End Message ---
Cake mailing list
--- Begin Message ---
I totally understand what you are saying. However, I believe cake's
egress and ingress modes currently behave as two extremes. One could
argue that neither of them is the golden mean. With a patch in ingress
mode (see below) and a single host using 32 flows to download I
--- Begin Message ---
Indeed. Also interplanetary essentially omits cake_advance_shaper for
dropped packets (since cobalt never drops that way), almost disabling
ingress mode. Which leads to other hosts having pings to WAN > 500ms
when one of them is using many flows.
What I think is
--- Begin Message ---
I did more testing regarding this issue.
I found out that if I use "interplanetary", I do not observe the
behaviour I mentioned before. I still need to test the responsiveness of
other hosts while one of them is using too many flows with this setting.
Setup is the same,
--- Begin Message ---
Dear Dave,
I could get the following dumps:
*lan.cake.pcap
captured on router, lan iface, with cake
#lan.cake.ping.pcap
captured on router, lan iface, with cake, ping from host other than w10 update
$lan.nocake.pcap
captured on router, lan iface, without cake
--- Begin Message ---
I think I have another install not updated to "Fall Creators Update". I
will try and get a capture using Wireshark. Or should I rather capture
on the router?
George
On 10/31/2017 9:06 PM, Antoine Deschênes wrote:
Fall Creators update killed my whole internet connection
duler's queue is still
empty at that point.
>On Sunday, September 17, 2017, 10:34:31 AM EDT, George Amanakis
><g_amana...@yahoo.com> wrote:
>Dear Jonathan,
>I am looking/testing the latest cobalt updates. Most of them make sense.
>I have a question though.
>At t
--- Begin Message ---
Dear Jonathan,
I am looking/testing the latest cobalt updates. Most of them make sense.
I have a question though.
At the end of cake_dequeue() you schedule the watchdog at
"now+q->tins[i].cparams.target" if "!sch->q.qlen". This seems very
reasonable. Instead in the
I think I got the point.
For example in diffserv3: tdiff values -2, 2, -1 for tins 1, 0, 2 (in
iteration order). Then "if (tdiff <= 0 || tdiff <= best_time)" returns
best_time = -1 and tin 2: the highest-priority tin with a packet overdue.
"if (tdiff <= best_time)" would return best_time = -2 and tin 1: not the
highest-priority tin.
>The last value less than or equal to zero will always win as best_tin.
>If all tdiff values are positive then best_time will end up as the lowest
>value however.
I agree with xnoreq. The algorithm as it is will not always return the least
overdue tin. It returns either the lowest positive
Hi Jonathan,
I have some questions regarding the algorithm to choose a tin to dequeue from
in sch_cake.c:
---8<---
int oi, best_tin = 0;
s64 best_time = 0xFFFFFFFFFFFFFFFFUL;
for (oi = 0; oi < q->tin_cnt; oi++) {
	int tin = q->tin_order[oi];