Re: [Cake] [PATCH net-next v9 2/7] sch_cake: Add ingress mode

2018-05-08 Thread Sebastian Moeller


> On May 8, 2018, at 16:34, Toke Høiland-Jørgensen  wrote:
> [...]
> 
> This commit also adds a separate switch to enable ingress mode rate
> autoscaling. If enabled, the autoscaling code will observe the actual
> traffic rate and adjust the shaper rate to match it. This can help avoid
> latency increases in the case where the actual bottleneck rate decreases
> below the shaped rate. The scaling filters out spikes with an EWMA filter.
[...]

This reminds me of a discussion I had with a user who tried the 
autorate-ingress feature unsuccessfully. It seems he would have needed an 
additional toggle to set a lower bound for the bandwidth, because due to some 
quirkiness of his link the ingress did not so much get autorated as throttled. So 
@Jonathan and @Toke, is that just an unfortunate soul that can't be helped, or 
does such an additional toggle make some sense (if only as a belt-and-suspenders 
kind of thing)? If yes, I might try to actually test it.
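
For concreteness, such a toggle would presumably amount to little more than a
floor applied to the autorated rate, hooked in where cake_enqueue() applies the
autorate result. A purely hypothetical sketch (TCA_CAKE_MIN_RATE and
q->min_rate_bps are invented names and do not exist in the posted patches):

    /* hypothetical: in cake_change(), read a configured rate floor */
    if (tb[TCA_CAKE_MIN_RATE])
            q->min_rate_bps = nla_get_u64(tb[TCA_CAKE_MIN_RATE]);

    /* hypothetical: in cake_enqueue(), never autorate below that floor */
    if (now - q->last_reconfig_time > (NSEC_PER_SEC / 4)) {
            q->rate_bps = (q->avg_peak_bandwidth * 15) >> 4;
            if (q->rate_bps < q->min_rate_bps)
                    q->rate_bps = q->min_rate_bps;
            cake_reconfigure(sch);
    }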

Best Regards
Sebastian
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


[Cake] [PATCH net-next v9 1/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-08 Thread Toke Høiland-Jørgensen
sch_cake targets the home router use case and is intended to squeeze the
most bandwidth and latency out of even the slowest ISP links and routers,
while presenting an API simple enough that even an ISP can configure it.

Example of use on a cable ISP uplink:

tc qdisc add dev eth0 cake bandwidth 20Mbit nat docsis ack-filter

To shape a cable download link (ifb and tc-mirred setup elided)

tc qdisc add dev ifb0 cake bandwidth 200mbit nat docsis ingress wash

CAKE is filled with:

* A hybrid Codel/Blue AQM algorithm, "Cobalt", tied to an FQ_Codel-derived
  flow queuing system, which auto-configures based on the bandwidth.
* A novel "triple-isolate" mode (the default) which balances per-host
  and per-flow FQ even through NAT.
* A deficit-based shaper that can also be used in an unlimited mode.
* 8-way set-associative hashing to reduce flow collisions to a minimum
  (see the sketch after this list).
* A reasonable interpretation of various diffserv latency/loss tradeoffs.
* Support for zeroing diffserv markings for entering and exiting traffic.
* Support for interacting well with Docsis 3.0 shaper framing.
* Extensive support for DSL framing types.
* Support for ACK filtering.
* Extensive statistics for measuring loss, ECN markings, and latency
  variation.
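
As a rough illustration of the set-associative hashing mentioned above, a
simplified sketch of an 8-way lookup (generic code, not copied from sch_cake;
the real implementation also handles host hashing, flow aging and eviction):

    #define CAKE_SET_WAYS 8

    /* Sketch: the flow hash selects a set of 8 adjacent buckets, and a tag
     * match within the set resolves most collisions without evicting an
     * unrelated flow.  Assumes num_queues is a multiple of CAKE_SET_WAYS.
     */
    static u32 find_flow(u32 flow_hash, const u32 *tags, u32 num_queues)
    {
            u32 set_start = (flow_hash % num_queues) & ~(CAKE_SET_WAYS - 1);
            u32 i;

            for (i = 0; i < CAKE_SET_WAYS; i++) {
                    u32 idx = set_start + i;

                    if (tags[idx] == flow_hash)     /* existing flow */
                            return idx;
            }

            /* no match: the real code reclaims an empty or decaying way
             * here; this sketch just falls back to the first way
             */
            return set_start;
    }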

A paper describing the design of CAKE is available at
https://arxiv.org/abs/1804.07617

This patch adds the base shaper and packet scheduler, while subsequent
commits add the optional (configurable) features. The full userspace API
and most data structures are included in this commit, but options not
understood in the base version will be ignored.

Various versions have been baking as an out-of-tree build for kernel
versions going back to 3.10, as the embedded router world has been
running a few years behind mainline Linux. A stable version has been
generally available on lede-17.01 and later.

sch_cake replaces a combination of iptables, tc filter, htb and fq_codel
in the sqm-scripts, with sane defaults and vastly simpler configuration.

CAKE's principal author is Jonathan Morton, with contributions from
Kevin Darbyshire-Bryant, Toke Høiland-Jørgensen, Sebastian Moeller,
Ryan Mounce, Guido Sarducci, Dean Scarff, Nils Andreas Svee, Dave Täht,
and Loganaden Velvindron.

Testing from Pete Heist, Georgios Amanakis, and the many other members of
the cake@lists.bufferbloat.net mailing list.

tc -s qdisc show dev eth2
qdisc cake 1: root refcnt 2 bandwidth 100Mbit diffserv3 triple-isolate rtt 100.0ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 500b
 capacity estimate: 100Mbit
 min/max network layer size:            65535 /       0
 min/max overhead-adjusted size:        65535 /       0
 average network hdr offset:                0

                  Bulk   Best Effort        Voice
  thresh      6250Kbit       100Mbit       25Mbit
  target         5.0ms         5.0ms        5.0ms
  interval     100.0ms       100.0ms      100.0ms
  pk_delay         0us           0us          0us
  av_delay         0us           0us          0us
  sp_delay         0us           0us          0us
  pkts               0             0            0
  bytes              0             0            0
  way_inds           0             0            0
  way_miss           0             0            0
  way_cols           0             0            0
  drops              0             0            0
  marks              0             0            0
  ack_drop           0             0            0
  sp_flows           0             0            0
  bk_flows           0             0            0
  un_flows           0             0            0
  max_len            0             0            0
  quantum          300          1514          762

Tested-by: Pete Heist 
Tested-by: Georgios Amanakis 
Signed-off-by: Dave Taht 
Signed-off-by: Toke Høiland-Jørgensen 
---
 include/uapi/linux/pkt_sched.h |  105 ++
 net/sched/Kconfig  |   11 
 net/sched/Makefile |1 
 net/sched/sch_cake.c   | 1729 
 4 files changed, 1846 insertions(+)
 create mode 100644 net/sched/sch_cake.c

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 37b5096ae97b..bc581473c0b0 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -934,4 +934,109 @@ enum {
 
 #define TCA_CBS_MAX (__TCA_CBS_MAX - 1)
 
+/* CAKE */
+enum {
+   TCA_CAKE_UNSPEC,
+   TCA_CAKE_BASE_RATE,
+   TCA_CAKE_DIFFSERV_MODE,
+   TCA_CAKE_ATM,
+   TCA_CAKE_FLOW_MODE,
+   TCA_CAKE_OVERHEAD,
+   TCA_CAKE_RTT,
+   TCA_CAKE_TARGET,
+   TCA_CAKE_AUTORATE,
+   TCA_CAKE_MEMORY,
+   TCA_CAKE_NAT,
+   TCA_CAKE_RAW,
+   TCA_CAKE_WASH,
+   TCA_CAKE_MPU,
+   TCA_CAKE_INGRESS,
+   TCA_CAKE_ACK_FILTER,
+   TCA_CAKE_SPLIT_GSO,
+   __TCA_CAKE_MAX
+};

[Cake] [PATCH net-next v9 4/7] sch_cake: Add NAT awareness to packet classifier

2018-05-08 Thread Toke Høiland-Jørgensen
When CAKE is deployed on a gateway that also performs NAT (which is a
common deployment mode), the host fairness mechanism cannot distinguish
internal hosts from each other, and so fails to work correctly.

To fix this, we add an optional NAT awareness mode, which will query the
kernel conntrack mechanism to obtain the pre-NAT addresses for each packet
and use that in the flow and host hashing.

When the shaper is enabled and the host is already performing NAT, the cost
of this lookup is negligible. However, in unlimited mode with no NAT being
performed, there is a significant CPU cost at higher bandwidths. For this
reason, the feature is turned off by default.

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |   73 ++
 1 file changed, 73 insertions(+)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 7e57eef5f949..a227a685bd58 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -71,6 +71,12 @@
 #include 
 #include 
 
+#if IS_REACHABLE(CONFIG_NF_CONNTRACK)
+#include 
+#include 
+#include 
+#endif
+
 #define CAKE_SET_WAYS (8)
 #define CAKE_MAX_TINS (8)
 #define CAKE_QUEUES (1024)
@@ -522,6 +528,61 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,
return drop;
 }
 
+#if IS_REACHABLE(CONFIG_NF_CONNTRACK)
+
+static void cake_update_flowkeys(struct flow_keys *keys,
+const struct sk_buff *skb)
+{
+   enum ip_conntrack_info ctinfo;
+   bool rev = false;
+
+   struct nf_conn *ct;
+   const struct nf_conntrack_tuple *tuple;
+
+   if (tc_skb_protocol(skb) != htons(ETH_P_IP))
+   return;
+
+   ct = nf_ct_get(skb, &ctinfo);
+   if (ct) {
+   tuple = nf_ct_tuple(ct, CTINFO2DIR(ctinfo));
+   } else {
+   const struct nf_conntrack_tuple_hash *hash;
+   struct nf_conntrack_tuple srctuple;
+
+   if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb),
+  NFPROTO_IPV4, dev_net(skb->dev),
+  &srctuple))
+   return;
+
+   hash = nf_conntrack_find_get(dev_net(skb->dev),
+&nf_ct_zone_dflt,
+&srctuple);
+   if (!hash)
+   return;
+
+   rev = true;
+   ct = nf_ct_tuplehash_to_ctrack(hash);
+   tuple = nf_ct_tuple(ct, !hash->tuple.dst.dir);
+   }
+
+   keys->addrs.v4addrs.src = rev ? tuple->dst.u3.ip : tuple->src.u3.ip;
+   keys->addrs.v4addrs.dst = rev ? tuple->src.u3.ip : tuple->dst.u3.ip;
+
+   if (keys->ports.ports) {
+   keys->ports.src = rev ? tuple->dst.u.all : tuple->src.u.all;
+   keys->ports.dst = rev ? tuple->src.u.all : tuple->dst.u.all;
+   }
+   if (rev)
+   nf_ct_put(ct);
+}
+#else
+static void cake_update_flowkeys(struct flow_keys *keys,
+const struct sk_buff *skb)
+{
+   /* There is nothing we can do here without CONNTRACK */
+}
+#endif
+
 /* Cake has several subtle multiple bit settings. In these cases you
  *  would be matching triple isolate mode as well.
  */
@@ -549,6 +610,9 @@ static u32 cake_hash(struct cake_tin_data *q, const struct 
sk_buff *skb,
skb_flow_dissect_flow_keys(skb, &keys,
   FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
 
+   if (flow_mode & CAKE_FLOW_NAT_FLAG)
+   cake_update_flowkeys(&keys, skb);
+
/* flow_hash_from_keys() sorts the addresses by value, so we have
 * to preserve their order in a separate data structure to treat
 * src and dst host addresses as independently selectable.
@@ -1717,6 +1781,12 @@ static int cake_change(struct Qdisc *sch, struct nlattr 
*opt,
q->flow_mode = (nla_get_u32(tb[TCA_CAKE_FLOW_MODE]) &
CAKE_FLOW_MASK);
 
+   if (tb[TCA_CAKE_NAT]) {
+   q->flow_mode &= ~CAKE_FLOW_NAT_FLAG;
+   q->flow_mode |= CAKE_FLOW_NAT_FLAG *
+   !!nla_get_u32(tb[TCA_CAKE_NAT]);
+   }
+
if (tb[TCA_CAKE_RTT]) {
q->interval = nla_get_u32(tb[TCA_CAKE_RTT]);
 
@@ -1881,6 +1951,9 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff 
*skb)
if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, q->ack_filter))
goto nla_put_failure;
 
+   if (nla_put_u32(skb, TCA_CAKE_NAT, !!(q->flow_mode & 
CAKE_FLOW_NAT_FLAG)))
+   goto nla_put_failure;
+
return nla_nest_end(skb, opts);
 
 nla_put_failure:



[Cake] [PATCH net-next v9 3/7] sch_cake: Add optional ACK filter

2018-05-08 Thread Toke Høiland-Jørgensen
The ACK filter is an optional feature of CAKE which is designed to improve
performance on links with very asymmetrical rate limits. On such links
(which are unfortunately quite prevalent, especially for DSL and cable
subscribers), the downstream throughput can be limited by the number of
ACKs capable of being transmitted in the *upstream* direction.

Filtering ACKs can, in general, have adverse effects on TCP performance
because it interferes with ACK clocking (especially in slow start), and it
reduces the flow's resiliency to ACKs being dropped further along the path.
To alleviate these drawbacks, the ACK filter in CAKE tries its best to
always keep enough ACKs queued to ensure forward progress in the TCP flow
being filtered. It does this by only filtering redundant ACKs. In its
default 'conservative' mode, the filter will always keep at least two
redundant ACKs in the queue, while in 'aggressive' mode, it will filter
down to a single ACK.

The ACK filter works by inspecting the per-flow queue on every packet
enqueue. Starting at the head of the queue, the filter looks for another
eligible packet to drop (so the ACK being dropped is always closer to the
head of the queue than the packet being enqueued). An ACK is eligible only
if it ACKs *fewer* cumulative bytes than the new packet being enqueued.
This prevents duplicate ACKs from being filtered (unless there are also SACK
options present), to avoid interfering with retransmission logic. In
aggressive mode, an eligible packet is always dropped, while in
conservative mode, at least two ACKs are kept in the queue. Only pure ACKs
(with no data segments) are considered eligible for dropping, but when an
ACK with data segments is enqueued, this can cause another pure ACK to
become eligible for dropping.
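
To make the eligibility rule concrete, a minimal sketch of the cumulative-ACK
comparison, using the kernel's before()/after() sequence-number helpers; this
is a simplification of the real check, which also parses and compares SACK
options and other TCP header fields:

    #include <net/tcp.h>    /* before()/after() sequence-number helpers */

    /* Sketch: a queued pure ACK is eligible for dropping only if the ACK
     * being enqueued acknowledges strictly more cumulative data.  Equal
     * ack_seq values (duplicate ACKs) are never filtered, so fast
     * retransmit signalling is preserved.
     */
    static bool ack_is_eligible(const struct tcphdr *queued,
                                const struct tcphdr *incoming)
    {
            return after(ntohl(incoming->ack_seq), ntohl(queued->ack_seq));
    }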

The approach described above ensures that this ACK filter avoids most of
the drawbacks of a naive filtering mechanism that only keeps flow state but
does not inspect the queue. This is the rationale for including the ACK
filter in CAKE itself rather than as a separate module (as a TC filter, for
instance).

Our performance evaluation has shown that on a 30/1 Mbps link with a
bidirectional traffic test (RRUL), turning on the ACK filter on the
upstream link improves downstream throughput by ~20% (both modes) and
upstream throughput by ~12% in conservative mode and ~40% in aggressive
mode, at the cost of ~5ms of inter-flow latency due to the increased
congestion.

In *really* pathological cases, the effect can be a lot more; for instance,
the ACK filter increases the achievable downstream throughput on a link
with 100 Kbps in the upstream direction by an order of magnitude (from ~2.5
Mbps to ~25 Mbps).

Finally, even though we consider the ACK filter to be safer than most, we
do not recommend turning it on everywhere: on more symmetrical link
bandwidths the effect is negligible at best.

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |  264 +-
 1 file changed, 258 insertions(+), 6 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index aeafbb95becd..7e57eef5f949 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -128,7 +128,6 @@ struct cake_flow {
/* this stuff is all needed per-flow at dequeue time */
struct sk_buff*head;
struct sk_buff*tail;
-   struct sk_buff*ackcheck;
struct list_head  flowchain;
s32   deficit;
struct cobalt_vars cvars;
@@ -748,9 +747,6 @@ static struct sk_buff *dequeue_head(struct cake_flow *flow)
if (skb) {
flow->head = skb->next;
skb->next = NULL;
-
-   if (skb == flow->ackcheck)
-   flow->ackcheck = NULL;
}
 
return skb;
@@ -768,6 +764,239 @@ static void flow_queue_add(struct cake_flow *flow, struct 
sk_buff *skb)
skb->next = NULL;
 }
 
+static struct iphdr *cake_get_iphdr(const struct sk_buff *skb,
+   struct ipv6hdr *buf)
+{
+   unsigned int offset = skb_network_offset(skb);
+   struct iphdr *iph;
+
+   iph = skb_header_pointer(skb, offset, sizeof(struct iphdr), buf);
+
+   if (!iph)
+   return NULL;
+
+   if (iph->version == 4 && iph->protocol == IPPROTO_IPV6)
+   return skb_header_pointer(skb, offset + iph->ihl * 4,
+ sizeof(struct ipv6hdr), buf);
+
+   else if (iph->version == 4)
+   return iph;
+
+   else if (iph->version == 6)
+   return skb_header_pointer(skb, offset, sizeof(struct ipv6hdr),
+ buf);
+
+   return NULL;
+}
+
+static struct tcphdr *cake_get_tcphdr(const struct sk_buff *skb,
+ void *buf, unsigned int bufsize)
+{
+   unsigned int offset = skb_network_offset(skb);
+   const struct ipv6hdr *ipv6h;
+   

[Cake] [PATCH net-next v9 5/7] sch_cake: Add DiffServ handling

2018-05-08 Thread Toke Høiland-Jørgensen
This adds support for DiffServ-based priority queueing to CAKE. If the
shaper is in use, each priority tier gets its own virtual clock, which
limits that tier's rate to a fraction of the overall shaped rate, to
discourage trying to game the priority mechanism.

CAKE defaults to a simple, three-tier mode that interprets most code points
as "best effort", but places CS1 traffic into a low-priority "bulk" tier
which is assigned 1/16 of the total rate, and a few code points indicating
latency-sensitive or control traffic (specifically TOS4, VA, EF, CS6, CS7)
into a "latency sensitive" high-priority tier, which is assigned 1/4 rate.
The other supported DiffServ modes are a 4-tier mode matching the 802.11e
precedence rules, as well as two 8-tier modes, one of which implements
strict precedence of the eight priority levels.
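
For a concrete sense of the tier thresholds in the default three-tier mode, a
quick sketch of the arithmetic at a 100 Mbit/s shaper rate (matching the
example stats output in patch 1/7; whether the real code uses shifts or a
per-tin divisor is an implementation detail):

    /* diffserv3 tier thresholds at a 100 Mbit/s shaped rate; the best
     * effort tier keeps the full rate.
     */
    u64 rate_bps     = 100000000ULL;   /* 100 Mbit/s                      */
    u64 bulk_thresh  = rate_bps >> 4;  /* 1/16 -> 6.25 Mbit ("6250Kbit")  */
    u64 voice_thresh = rate_bps >> 2;  /* 1/4  -> 25 Mbit                 */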

This commit also adds an optional DiffServ 'wash' mode, which will zero out
the DSCP fields of any packet passing through CAKE. While this can
technically be done with other mechanisms in the kernel, having the feature
available in CAKE significantly decreases configuration complexity; and the
implementation cost is low on top of the other DiffServ-handling code.

Filters and applications can set the skb->priority field to override the
DSCP-based classification into tiers. If TC_H_MAJ(skb->priority) matches CAKE's
qdisc handle, the minor number will be interpreted as a priority tier if it is
less than or equal to the number of configured priority tiers.
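
A minimal sketch of that override check, assuming the kernel's TC_H_MAJ()/
TC_H_MIN() handle macros and approximate CAKE field names (tin_cnt, tin_order,
tin_index); the surrounding tin-selection code is simplified:

    /* Sketch: let filters or applications force a tier via skb->priority.
     * If the major number matches this qdisc's handle and the minor number
     * names a valid tier, use it; otherwise fall back to the DSCP lookup.
     */
    if (TC_H_MAJ(skb->priority) == sch->handle &&
        TC_H_MIN(skb->priority) > 0 &&
        TC_H_MIN(skb->priority) <= q->tin_cnt)
            tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
    else
            tin = q->tin_index[cake_handle_diffserv(skb, wash)];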

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |  408 +-
 1 file changed, 401 insertions(+), 7 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index a227a685bd58..6f9980a6603e 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -308,6 +308,68 @@ static void cobalt_set_enqueue_time(struct sk_buff *skb,
 
 static u16 quantum_div[CAKE_QUEUES + 1] = {0};
 
+/* Diffserv lookup tables */
+
+static const u8 precedence[] = {
+   0, 0, 0, 0, 0, 0, 0, 0,
+   1, 1, 1, 1, 1, 1, 1, 1,
+   2, 2, 2, 2, 2, 2, 2, 2,
+   3, 3, 3, 3, 3, 3, 3, 3,
+   4, 4, 4, 4, 4, 4, 4, 4,
+   5, 5, 5, 5, 5, 5, 5, 5,
+   6, 6, 6, 6, 6, 6, 6, 6,
+   7, 7, 7, 7, 7, 7, 7, 7,
+};
+
+static const u8 diffserv8[] = {
+   2, 5, 1, 2, 4, 2, 2, 2,
+   0, 2, 1, 2, 1, 2, 1, 2,
+   5, 2, 4, 2, 4, 2, 4, 2,
+   3, 2, 3, 2, 3, 2, 3, 2,
+   6, 2, 3, 2, 3, 2, 3, 2,
+   6, 2, 2, 2, 6, 2, 6, 2,
+   7, 2, 2, 2, 2, 2, 2, 2,
+   7, 2, 2, 2, 2, 2, 2, 2,
+};
+
+static const u8 diffserv4[] = {
+   0, 2, 0, 0, 2, 0, 0, 0,
+   1, 0, 0, 0, 0, 0, 0, 0,
+   2, 0, 2, 0, 2, 0, 2, 0,
+   2, 0, 2, 0, 2, 0, 2, 0,
+   3, 0, 2, 0, 2, 0, 2, 0,
+   3, 0, 0, 0, 3, 0, 3, 0,
+   3, 0, 0, 0, 0, 0, 0, 0,
+   3, 0, 0, 0, 0, 0, 0, 0,
+};
+
+static const u8 diffserv3[] = {
+   0, 0, 0, 0, 2, 0, 0, 0,
+   1, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 2, 0, 2, 0,
+   2, 0, 0, 0, 0, 0, 0, 0,
+   2, 0, 0, 0, 0, 0, 0, 0,
+};
+
+static const u8 besteffort[] = {
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+};
+
+/* tin priority order for stats dumping */
+
+static const u8 normal_order[] = {0, 1, 2, 3, 4, 5, 6, 7};
+static const u8 bulk_order[] = {1, 0, 2, 3};
+
 #define REC_INV_SQRT_CACHE (16)
 static u32 cobalt_rec_inv_sqrt_cache[REC_INV_SQRT_CACHE] = {0};
 
@@ -1225,6 +1287,46 @@ static unsigned int cake_drop(struct Qdisc *sch, struct 
sk_buff **to_free)
return idx + (tin << 16);
 }
 
+static void cake_wash_diffserv(struct sk_buff *skb)
+{
+   switch (skb->protocol) {
+   case htons(ETH_P_IP):
+   ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+   break;
+   case htons(ETH_P_IPV6):
+   ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+   break;
+   default:
+   break;
+   }
+}
+
+static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash)
+{
+   u8 dscp;
+
+   switch (skb->protocol) {
+   case htons(ETH_P_IP):
+   dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2;
+   if (wash && dscp)
+   ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+   return dscp;
+
+   case htons(ETH_P_IPV6):
+   dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> 2;
+   if (wash && dscp)
+   ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+   return dscp;
+
+   case htons(ETH_P_ARP):
+   return 0x38;  /* CS7 - Net Control */
+
+   default:
+   /* If there is no Diffserv field, treat 

[Cake] [PATCH net-next v9 2/7] sch_cake: Add ingress mode

2018-05-08 Thread Toke Høiland-Jørgensen
The ingress mode is meant to be enabled when CAKE runs downlink of the
actual bottleneck (such as on an IFB device). The mode changes the shaper
to also account dropped packets to the shaped rate, as these have already
traversed the bottleneck.

Enabling ingress mode will also tune the AQM to always keep at least two
packets queued *for each flow*. This is done by scaling the minimum queue
occupancy level that will disable the AQM by the number of active bulk
flows. The rationale for this is that retransmits are more expensive in
ingress mode, since dropped packets have to traverse the bottleneck again
when they are retransmitted; thus, being more lenient and keeping a minimum
number of packets queued will improve throughput in cases where the number
of active flows is so large that they saturate the bottleneck even at
their minimum window size.

This commit also adds a separate switch to enable ingress mode rate
autoscaling. If enabled, the autoscaling code will observe the actual
traffic rate and adjust the shaper rate to match it. This can help avoid
latency increases in the case where the actual bottleneck rate decreases
below the shaped rate. The scaling filters out spikes with an EWMA filter.
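
The EWMA itself is a simple shift-based filter; a sketch consistent with how
cake_ewma() is invoked in the hunk below (the helper itself is not part of
this hunk), where a smaller shift weights the new sample more heavily:

    /* Shift-based EWMA: shift 2 reacts quickly, shift 8 reacts slowly,
     * so the call sites track increases faster than decreases.
     */
    static u64 cake_ewma(u64 avg, u64 sample, u32 shift)
    {
            avg -= avg >> shift;
            avg += sample >> shift;
            return avg;
    }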

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |   78 +++---
 1 file changed, 74 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index c3446a99341f..aeafbb95becd 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -441,7 +441,8 @@ static bool cobalt_queue_empty(struct cobalt_vars *vars,
 static bool cobalt_should_drop(struct cobalt_vars *vars,
   struct cobalt_params *p,
   cobalt_time_t now,
-  struct sk_buff *skb)
+  struct sk_buff *skb,
+  u32 bulk_flows)
 {
bool drop = false;
 
@@ -466,6 +467,7 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,
cobalt_tdiff_t schedule = now - vars->drop_next;
 
bool over_target = sojourn > p->target &&
+  sojourn > p->mtu_time * bulk_flows * 2 &&
   sojourn > p->mtu_time * 4;
bool next_due= vars->count && schedule >= 0;
 
@@ -919,6 +921,9 @@ static unsigned int cake_drop(struct Qdisc *sch, struct 
sk_buff **to_free)
b->tin_dropped++;
sch->qstats.drops++;
 
+   if (q->rate_flags & CAKE_FLAG_INGRESS)
+   cake_advance_shaper(q, b, skb, now, true);
+
__qdisc_drop(skb, to_free);
sch->q.qlen--;
 
@@ -995,8 +1000,39 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc 
*sch,
cake_heapify_up(q, b->overflow_idx[idx]);
 
/* incoming bandwidth capacity estimate */
-   q->avg_window_bytes = 0;
-   q->last_packet_time = now;
+   if (q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS) {
+   u64 packet_interval = now - q->last_packet_time;
+
+   if (packet_interval > NSEC_PER_SEC)
+   packet_interval = NSEC_PER_SEC;
+
+   /* filter out short-term bursts, eg. wifi aggregation */
+   q->avg_packet_interval = cake_ewma(q->avg_packet_interval,
+  packet_interval,
+   packet_interval > q->avg_packet_interval ? 2 : 8);
+
+   q->last_packet_time = now;
+
+   if (packet_interval > q->avg_packet_interval) {
+   u64 window_interval = now - q->avg_window_begin;
+   u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
+
+   do_div(b, window_interval);
+   q->avg_peak_bandwidth =
+   cake_ewma(q->avg_peak_bandwidth, b,
+ b > q->avg_peak_bandwidth ? 2 : 8);
+   q->avg_window_bytes = 0;
+   q->avg_window_begin = now;
+
+   if (now - q->last_reconfig_time > (NSEC_PER_SEC / 4)) {
+   q->rate_bps = (q->avg_peak_bandwidth * 15) >> 4;
+   cake_reconfigure(sch);
+   }
+   }
+   } else {
+   q->avg_window_bytes = 0;
+   q->last_packet_time = now;
+   }
 
/* flowchain */
if (!flow->set || flow->set == CAKE_SET_DECAYING) {
@@ -1251,14 +1287,26 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
}
 
/* Last packet in queue may be marked, shouldn't be dropped */
-   if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb) ||
+   if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb,
+   (b->bulk_flow_count *
+!!(q->rate_flags &
+

Re: [Cake] [PATCH net-next v8 1/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-08 Thread Cong Wang
On Mon, May 7, 2018 at 11:37 AM, Toke Høiland-Jørgensen  wrote:
> Cong Wang  writes:
>
>> On Fri, May 4, 2018 at 12:10 PM, Toke Høiland-Jørgensen  wrote:
>>> Thank you for the review! A few comments below, I'll fix the rest.
>>>
 [...]

 So sch_cake doesn't accept normal tc filters? Is this intentional?
 If so, why?
>>>
>>> For two reasons:
>>>
>>> - The two-level scheduling used in CAKE (tins / diffserv classes, and
>>>   flow hashing) does not map in an obvious way to the classification
>>>   index of tc filters.
>>
>> Sounds like you need to extend struct tcf_result?
>
> Well, the obvious way to support filters would be to have skb->priority
> override the diffserv mapping if set, and have the filter classification
> result select the queue within that tier. That would probably be doable,
> but see below.
>
>>> - No one has asked for it. We have done our best to accommodate the
>>>   features people want in a home router qdisc directly in CAKE, and the
>>>   ability to integrate tc filters has never been requested.
>>
>> It is not hard to integrate, basically you need to call
>> tcf_classify(). Although it is not mandatory, it is odd to merge a
>> qdisc doesn't work with existing tc filters (and actions too).
>
> I looked at the fq_codel code to do this. Is it possible to support
> filtering without implementing Qdisc_class_ops? If so, I'll give it a
> shot; but implementing the class ops is more than I can commit to...

Good question. The tc classes in flow-based qdiscs are actually
used as flows rather than as normal tc classes in a hierarchical qdisc.
Like in fq_codel, the classes are mapped to individual flows, and because
of that we can dump stats for each flow.

I am not sure if you can totally bypass class_ops; you need to look
into these APIs. Most of them are easy to implement, probably with the
only exception of ->dump_stats(), so I don't think it is a barrier here.
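
For reference, the fq_codel-style hook being discussed looks roughly like the
sketch below, assuming the tcf_classify() signature of that era; the function
itself and q->filter_list are illustrative only and not part of the posted
CAKE patches:

    /* Sketch of an fq_codel-style classify hook: if filters are attached,
     * let them pick the flow; otherwise fall back to internal hashing.
     */
    static u32 cake_classify_sketch(struct Qdisc *sch, struct sk_buff *skb,
                                    int *qerr)
    {
            struct cake_sched_data *q = qdisc_priv(sch);
            struct tcf_proto *filter = rcu_dereference_bh(q->filter_list);
            struct tcf_result res;

            if (!filter)
                    return 0;       /* no filters: use internal hashing */

            *qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
            if (tcf_classify(skb, filter, &res, false) >= 0)
                    return TC_H_MIN(res.classid);   /* filter chose the flow */

            return 0;
    }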


>
> +static int cake_init(struct Qdisc *sch, struct nlattr *opt,
> +struct netlink_ext_ack *extack)
> +{
> +   struct cake_sched_data *q = qdisc_priv(sch);
> +   int i, j;
> +
> +   sch->limit = 10240;
> +   q->tin_mode = CAKE_DIFFSERV_BESTEFFORT;
> +   q->flow_mode  = CAKE_FLOW_TRIPLE;
> +
> +   q->rate_bps = 0; /* unlimited by default */
> +
> +   q->interval = 100000; /* 100ms default */
> +   q->target   =   5000; /* 5ms: codel RFC argues
> +  * for 5 to 10% of interval
> +  */
> +
> +   q->cur_tin = 0;
> +   q->cur_flow  = 0;
> +
> +   if (opt) {
> +   int err = cake_change(sch, opt, extack);
> +
> +   if (err)
> +   return err;


 Not sure if you really want to reallocate q->tins below for this
 case.
>>>
>>> I'm not sure what you mean here? If there's an error we return it and
>>> the qdisc is not created. If there's not, we allocate and on subsequent
>>> changes cake_change() will be called directly, or? Can the init function
>>> ever be called again during the lifetime of the qdisc?
>>>
>>
>> In non-error case, you call cake_change() first and then allocate
>> ->tins with kvzalloc() below. For me it looks like you don't need to
>> allocate it again when ->tins!=NULL.
>
> No, we definitely don't. It's just not clear to me how cake_init() could
> ever be called with q->tins already allocated?
>
> I can add a check in any case, though, I see that there is one in
> fq_codel as well...

Ah, that's right, you have a check in cake_change() before
cake_reconfigure().