> On 4 Mar, 2021, at 3:54 am, Thomas Croghan <tcrog...@lostcreek.tech> wrote:
> 
> So, a beta of Mikrotik's RouterOS was released some time ago which finally 
> has Cake built into it. 
> 
> In testing everything seems to be working, I just am coming up with some 
> questions that I haven't been able to answer. 
> 
> Should there be any special considerations when Cake is being used in a 
> setting where it's by far the most significant limiting factor to a 
> connection? For example: <internet> --10 Gbps Fiber -- <ISP Router> --10 Gbps 
> Fiber -- [ISP Switch] -- 1 Gbps Fiber -- <500 Mbps Customer>
> In this situation very frequently the "<ISP Router>" could be running Cake 
> and do the bandwidth limiting of the customer down to 1/2 (or even less) of 
> the physical connectivity. A lot of the conversations here revolve around 
> Cake being set up just below the Bandwidth limits of the ISP, but that's not 
> really going to be the case in a lot of the ISP world.

There shouldn't be any problems with that.  Indeed, Cake is *best* used as the 
bottleneck inducer with effectively unlimited inbound bandwidth, as is 
typically the case when debloating a customer's upstream link at the CPE.  In 
my own setup, I currently have GigE LAN feeding into a 2Mbps Cake instance in 
that direction, to deal with a decidedly variable LTE last-mile; this is good 
enough to permit reliable videoconferencing.
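On a Linux box that setup is a one-liner; the interface name and rate below 
are placeholders for your own uplink, not anything specific to my network:

```shell
# Shape egress toward a variable LTE last-mile; Cake becomes the
# bottleneck, so queuing (and debloating) happens here rather than
# in the modem's buffers.
tc qdisc replace dev wwan0 root cake bandwidth 2mbit

# Watch the instance's delay, drop and mark counters:
tc -s qdisc show dev wwan0
```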

All you should need to do here is to filter each subscriber's traffic into a 
separate Cake instance, configured to the appropriate rate, and ensure that the 
underlying hardware has enough throughput to keep up.
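On Linux, one straightforward way to get a separate instance per subscriber 
is to terminate each subscriber on its own VLAN subinterface and attach a Cake 
instance to each; the VLAN IDs and rates here are purely illustrative:

```shell
# One Cake instance per subscriber-facing subinterface,
# each set to that subscriber's plan rate.
tc qdisc replace dev eth0.101 root cake bandwidth 500mbit
tc qdisc replace dev eth0.102 root cake bandwidth 25mbit
```

How RouterOS exposes this is up to Mikrotik, but the underlying mechanism 
should be equivalent.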

> Another question would be based on the above:
> 
> How well does Cake do with stacking instances? In some cases our above 
> example could look more like this: <Internet> -- [Some sort of limitation to 
> 100 Mbps] -- <ISP Router> -- 1 Gbps connection- <25 Mbps Customer X 10> 
> 
> In this situation, would it be helpful to Cake to have a "Parent Queue" that 
> limits the total throughput of all customer traffic to 99-100 Mbps then 
> "Child Queues" that respectively limit customers to their 25 Mbps? Or would 
> it be better to just setup each customer Queue at their limit and let Cake 
> handle the times when the oversubscription has reared it's ugly head?

Cake is not specifically designed to handle this case.  It is designed around 
the assumption that there is one bottleneck link to manage, though there may be 
several hosts who have equal rights to use as much of it as is available.  
Ideally you would put one Cake or fq_codel instance immediately upstream of 
every link that may become saturated; in practice you might not have access to 
do so.

With that said, for the above topology you could use an ingress Cake instance 
to manage the backhaul bottleneck (using the "dual-dsthost" mode to 
more-or-less fairly share this bandwidth between subscribers), then a 
per-subscriber array of Cake instances on egress to handle that side, as above. 
 In the reverse direction you could invert this, with a per-subscriber tree on 
ingress and a backhaul-generic instance (using "dual-srchost" mode) on egress.  
The actual location where queuing and ECN marking occurs would shift 
dynamically depending on where the limit exists, and that can be monitored via 
the qdisc stats.
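As a rough Linux sketch of the downstream direction just described (interface 
names and the 100 Mbit backhaul rate are assumptions, not from your topology): 
ingress traffic is redirected to an IFB device and shaped there with 
per-destination fairness, while egress gets per-source fairness.

```shell
# Redirect ingress from the upstream port to an IFB device so a
# qdisc can shape it.
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0

# Downstream backhaul: fair sharing between subscriber IPs (destinations).
tc qdisc replace dev ifb0 root cake bandwidth 100mbit dual-dsthost

# Upstream backhaul: fair sharing between subscriber IPs (sources).
tc qdisc replace dev eth0 root cake bandwidth 100mbit dual-srchost

# The stats show where queuing and ECN marking actually occur:
tc -s qdisc show dev ifb0
```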

This sort of question has come up before, which sort-of suggests that there's 
room for a qdisc specifically designed for this family of use cases.  Indeed I 
think HTB is designed with stuff like this in mind, though it uses markedly 
inferior shaping algorithms.  At this precise moment I'm occupied with the 
upcoming IETF (and my current project, Some Congestion Experienced), but there 
is a possibility I could adapt some of Cake's technology to an HTB-like 
structure, later on.
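For completeness, the "parent queue / child queues" idea can already be 
approximated today with an HTB tree carrying Cake as the leaf AQM; this is 
only a sketch, with illustrative rates, classids and a documentation-range 
subscriber address:

```shell
# HTB parent limits the aggregate to the 100 Mbit backhaul;
# child classes hold each subscriber to their 25 Mbit plan.
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 25mbit  ceil 25mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 25mbit  ceil 25mbit

# Cake runs unshaped on each leaf, leaving rate control to HTB and
# supplying only flow isolation and AQM.
tc qdisc add dev eth0 parent 1:10 cake unlimited
tc qdisc add dev eth0 parent 1:20 cake unlimited

# Classify one subscriber's traffic into their class (placeholder IP):
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 192.0.2.10/32 flowid 1:20
```

HTB's shaper is burstier than Cake's deficit-mode shaper, which is exactly the 
"markedly inferior" part; the Cake leaves at least keep the per-flow behaviour.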

 - Jonathan Morton

_______________________________________________
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake
