Adding a basic shaper to fq_codel itself is kind of trivial. You need to check whether you are shaping:
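A rough user-space sketch of that shaping check and the next-start-time bookkeeping, in the style of sch_tart/cake. All names, the fixed-point shift, and the struct layout here are mine for illustration, not the kernel's:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL
#define RATE_SHIFT   16  /* fixed-point scaling; an assumption, not cake's exact value */

struct shaper {
    uint64_t rate_scaled;      /* ns per byte, << RATE_SHIFT, computed at setup */
    uint64_t time_next_packet; /* earliest ns at which we may send again */
};

/* setup: derive the scaled ns-per-byte cost from a rate in bits/s */
static void shaper_set_rate(struct shaper *s, uint64_t rate_bps)
{
    uint64_t byte_rate = rate_bps / 8; /* bytes per second */
    s->rate_scaled = (NSEC_PER_SEC << RATE_SHIFT) / byte_rate;
    s->time_next_packet = 0;
}

/* dequeue-side check: may we transmit at 'now'?  In the kernel, when this
 * returns false you would schedule the qdisc watchdog for time_next_packet
 * and return NULL from dequeue instead. */
static int shaper_can_send(const struct shaper *s, uint64_t now_ns)
{
    return now_ns >= s->time_next_packet;
}

/* after transmitting 'len' bytes, push the next start time forward */
static void shaper_charge(struct shaper *s, uint64_t now_ns, uint32_t len)
{
    uint64_t base = now_ns > s->time_next_packet ? now_ns
                                                 : s->time_next_packet;
    s->time_next_packet = base + (((uint64_t)len * s->rate_scaled) >> RATE_SHIFT);
}
```

The point is that the hot path is one comparison and one multiply-shift, with no token bucket to refill, which is where the speedup over tbf + fq_codel comes from.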
https://github.com/dtaht/sch_tart/blob/master/sch_tart.c#L329

Calculate the next start time:

https://github.com/dtaht/sch_tart/blob/master/sch_tart.c#L392

and calc the rate in setup, and init the watchdog. This was 40% faster than tbf + fq_codel in the good ole days, and I sometimes think about just adding it into fq_codel itself, which, on a system already using fq_codel elsewhere, keeps the icache hotter as well.

I'm going to discard the work in sch_tart and perhaps try fiddling with fq_codel again. But over the years we've also found that multiple variables (flows, target, interval) were effectively constants in 99% of cases, and I did want to restore my original work on codel having saner responses to overload, which I'd done in earlier versions of cake... dunno. I *personally* need an inbound shaper that cracks 120mbit on mips hardware.

I hadn't thought much about additionally parallelizing the workload (which has problems, like adding way more queues than I'd like).

Now, done cleverly, shaping is also parallelizable, with an atomic add across this core variable, or a periodic rcu'd merge step of the separate bandwidth clocks. The "clever" part is that I've never figured out how two+ instances of a qdisc (being directed by sch_mq) could share a tiny bit of data. There *are* some per-cpu stats rcu'd in qdiscs and filters at this point, and perhaps the per-interface (as opposed to per-qdisc) stats are rcu'd enough to pull from periodically to compensate.

OFFTOPIC: seeing this go by gives me more hope on the rx path than I've had in a while:

https://lwn.net/SubscriberLink/763056/f9a20ec24b8d29dd/

_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
