And in the end, that new modem crapped out completely.

On Wed, May 27, 2015 at 10:12 AM, Dave Taht <[email protected]> wrote:
> On Tue, May 26, 2015 at 3:49 AM, Sebastian Moeller <[email protected]> wrote:
>> Hi Dave,
>>
>> I just stumbled over your last edit of "wondershaper needs to go the way of
>> the dodo"; especially the following caught my attention (lines 303-311):
>>
>> ## The ingress policer doesn't work against ipv6, so if you have mixed 
>> traffic
>> ## you are not matching all of it, and the policer fails entirely
>> ## A correct, modern line for this would be:
>> ## tc filter add dev ${DEV} parent ffff: protocol all u32 match u32 0 0 \
>> ## police rate ${DOWNLINK}kbit burst 100k drop flowid :1
>> ##
>> ## Even if it did work, the police burst size is too small for higher speed
>> ## connections and what I suggest above for a burst size needs to be
>> ## a calculated figure. (That one works ok at 100mbit.)
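>>
>> A minimal sketch of what the full setup might look like (assuming ${DEV}
>> and ${DOWNLINK} in kbit/s are already defined, that an ingress qdisc
>> still has to be attached first, and keeping the fixed 100k burst that
>> the note above says really wants to be a calculated figure):
>>
>>   tc qdisc add dev ${DEV} handle ffff: ingress
>>   tc filter add dev ${DEV} parent ffff: protocol all prio 1 u32 \
>>     match u32 0 0 \
>>     police rate ${DOWNLINK}kbit burst 100k drop flowid :1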
>>
>> I think we should implement a policer setting in SQM (if only for testing),
>> so I wonder how to set the burst size.
>> I think we can basically run the policer only X times per second, so we
>> should allow something along the lines of:
>> bandwidth [bits/sec] / X [1/sec] = max bits per batch [bits]
>> Since, in the end, we can only work in bursts/batches, we need to figure
>> out what worst-case batches to expect.
>> Now it would be sweet if we could get a handle on X, but what about just
>> using the following approximation:
>>
>> How often does the policer run per second worst case?
>>
>> 100 [kB] * 1000 * 8 = 800000 [bit]   (the 100k burst from above, in bits)
>> 100*1000^2 [bit/sec] / (100*1000*8) [bit] = 125 [1/sec], i.e. 8 milliseconds
>> per invocation
>>
>> So your example seems to show that if we can run 125 times per second we
>> will be able to drain enough packets so that we do not drop excessively
>> many. This bursting will increase the latency under load for sure, but I
>> guess by not more than 8ms on average?
>>         Now, I guess one issue will be that this does not simply depend
>> on data size or packet count alone; we are probably limited both by how
>> many packets per second we can process and by how many bytes. So what
>> about:
>>
>> burst [kB] = (bandwidth [bits/sec] / 125 [1/sec]) / (1000*8)
>>
>> This is probably too simplistic, but better than nothing.
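>>
>> In shell, that could look roughly like this (just a sketch of my
>> proposal; the 125 invocations per second are my assumption from above,
>> and BURST comes out in bytes, which is what tc's burst parameter takes
>> when given without a suffix):
>>
>>   DOWNLINK=100000                          # shaped rate in kbit/s (100 Mbit/s example)
>>   BURST=$(( DOWNLINK * 1000 / 125 / 8 ))   # bytes per policer invocation, 100000 here
>>   tc filter add dev ${DEV} parent ffff: protocol all prio 1 u32 \
>>     match u32 0 0 \
>>     police rate ${DOWNLINK}kbit burst ${BURST} drop flowid :1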
>>
>> I would appreciate any hints on how to improve this, so thanks in advance.
>> Now all I need to do is hook this up with sqm-scripts and then go test the
>> hell out of it ;)
>>
>> Best Regards
>>         Sebastian
>
> Well, I am pretty sure policing as currently understood is generally
> not a win compared to inbound shaping with aqm, particularly in the
> concatenated-queues case, which was the one I wanted to address (90/100
> rate differential).
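>
> (For reference, that shaping alternative is roughly what sqm-scripts
> already does: redirect ingress into an ifb device and shape there with
> an aqm. A bare-bones sketch, where eth0, ifb4eth0 and the 90mbit rate
> are purely illustrative:
>
>   ip link add name ifb4eth0 type ifb
>   ip link set ifb4eth0 up
>   tc qdisc add dev eth0 handle ffff: ingress
>   tc filter add dev eth0 parent ffff: protocol all prio 10 u32 \
>     match u32 0 0 \
>     action mirred egress redirect dev ifb4eth0
>   tc qdisc add dev ifb4eth0 root handle 1: htb default 10
>   tc class add dev ifb4eth0 parent 1: classid 1:10 htb rate 90mbit
>   tc qdisc add dev ifb4eth0 parent 1:10 fq_codel
> )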
>
> Policers have generally been "pitched" as a means of customer
> bandwidth control (with CIR and other "features"), and do seem to be
> widely used...
>
> What I wanted to do was come up with a kinder, gentler policer that
> was effective but less damaging to non-TCP-like flow types, and the
> whole concept of a burst parameter just doesn't work with shorter RTTs,
> in particular when used with an EWMA. The initial burst characteristics
> we see today are very different from the slow speeds and small initial
> windows (2) of yesteryear, and we tend to see a bunch of flows in slow
> start all at the same time. The signal is sent too late to dissipate
> the original burst, and yet the policer signal is a brick wall, so once
> it kicks in bad things happen to all flows.
>
> So, for example, I came up with a simple mod to the existing policer
> code, to "shred" inbound with an fq-like idea. The shred.patch and some
> flent data are here:
>
> http://snapon.lab.bufferbloat.net/~cero3/bobbie/
>
> But: new data points galore hit me at the same time.
>
> Recently I boosted the signal strength on the cable modems in my
> biggest testbed, and switched to a new one. The old one, which had
> previously latched up at 110mbit down (with really horrible download
> bufferbloat), started giving me 172mbit service. This morning I
> measured that at about 142mbit. THAT difference in performance ended up
> being pretty dramatic: I went from dslreports peaking at seconds of
> induced latency on inbound to mere 100s of ms on an unshaped modem.
>
> http://www.dslreports.com/speedtest/560968
>
> dslreports also changed their cable test to 16 down and 12 up (from 16/6).
>
> (I have also made so many other changes to the test-driving box - for
> example I reduced tcp_limit_output_bytes to 4k and started using the
> sch_fq qdisc on it - and certainly it is my hope that the cable ISPs
> sat up, took notice, and deployed some fixes in the past few weeks.)
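>
> (In concrete terms those two tweaks were roughly the following, with
> 4096 standing in for "4k" and eth0 standing in for whatever the actual
> test interface is:
>
>   sysctl -w net.ipv4.tcp_limit_output_bytes=4096
>   tc qdisc replace dev eth0 root fq
> )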
>
> So here is me just fixing outbound on this test:
>
> http://www.dslreports.com/speedtest/560989
>
> So there are WAY too many variables in play again.
>
> And here is me trying to fix inbound (and failing):
>
> http://www.dslreports.com/speedtest/561097
>
> (and see the dataset)
>
> I am still seeing 30ms of induced latency on the rrul test, but it is
> so far from horrible that I think I am still dreaming.
> And I have a whole bunch of variables to tediously recheck.
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67



-- 
Dave Täht
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast