Re: [Cake] [Make-wifi-fast] [Bloat] dslreports is no longer free

2020-05-02 Thread Benjamin Cronce
> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms

For download, I show 6ms unloaded and 6-7ms loaded. But for upload, the loaded
latency shows as 7-8ms and I see it blip upwards of 12ms. But I am no longer using
any traffic shaping; any anti-bufferbloat is from my ISP. A graph of the
bloat would be nice.
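For anyone wanting the missing graph, the raw ingredient is just RTT samples taken idle and under load; a minimal sketch of summarizing "bloat" from such samples (the sample values below are made up for illustration):

```python
# Sketch: summarizing "bloat" from RTT samples, as the graph mentioned
# above would.  The sample values here are made up for illustration.
from statistics import median

def bloat_ms(unloaded_rtts, loaded_rtts):
    """Return the median latency increase under load, in ms."""
    return median(loaded_rtts) - median(unloaded_rtts)

unloaded = [6.0, 6.1, 5.9, 6.0, 6.2]   # idle-link pings
loaded   = [7.1, 8.0, 12.3, 7.4, 7.9]  # pings during an upload test

print(f"bloat: {bloat_ms(unloaded, loaded):.1f} ms")  # -> bloat: 1.9 ms
```

Feeding it a longer series sampled once a second during a speed test gives exactly the data a bloat graph would plot.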

On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom  wrote:

> Michael Richardson :
> > Does it find/use my nearest Netflix cache?
>
> Thankfully, it appears so.  The DSLReports bloat test was interesting, but
> the jitter on the ~240ms base latency from South Africa (and other parts of
> the world) was significant enough that the figures returned were often
> unreliable and largely unusable - at least in my experience.
>
> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
> mentions servers located in local cities.  I finally have a test I can
> share
> with local non-technical people!
>
> (Agreed, upload test would be nice, but this is a huge step forward from
> what I had access to before.)
>
> Jannie Hanekom
>
> ___
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>


Re: [Cake] inbound cake or fq_codel shaping fails on cable on netflix reno

2018-07-23 Thread Benjamin Cronce
I don't know if this is possible for higher density cities, but the fiber
ISP here uses P2P fiber ring from the house all the way back to the CO.
It's only at the CO that they aggregate to the GPON port. This means I do
not share any field fiber with anyone else and the ring design allows for a
single fiber cut to not take out my Internet.

It seems "and allowing competition for service from multiple ISPs" is the
main point of your described setup. My current setup is fine for my single
ISP, but they don't have to share with anyone else. I have heard of
alternative setups where the CO was not owned by an ISP, but where all the
ISPs hooked into the fiber network. North/west/east/south redundancy sounds
overly redundant, but I guess I would not complain. I assume it means
powered equipment in the field, unless there's a way to do it passively
without dropping the signal strength. I would prefer a two-point redundant
system that was passive over an active four-way redundant system that could
have power failures, which are more common than fiber cuts around here. My
firewall has nearly 450 days of uptime, so not many power outages either.
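The single-fiber-cut immunity of the ring described above is just 2-edge-connectivity: removing any one edge of a cycle leaves it connected. A small sketch with an illustrative topology (node names and layout are made up, not the ISP's actual plant):

```python
# Sketch: why a fiber ring survives any single cut.  A ring is
# 2-edge-connected, so removing one edge still leaves a path.
# Topology and node names are illustrative only.
from collections import deque

def connected(nodes, edges):
    """BFS reachability check over an undirected edge list."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, q = {nodes[0]}, deque([nodes[0]])
    while q:
        for nxt in adj[q.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return len(seen) == len(nodes)

ring_nodes = ["CO", "A", "B", "house"]
ring_edges = [("CO", "A"), ("A", "B"), ("B", "house"), ("house", "CO")]

# Cut each fiber in turn: the ring stays connected every time.
survives = all(
    connected(ring_nodes, [e for e in ring_edges if e != cut])
    for cut in ring_edges
)
print("ring survives any single cut:", survives)  # True
```

The same check run on a plain tree topology fails for every edge, which is exactly the reliability gap the ring closes.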

I am already one hop away from everyone in the city, including my employer.
The ISP uses a flat network model; everything plugs into the core. The core
router has many ports with a minimum rate of 100Gb. The GPON units plug
directly into the core, and they're only used as layer-2 devices. The GPON
units have one or more 100Gb or 400Gb uplinks. The network is provisioned to
be fully non-blocking: it can handle all customers at 100% of their
provisioned rates at the same time. Other than for redundancy, there's
little reason to have routing/forwarding done in the field. A "hub"
pattern scales just fine and is less complex.

I'm not sure one hop away is useful in a multi-ISP shared network, since
your packets need to go to your ISP in order to get routed back to your
neighbor.

On Mon, Jul 23, 2018 at 9:56 AM Dave Taht  wrote:

> Great info, thx. Using this opportunity to rant about city-wide
> networks: I'd have done something so different
> than what the governments and ISPs have inflicted on us, substituting
> redundancy for reliability.
>
> I'd have used bog-standard ethernet over fiber instead of gpon. The
> only advantages to gpon were that it was a standard normal folk
> (still) can't use, it offered encryption to the co-lo, the gpon
> splitter at the neighborhood cabinet could be unpowered, and a
> telco-like design could be made telco-level reliable. Theoretically. In
> reality it constrains the market and raises the price of gpon-capable
> gear enormously, thus creating a cost for the isp and a healthy profit
> margin for the fiber vendor.
>
> Neighborhood cabinets would be cross connected north, east, west,
> south, uplink1, uplink2, thus rendering the entire network immune to
> fiber cuts and other disruptions of service and allowing competition
> for service from multiple isps. Fiber or copper or wireless (cell) to
> the building from there. Your neighbor would be one hop away. Local
> cellular or wifi would spring out of smaller towers distributed above
> those cabinets.
>
> Lest you think I'm entirely insane, that's how amsterdam's network was
> built out 10+ years ago.
>
> https://arstechnica.com/tech-policy/2010/03/how-amsterdam-was-wired-for-open-access-fiber/
>
> I'd have avoided MPLS, and gone with something like 64xlat to deal
> with the ipv4 distribution problem. There'd be a secure routing
> protocol holding the city-wide network together. And there'd be
> ponies. Lots of ponies.


Re: [Cake] Fast snack, QUIC CAKE?

2017-08-10 Thread Benjamin Cronce
CAKE only works for endpoints you control. QUIC can benefit in situations
where you don't control the chokepoints. I'm not sure how QUIC interacts with
CAKE; I can't see it being more than a small percentage better or worse.
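One reason to expect little interaction either way: CAKE's flow isolation keys on the 5-tuple, and a QUIC connection is just a UDP 5-tuple, so it gets its own queue like any TCP flow. A minimal sketch of that idea (the hash below is an illustrative stand-in, not CAKE's actual flow hash, and the addresses are example values):

```python
# Sketch: why per-flow queuing treats QUIC like TCP.  Flows are keyed by
# the 5-tuple (src/dst address, protocol, src/dst port); a QUIC connection
# is a distinct UDP 5-tuple and so lands in its own queue.  This hash is
# an illustrative stand-in for the kernel's real flow hash.
import hashlib

NUM_QUEUES = 1024  # CAKE's default flow-queue count

def flow_queue(src, sport, dst, dport, proto):
    key = f"{src}:{sport}-{dst}:{dport}-{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_QUEUES

tcp_flow  = flow_queue("192.0.2.1", 50000, "198.51.100.7", 443, "tcp")
quic_flow = flow_queue("192.0.2.1", 50001, "198.51.100.7", 443, "udp")
print(tcp_flow, quic_flow)  # two distinct flows, almost surely two queues
```

Where QUIC could differ slightly is congestion response to CAKE's drops and marks, not in how it gets queued.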

On Wed, Aug 9, 2017 at 3:36 AM,  wrote:

> Has anybody here done any experimentation on CAKE (and others) when using
> QUIC?  Or other real-world insights into other aspects of QUIC?  For
> example, YouTube over TCP with proper CAKE vs. QUIC over a link with
> crappy queuing/latency.
>
>
> The overlapping design goal is making the user experience snappy, but
> QUIC's approach is to control the endpoints with a new protocol to replace
> TCP.  (Or improve TCP in the future.)
>
>
> https://en.wikipedia.org/wiki/QUIC
>
>
>
> -Erik


Re: [Cake] Getting Cake to work better with Steam and similar applications

2017-04-25 Thread Benjamin Cronce
What's your RTT(ping) to the different services, like Steam and Windows
Update? Some ISPs have local CDNs that can give incredibly low latency
relative to the provisioned bandwidth, which can cause bad things to happen
with TCP.
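The "bad things" a very low RTT can trigger follow from the bandwidth-delay product: a nearby CDN shrinks the BDP so much that a single default-sized TCP window arrives as a burst several times what the pipe holds. A rough sketch with illustrative numbers:

```python
# Sketch: why a very low RTT to a local CDN stresses a shaper.  The
# bandwidth-delay product (BDP) is tiny, so one default-sized TCP window
# can be several times what the pipe holds, arriving as a burst.
# Link rates and RTTs below are illustrative.
def bdp_bytes(rate_mbps, rtt_ms):
    return rate_mbps * 1e6 / 8 * (rtt_ms / 1e3)

local_cdn = bdp_bytes(100, 2)    # 100 Mbit/s link, 2 ms to a local CDN
remote    = bdp_bytes(100, 80)   # same link, 80 ms to a distant server

print(f"local CDN BDP: {local_cdn:.0f} bytes")   # 25000
print(f"remote BDP:    {remote:.0f} bytes")      # 1000000
# A 64 KiB window is ~2.6x the local BDP but only ~6.5% of the remote one.
```

That asymmetry is why Steam-style CDN traffic can behave much worse than a transfer from a distant host at the same provisioned rate.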

On Tue, Apr 25, 2017 at 3:44 PM, Dendari Marini  wrote:

>
> On 25 April 2017 at 21:10, Jonathan Morton  wrote:
>
>>
>> You may see some improvement from wholesale reducing the inbound
>> bandwidth, to say 10Mbit.  This is especially true given the high asymmetry
>> of your connection, which might require dropped acks upstream to keep
>> filled downstream - and dropped acks will tend to increase burstiness of
>> sending on unpaced senders.
>>
>> You should also try to ensure ECN is fully enabled on your LAN hosts,
>> especially the ones running Steam.  This will help to reduce
>> retransmissions and loss-recovery cycles.
>>
>>  - Jonathan Morton
>>
>>
> Well, the only improvement I've seen when limiting the bandwidth with
> Steam has been at lower than 1Mbps; I don't think I want to go that far. In
> all honesty I wouldn't limit it to 10Mbit either: with the overhead that
> means half of my total bandwidth, not a trade-off I'm willing to make.
>
> Still, the issue is real, and it seems Steam is the only application with
> which I can reproduce it. I've seen reports about Battle.net and Windows
> Update doing the same thing (because they also open multiple concurrent
> connections), but I can't reproduce it, at least not in the way Steam does.
>
> Anyway, I'm going to take a "pause" from all of this. I've wasted the last
> three weeks just trying to resolve it, but unfortunately still
> nothing. Thanks all for your help; if there's any news I'll report it here.
>


Re: [Cake] [LEDE-DEV] Cake SQM killing my DIR-860L - was: [17.01] Kernel: bump to 4.4.51

2017-03-06 Thread Benjamin Cronce
Depends on how short of a timescale you're talking about. Shared global
state that is being read and written very quickly by multiple threads is
bad enough on a single-package system, but when you start getting to
something like an AMD Ryzen or a NUMA system, shared global state becomes
really expensive. Accuracy is expensive: loosen the accuracy and gain
scalability.

I would be interested in the pseudo-code or a high-level description of what
state needs to be shared and how that state is used.

I was also thinking more of some hybrid. Instead of a "token" representing
a bucketed amount of bandwidth that can be immediately used, I was thinking
more of a "future" of bandwidth that could be used. So instead of
saying "here's a token of bandwidth", you have each core doing its own
deficit bandwidth shaping, but when a token is received, a core can
temporarily increase its assigned shaping bandwidth. If I remember
correctly, cake already supports having its bandwidth changed on the fly.

Of course, it may be simpler to say cake is meant to be used on no more than
8 cores on a non-NUMA system, with all cores connected by a shared
low-latency cache.
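The hybrid idea above can be sketched in a few lines: each core runs its own rate-based shaper, and a received "future" temporarily raises its rate. This is an illustrative simulation with made-up class and parameter names, not CAKE's actual shaper code:

```python
# Sketch of the per-core "bandwidth future" hybrid described above.
# Simulated time; names and numbers are illustrative, not CAKE's code.
class DeficitShaper:
    def __init__(self, rate_bps):
        self.base_rate = rate_bps
        self.rate = rate_bps
        self.next_send = 0.0  # earliest time the next packet may leave

    def grant_future(self, extra_bps):
        """A peer core lends unused bandwidth for a while."""
        self.rate = self.base_rate + extra_bps

    def expire_future(self):
        self.rate = self.base_rate

    def send(self, now, pkt_bytes):
        """Return departure time for a packet, or None if still throttled."""
        if now < self.next_send:
            return None
        self.next_send = now + pkt_bytes * 8 / self.rate
        return now

shaper = DeficitShaper(rate_bps=50_000_000)      # this core's 50 Mbit/s share
assert shaper.send(0.0, 1500) == 0.0             # first packet goes now
assert shaper.send(0.0, 1500) is None            # next must wait ~240 us
shaper.grant_future(50_000_000)                  # peer lends 50 Mbit/s
# While the grant is active, per-packet serialization delay halves.
```

The appeal is that cores only exchange occasional rate grants instead of contending on shared state for every packet.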

On Mon, Mar 6, 2017 at 8:44 AM, Jonathan Morton <chromati...@gmail.com>
wrote:

>
> > On 6 Mar, 2017, at 15:30, Benjamin Cronce <bcro...@gmail.com> wrote:
> >
> > You could treat it like task stealing, except each core can generate
> tokens that represent a quantum of bandwidth that is only valid for some
> interval.
>
> You’re obviously thinking of a token-bucket based shaper here.  CAKE uses
> a deficit-mode shaper which deliberately works a different way - it’s more
> accurate on short timescales, and this actually makes a positive difference
> in several important cases.
>
> The good news is that there probably is a way to explicitly and
> efficiently share bandwidth in any desired ratio across different CAKE
> instances, assuming a shared-memory location can be established.  I don’t
> presently have the mental bandwidth to actually try doing that, though.
>
>  - Jonathan Morton
>
>


Re: [Cake] upstreaming cake in 2017?

2016-12-24 Thread Benjamin Cronce
On Fri, Dec 23, 2016 at 3:53 AM, Jonathan Morton 
wrote:

> >> As far as Diffserv is concerned, I explicitly assume that the standard
> RFC-defined DSCPs and PHBs are in use, which obviates any concerns about
> Diffserv policy boundaries.
> >
> >   ??? This comes close to ignoring reality. The RFCs are less
> important than what people actually send down the internet.
>
> What is actually sent down the Internet right now is mostly best-effort
> only - the default CS0 codepoint.  My inbound shaper currently shows 96GB
> best-effort, 46MB CS1 and 4.3MB “low latency”.
>
> This is called the “chicken and egg” problem; applications mostly ignore
> Diffserv’s existence because it has no effect in most environments, and CPE
> ignores Diffserv’s existence because little traffic is observed using it.
>
> To solve the chicken-and-egg problem, you have to break that vicious
> cycle.  It turns out to be easier to do that on the network side, creating
> an environment where DSCPs *do* have effects which applications might find
> useful.
>
> > coming up with a completely different system (preferable randomized for
> each home network) will make gaming the DSCPs much harder
>
> With all due respect, that is the single most boneheaded idea I’ve come
> across on this list.  If the effect of applying a given DSCP is
> unpredictable, and may even be opposite to the desired behaviour - or,
> equivalently, if the correct DSCP to achieve a given behaviour is
> unpredictable - then Diffserv will *never* be used by mainstream users and
> applications.
>
> >> Cake does *not* assume that DSCPs are trustworthy.  It respects them as
> given, but employs straightforward countermeasures against misuse (eg.
> higher “priority” applies only up to some fraction of capacity),
> >
> >   But doesn’t that automatically mean that an attacker can degrade
> performance of a well configured high priority tier (with appropriate
> access control) by overloading that band, which will affect the priority of
> the whole band, no? That might not be the worst alternative, but it
> certainly is not side-effect free.
>
> If an attacker wants to cause side-effects like that, he’ll always be able
> to do so - unless he’s filtered at source.  As a more direct counterpoint,
> if we weren’t using Diffserv at all, the very same attack would degrade
> performance for all traffic, not just the subset with equivalent DSCPs.
>
> Therefore, I have chosen to focus on incentivising legitimate traffic in
> appropriate directions.
>
> >> So, if Cake gets deployed widely, an incentive for applications to
> correctly mark their traffic will emerge.
> >
> >   For which value of “correct” exactly?
>
> RFC-compliant, obviously.
>
> There are a few very well-established DSCPs which mean “minimise latency”
> (TOS4, EF) or “yield priority” (CS1).  The default configuration recognises
> those and treats them accordingly.
>
> >> But almost no program uses CS1 to label its data as lower priority
>
> See chicken-and-egg argument above.  There are signs that CS1 is in fact
> being used in its modern sense; indeed, while downloading the latest Star
> Citizen test version the other day, 46MB of data ended up in CS1.  Star
> Citizen uses libtorrent, as I suspect do several other prominent games, so
> adding CS1 support there would probably increase coverage quite quickly.
>
> >> Cake also assumes in general that the number of flows on the link at
> any given instant is not too large - a few hundred is acceptable.
> >
> >   I assume there is a build time parameter that will cater to a
> specific set of flows, would recompiling with a higher value for that
> constant allow to taylor cake for environments with a larger number of
> concurrent flows?
>
> There is a compile-time constant in the code which could, in principle, be
> exposed to the kernel configuration system.  Increasing the queue count
> from 1K to 32K would allow “several hundred” to be replaced with “about ten
> thousand”.  That’s still not backbone-grade, but might be useful for a very
> small ISP to manage its backhaul, such as an apartment complex FTTP
> installation or a village initiative.
>
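The well-established DSCPs mentioned above (CS1 "yield priority", EF/TOS4 "minimise latency") map onto a small tin table. A diffserv3-style sketch using the RFC codepoint values; the tin names follow CAKE's, but the exact table here is illustrative rather than copied from CAKE's source:

```python
# Sketch of a diffserv3-style classifier as described above: CS1 yields
# priority (Bulk), EF/TOS4 minimise latency (Voice), everything else is
# Best Effort.  DSCP values are the RFC-defined ones; the tin table
# itself is illustrative, not CAKE's actual source.
CS1, EF = 8, 46           # RFC 2474 / RFC 3246 codepoints
TOS4 = 4                  # legacy ToS "minimise delay" bit, read as a DSCP

def tin_for_dscp(dscp):
    if dscp == CS1:
        return "Bulk"         # yield priority
    if dscp in (EF, TOS4):
        return "Voice"        # minimise latency
    return "Best Effort"      # CS0 and everything else

assert tin_for_dscp(0) == "Best Effort"
assert tin_for_dscp(CS1) == "Bulk"
assert tin_for_dscp(EF) == "Voice"
```

The "higher priority only up to some fraction of capacity" countermeasure discussed above is what keeps mismarking a flow into Voice from starving the other tins.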

A few years back, when reading about fq_codel and Cake, one of the research
articles I came across talked about how many flows are actually in a
buffer at any given time. They looked at the buffers of backbone links from
155Mb to 10Gb and got the same numbers every time. While these links
may be servicing hundreds of thousands of active flows, at any given
instant there were fewer than 200 flows in the buffer; nearly all flows had
exactly one packet in the buffer, and on the order of 10 flows had 2 or more
packets in the buffer.
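Those buffer-occupancy numbers line up with the queue-count discussion above: with n flows hashed uniformly into q queues, the expected number of flows colliding into shared queues is easy to estimate. A back-of-envelope sketch only; it ignores CAKE's set-associative hashing, which further reduces collisions:

```python
# Sketch: how many of "a few hundred" flows collide in 1024 queues, and
# how the 32K figure mentioned above changes things for ~10000 flows.
# Expected occupied queues for n flows in q queues is q*(1-(1-1/q)**n).
def expected_collided_flows(n_flows, n_queues):
    occupied = n_queues * (1 - (1 - 1 / n_queues) ** n_flows)
    # Flows beyond the first in each occupied queue are "collided".
    return n_flows - occupied

print(f"{expected_collided_flows(200, 1024):.1f}")         # roughly 18 of 200
print(f"{expected_collided_flows(10_000, 32 * 1024):.0f}") # roughly 1400 of 10000
```

So at a few hundred concurrent flows the vast majority get a queue to themselves, which is why the default queue count works so well at the edge.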

You could say the buffer follows the 80/20 rule: 20% of the flows in the
buffer make up 80% of it. Regardless, the total number of flows
in the buffer is almost fixed. What was also interesting is the flows
consuming the majority of the buffer 

Re: [Cake] New to cake. Some questions

2016-06-10 Thread Benjamin Cronce
At least your ISP's trunk seems decent:

ping -t 109.90.28.1

Packets: sent=150, rcvd=150, error=0, lost=0 (0.0% loss) in 74.660177 sec
RTTs in ms: min/avg/max/dev: 158.255 / 159.140 / 161.922 / 0.528
Bandwidth in kbytes/sec: sent=0.120, rcvd=0.120

On Fri, Jun 10, 2016 at 9:05 AM, Dennis Fedtke 
wrote:

> Hi Sebastian,
>
> yes, this is a wired connection. As I stated, my ping times always vary
> independently of the target.
> My ISP is overloaded in certain regions, so I assume they do some
> shaping/limiting on certain protocols (ICMP, for example).
> Connection speed is 200/20 Mbit.
> The ISP is Unitymedia, which doesn't allow you to use your own hardware,
> so I actually have to run my router behind theirs with exposed host
> enabled :<
>
> Ping response:
>
> ping -s 1400 -c 1 109.90.x.x
> PING 109.90.28.1 (109.90.28.1) 1400(1428) bytes of data.
> 1408 bytes from 109.90.28.1: icmp_seq=1 ttl=253 time=11.6 ms
>
> --- 109.90.28.1 ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 11.677/11.677/11.677/0.000 ms
>
> This looks good, no?
>
> Yes i am from germany. So you are from germany too?
>
> Thanks for your time and help :)
>
>
> Best regards
> Dennis
>
>
> Am 10.06.2016 um 15:02 schrieb moeller0:
>
>> Hi Dennis,
>>
>> On Jun 10, 2016, at 14:43 , Dennis Fedtke  wrote:
>>>
>>> Hi Sebastian,
>>>
>>> i used the default setting of 1000.
>>>
>> Okay, that should work I assume, unless you have a very fast link…
>> What link at which ISP do you actually have?
>>
>> But it seems that my isp is dropping icmp packets if there are exceeding
>>> some sending threshold.
>>>
>> I would be amazed if they did; a symptom of that would be rate
>> reduction on all ICMP probe flows, independent of target host. If, however,
>> you only see this with specific hosts, it is very likely that that host rate-
>> limits its ICMP responses. In either case, try another host further
>> upstream. I think I had reasonably decent results targeting 8.8.8.8,
>> Google's DNS servers.
>>
>> So there is a lot of none usable ping data.
>>>
>> Again, try another host…
>>
>> I increased the send delay to 50ms. 25 ms already shows dropped requests.
>>>
>> That might also help; as long as you stay below their throttling
>> rate, the chosen host might still work okay.
>>
>> This is the third run now. Waiting for completion.
>>>
>> Well, sorry that the method is not as slick and streamlined, but
>> there are no guaranteed-good ICMP reflectors available on the net.
>>
>> The ping target is my first hop.
>>>
>> Try the next hop then ;)
>>
>> Actually my ping always varies around +-5ms even at idle and
>>> independently of ping target.
>>>
>> This is via wifi/wlan? If so try from a wired connection instead.
>>
>> When i look through the ping file the increase in ping times are actually
>>> appear to be random to me.
>>>
>> Well, we expect variability of the individual “trials” to exist;
>> that is why we collect so many and try to select the best measure in the
>> matlab code to remove the unwanted variance. Could you post a link to both
>> of the generated plots, please? The first one, showing the different
>> aggregation measures, might be helpful in diagnosing the issues deeper.
>>
>> So how to test if my isp responses with fixed icmp packet size?
>>>
>> You could try manually. In the following example I pinged
>> gstatic.com (which belongs to Google's CDN as far as I know):
>>
>> bash-3.2$ ping -s 1 -c 1 gstatic.com
>> PING gstatic.com (216.58.213.195): 1 data bytes
>> 9 bytes from 216.58.213.195: icmp_seq=0 ttl=55
>>
>> --- gstatic.com ping statistics ---
>> 1 packets transmitted, 1 packets received, 0.0% packet loss
>>
>>
>> bash-3.2$ ping -s 64 -c 1 gstatic.com
>> PING gstatic.com (216.58.213.195): 64 data bytes
>> 72 bytes from 216.58.213.195: icmp_seq=0 ttl=55 time=19.446 ms
>>
>> --- gstatic.com ping statistics ---
>> 1 packets transmitted, 1 packets received, 0.0% packet loss
>> round-trip min/avg/max/stddev = 19.446/19.446/19.446/0.000 ms
>>
>>
>> bash-3.2$ ping -s 65 -c 1 gstatic.com
>> PING gstatic.com (216.58.213.195): 65 data bytes
>> 72 bytes from 216.58.213.195: icmp_seq=0 ttl=55 time=21.138 ms
>> wrong total length 92 instead of 93
>>
>> --- gstatic.com ping statistics ---
>> 1 packets transmitted, 1 packets received, 0.0% packet loss
>> round-trip min/avg/max/stddev = 21.138/21.138/21.138/0.000 ms
>> bash-3.2$
>>
>>
>> bash-3.2$ ping -s 1400 -c 1 gstatic.com
>> PING gstatic.com (216.58.213.195): 1400 data bytes
>> 72 bytes from 216.58.213.195: icmp_seq=0 ttl=55 time=6.878 ms
>> wrong total length 92 instead of 1428
>>
>> --- gstatic.com ping statistics ---
>> 1 packets transmitted, 1 packets received, 0.0% packet loss
>> round-trip min/avg/max/stddev = 6.878/6.878/6.878/0.000 ms
>>
>> Once I try to send 65 Bytes of ICMP payload the response is cut short to
>> 92 bytes, the same might
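The size-clamping check in the transcript above is easy to automate; a sketch that parses the "wrong total length X instead of Y" warning as printed by that ping build (other ping implementations may word this differently):

```python
# Sketch: detect the response-size clamping shown in the ping transcript
# above by parsing the "wrong total length X instead of Y" warning.
# Output format assumed to match the quoted session; other ping builds
# may word it differently.
import re

WRONG_LEN = re.compile(r"wrong total length (\d+) instead of (\d+)")

def clamped_reply(ping_output):
    """Return (got, expected) if the reply was truncated, else None."""
    m = WRONG_LEN.search(ping_output)
    if not m:
        return None
    got, expected = int(m.group(1)), int(m.group(2))
    return (got, expected) if got < expected else None

sample = ("72 bytes from 216.58.213.195: icmp_seq=0 ttl=55 time=6.878 ms\n"
          "wrong total length 92 instead of 1428\n")
print(clamped_reply(sample))  # (92, 1428)
```

Running it over the output of pings at increasing payload sizes would pinpoint the exact size at which a host starts clamping its replies.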