On Tue, Feb 27, 2024 at 10:52 AM Rich Brown via Bloat <
bloat@lists.bufferbloat.net> wrote:
>
>
> On Feb 27, 2024, at 12:00 PM, bloat-requ...@lists.bufferbloat.net wrote:
>
> On 2/26/2024 6:28 AM, Rich Brown via Bloat wrote:
>
> - Avoid the WAN port's DHCP assigned subnet (what if the ISP uses
>
This is good work! I love reading their posts on scale like this.
It’s wild to me that the Linux kernel has (apparently) never implemented
shrinking the receive window, or handled the case of userspace starting a
large transfer and then never reading it… the latter is less
surprising,
I’ve found that _usually_ I can set cake’s bandwidth limits to 90-95% of
the advertised bandwidth, and everything “just works”. So long as you’re
routinely able to achieve the bandwidth, it tends to work.
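A minimal sketch of that rule of thumb, assuming a `tc`/cake egress shaper; the interface name and the 90-95% factor are from the observation above, not cake defaults:

```python
# Sketch: derive a cake shaping rate from the advertised link speed,
# using the 90-95% rule of thumb. Interface name is illustrative.

def cake_rate_mbit(advertised_mbit: float, factor: float = 0.95) -> int:
    """Shaping rate a bit below the advertised bandwidth."""
    return int(advertised_mbit * factor)

def cake_cmd(iface: str, advertised_mbit: float, factor: float = 0.95) -> str:
    """Build the tc invocation for egress shaping."""
    rate = cake_rate_mbit(advertised_mbit, factor)
    return f"tc qdisc replace dev {iface} root cake bandwidth {rate}Mbit"

print(cake_cmd("eth0", 1000))       # 950Mbit on a gigabit link
print(cake_cmd("eth0", 100, 0.90))  # the more conservative 90% end
```

Cake's `overhead` and `docsis` keywords can recover some of that margin by accounting for framing exactly, but the simple percentage tends to work.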
I’ve found in my testing over the years (I’ve been a user of fq_codel since
2013) that
I like the general idea, especially if there was a site-wide controller
module that can do the sort of frequency allocation that network engineers
do in dense AP deployments today: adjacent APs run on different frequency
bands so that they reduce the likelihood of stepping on each other's
I read this earlier in the week, and thought it applied well to describing
how excessive latency causes TCP (cubic, reno, etc) overshoot and collapse
in bufferbloat situations:
https://read.fluxcollective.org/i/98919216/lens-of-the-week
-Aaron
___
Are you asking what they _should_ be, or what the typical buffering seen in
equipment actually is?
On Wed, Mar 9, 2022 at 9:39 AM Michael Menth wrote:
> Hi all,
>
> I don't question the usefulness of AQMs for buffers - on the contrary.
> But what are up-to-date buffer sizes in networking
The edge of the datacenter, or the edge as in where a building meets the
internet? (either residential or commercial)
On Tue, Feb 1, 2022 at 6:27 AM Dave Taht wrote:
> One of the analogies that went by in this interview with nick mckeown
> was "programmable cables" and the IPU concept.
>
>
Can't switches send pause frames back over ethernet?
/me googles, and finds:
https://en.wikipedia.org/wiki/Ethernet_flow_control
On Wed, Jan 12, 2022 at 4:57 PM Dave Taht wrote:
> What appeared to be the case was that a ONT had a 50ms buffer at
> 100Mbit and was reconfigured to drive a gig
My own experiments with this, in the past (5+ years ago), were that you
absolutely had to use cabled setups for repeatability, but then didn't have
enough randomness in the variability to really test anything that was
problematic. We could create hidden nodes, or arbitrary meshes of devices,
but
If we see that AQM appears to be not functioning as expected for upstream
connections on DOCSIS3.1, what's the right avenue for getting that
resolved? (and does that only apply to the Comcast-owned, vs.
customer-owned, modems?)
On Sat, Jul 31, 2021 at 10:50 AM Simon Barber wrote:
> Awesome to
With the disclaimer that I'm not as strong in statistics and modelling as
I'd like to be
I think it's not useful to attempt to stochastically model the behavior of
what are actually active (well, reactive) components. The responses of
each piece are deterministic, but the inputs (users) are
On Mon, Jul 12, 2021 at 1:32 PM Ben Greear wrote:
> UDP is better for getting actual packet latency, for sure. TCP is
> typical-user-experience-latency though,
> so it is also useful.
>
> I'm interested in the test and visualization side of this. If there were
> a way to give engineers
> a
On Tue, Jul 6, 2021 at 7:26 PM Dave Taht wrote:
> On Tue, Jul 6, 2021 at 3:32 PM Aaron Wood wrote:
> >
> > I'm running an Odyssey from Seeed Studios (celeron J4125 with dual
> i211), and it can handle Cake at 1Gbps on a single core (which it needs to,
> because OpenWRT's i
> ...regardless of the US
> mode in use (ie sc-qam (3.0) or ofdma (3.1) upstream), so it should be
> enabled.
I'm running an Odyssey from Seeed Studios (celeron J4125 with dual i211),
and it can handle Cake at 1Gbps on a single core (which it needs to,
because OpenWRT's i211 support still has multiple receive queues disabled).
On Tue, Jun 22, 2021 at 12:44 AM Giuseppe De Luca
wrote:
> Also a PC Engines
Are these in-flux changes to where the upstream split is why some modems
report DOCSIS 3.1 downstream, but only 3.0 upstream? (and therefore aren't
enabling AQM on the upstream?)
-Aaron
On Tue, Jun 22, 2021 at 4:04 PM Livingood, Jason via Bloat <
bloat@lists.bufferbloat.net> wrote:
> > For
I think one of the big advantages that AQM has is that it doesn't know, or
care, who the flow is. It can't, itself, violate NN concerns because it
has no knowledge with which to do so.
Instead, it punishes the greedy flows that try to use more than their fair
share of the available bandwidth. It
I think the "I Love Lucy" chocolate factory scene is perhaps a good analogy:
https://www.youtube.com/watch?v=WmAwcMNxGqM
The chocolates start to come in too fast, and they can't keep up, but
because they aren't telling the kitchen to slow down, they keep piling up
until it collapses into a mess.
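The "doesn't know or care who the flow is" point can be sketched with a toy deficit round robin loop, the scheduling family fq_codel belongs to; all names and numbers here are illustrative:

```python
from collections import deque

# Minimal deficit round robin (DRR): the scheduler has no idea who owns a
# flow; a flow sending more than its share simply queues behind itself.

def drr_dequeue(flows, quantum=1514, rounds=100):
    """flows: dict name -> deque of packet sizes. Returns bytes served per flow."""
    deficits = {name: 0 for name in flows}
    served = {name: 0 for name in flows}
    for _ in range(rounds):
        for name, q in flows.items():
            if not q:
                continue  # empty flows earn no deficit
            deficits[name] += quantum
            while q and q[0] <= deficits[name]:
                pkt = q.popleft()
                deficits[name] -= pkt
                served[name] += pkt
    return served

# A greedy flow with 1000 queued packets and a light flow with 10 get equal
# per-round treatment while both are backlogged; the light flow drains fully.
flows = {"greedy": deque([1500] * 1000), "light": deque([1500] * 10)}
out = drr_dequeue(flows, rounds=50)
print(out)
```

The light flow finishes quickly while the greedy flow's backlog only hurts the greedy flow itself, which is the neutrality property described above.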
One of my long-standing concerns with the RRUL test is that the ICMP ping
portion is not isochronous: it runs at a variable rate based on RTT, which
means it uses bandwidth in inverse proportion to RTT, and that makes it
harder to compare the actual goodput of the TCP streams
running
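The inverse-RTT effect is just one-packet-in-flight arithmetic; a quick sketch (packet size illustrative):

```python
# The load of a reply-clocked ping stream scales as 1/RTT: with one probe
# in flight, it moves packet_size bytes per RTT in each direction.

def ping_load_kbps(packet_bytes: int, rtt_ms: float) -> float:
    """Bandwidth of a ping stream that sends the next echo on each reply."""
    return packet_bytes * 8 / rtt_ms  # bits per millisecond == kbit/s

# At 10 ms the probe uses 20x the bandwidth it does at 200 ms, which is
# the cross-run comparability problem described above.
print(ping_load_kbps(100, 10.0))    # 80.0 kbit/s
print(ping_load_kbps(100, 200.0))   # 4.0 kbit/s
```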
I'm still surprised at how hard it is to get people to understand that the
problem they're having (especially with real-time video like Zoom) isn't
bandwidth, but jitter and bloat...
On Wed, Mar 24, 2021 at 12:52 PM Jonathan Foulkes
wrote:
> Agreed, we need to be more vocal.
>
> I did look up
I'm continually frustrated that my cable headend appears to be using DOCSIS
3.1 for downstream, and 3.0 for upstream. Which means my Arris SB8200
isn't using PIE, but the standard FIFO (Gigabit down, 35Mbps up).
Cake to the rescue. With a celeron based router (x86-64), I'm hitting
line-rate
Those are great results.
I've been thinking for a while that algorithms / techniques like fq-codel
would be great if packaged into library form where they could be utilized
by application-layers. Obviously, not all application-layer queues can
deal with loss like TCP, but for all those that can,
>
> The CPE side has met willingness to investigate these issues from early
> on, but it seems that buffer handling is much harder on CPE chipsets
> than on base station chipsets. In particular on 5G. We have had some
> very good results on 4G, but they do not translate to 5G.
>
My own
...Apr 4, 2020 at 7:47 AM David P. Reed wrote:
> >
> > Thanks! I ordered one just now. In my experience, this company does
> rather neat stuff. Their XMOS based microphone array (ReSpeaker) is really
> useful. What's the state of play in Linux/OpenWRT for Intel 9560
> capabiliti
https://www.seeedstudio.com/ODYSSEY-X86J4105800-p-4445.html
quad-core Celeron J4105 1.5-2.5 GHz x64
8GB Ram
2x i211t intel ethernet controllers
intel 9560 802.11ac (wave2) wifi/bluetooth chipset
intel built-in graphics
onboard ARM Cortex-M0 and RPi & Arduino headers
m.2 and PCIe adapters
<$200
e interfaces, but not the
energy to deal with the troubleshooting. I think I still have an old
WNDR3700 in a box somewhere that I could prep as a backup, but I'd rather
not go through the hassle.
> On Wed, Mar 25, 2020 at 8:58 AM Aaron Wood wrote:
> >
> > One other thought I've had
On Wed, Mar 25, 2020 at 12:18 PM Dave Taht wrote:
> On Wed, Mar 25, 2020 at 8:58 AM Aaron Wood wrote:
> >
> > One other thought I've had with this, is that the apu2 is multi-core,
> and the i210 is multi-queue.
> >
> > Cake/htb aren't, iirc, setup to run
Dave Taht wrote:
>
> https://forum.openwrt.org/t/comparative-throughput-testing-including-nat-sqm-wireguard-and-openvpn/44724/44
>
> (H/T to aaron wood)
>
> The post persistently points out that openvpn tends to optimize for
> one direction only. This is in part due to the lar
elephants.
On Wed, Mar 25, 2020 at 4:03 AM Toke Høiland-Jørgensen wrote:
> Sebastian Moeller writes:
>
> > Hi Toke,
> >
> >
> >> On Mar 25, 2020, at 09:58, Toke Høiland-Jørgensen wrote:
> >>
> >> Aaron Wood writes:
> >>
> &
ter and the APs
up).
> On March 25, 2020 6:29:17 AM GMT+01:00, Matt Taggart
> wrote:
>>
>> On 3/24/20 10:01 PM, Aaron Wood wrote:
>>
>> I recently got CenturyLink gig fiber and bought one of these:
>>
>> Qotom Q355G4
>> https://www.amazon.com/gp/produc
>
> >>> But it's DOCSIS 3.1, so why isn't PIE working? Theory: It's in
> DOCSIS 3.0
> >>> upstream mode based on the status LEDs. Hopefully it will go away if
> I can
> >>> convince it to run in DOCSIS 3.1 mode.
> >>
> >> I think that while PIE is "mandatory to implement" in DOCSIS 3.1, the
>
(hit send early, somehow)...
Although this thread makes me wonder if perhaps not:
https://lists.bufferbloat.net/pipermail/cake/2018-August/004285.html
On Tue, Mar 24, 2020 at 10:01 PM Aaron Wood wrote:
> I recently upgraded service from 150Mbps down, 10Mbps up to xfinity's
> gigabit (with 35M
I recently upgraded service from 150Mbps down, 10Mbps up to xfinity's gigabit
(with 35Mbps up) tier, and picked up a DOCSIS 3.1 modem to go with it.
Flent test results are here:
https://burntchrome.blogspot.com/2020/03/bufferbloat-with-comcast-gigabit-with.html
tl;dr: 1000ms of upstream bufferbloat
Maybe he's on a DOCSIS 3.1 headend that's also using pie? Pie doesn't need
to know the outbound rate, correct? as it's meant to be driven by the
RTS/CTS type behavior that the upstream traffic on cable has (the correct
terms for cable aren't coming to mind at the moment).
On Sat, Oct 6, 2018 at
I'd focus on the distributors of the Linux BSP used on those routers: the
silicon vendors themselves. Current routers shouldn't be shipping with 3.2
kernels, or even 3.10, and yet... ::sigh::
I find it very frustrating that they fork the kernel for their own use,
instead of maintaining patches
For the graphs, it would be great if they were using a normalized output
format that allows for easy comparisons between runs. Especially the y axis
for the “all” graph.
On Mon, Nov 27, 2017 at 15:55 Dave Taht wrote:
> On Mon, Nov 27, 2017 at 3:16 PM, Martin Geddes
I'm comparing some numbers between the fremont node and a friend's Droplet
running netserver.
We've previously noted that we don't see more than a 120Mbps download rate
from the fremont node.
Today I was able to confirm in multiple back-to-back runs that the fremont
node was only giving me about
21, 2017 at 8:16 AM, Aaron Wood <wood...@gmail.com> wrote:
> The friend of mine that I've been working with brought up a cloud node
> somewhere with ubuntu and netperf on it, and from another location
> (business internet) able to consistently get better throughput from his
> clo
something about that node in particular. It seems to have a
125Mbps cap (so I guess about a 140-150Mbps line-rate cap?).
What kind of node is it running on?
On Thu, Sep 21, 2017 at 8:13 AM, Aaron Wood <wood...@gmail.com> wrote:
> I'd wondered about single vs. multiple, but I'm getti
-boun...@lists.bufferbloat.net] *On Behalf Of
> *Aaron
> Wood
> *Sent:* Tuesday, August 29, 2017 11:16 PM
> *To:* bloat <bloat@lists.bufferbloat.net>
> *Subject:* [Bloat] different speeds on different ports? (benchmarking fun)
>
>
>
> I don't have a full writeup yet, b
for an application?
-Aaron
On Fri, Apr 14, 2017 at 11:00 Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Thu, 2017-04-13 at 20:12 -0700, Aaron Wood wrote:
> > When I was testing with my iPerf changes, I realized that the sch_fq
> > pacing (which in iperf is set via set
When I was testing with my iPerf changes, I realized that the sch_fq pacing
(which in iperf is set via setsockopt()), is pacing at a bandwidth that's
set at a pretty low level in the stack (which makes sense). This is
different from the application pacing that iperf does (which is pacing the
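For reference, the setsockopt() pacing mentioned here looks roughly like this on Linux; `SO_MAX_PACING_RATE` is Linux-specific, and the fallback value 47 is its usual constant on Pythons that don't expose it:

```python
import socket

# SO_MAX_PACING_RATE caps a socket's send rate in bytes/sec, enforced low in
# the stack by sch_fq (or TCP's internal pacing on newer kernels).
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def set_pacing(sock: socket.socket, bits_per_sec: int) -> None:
    """Cap this socket's send rate; the kernel paces below the application."""
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, bits_per_sec // 8)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
set_pacing(s, 100_000_000)  # pace at ~100 Mbit/s
print(s.getsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE))  # bytes/sec
s.close()
```

This is the kernel-level pacing; the application-level pacing iperf also does (sleeping between writes) happens far above it, which is the distinction being drawn.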
This is wifi tethering from my OSX laptop to my iPhone, right after the
laptop tried to reconnect to the iphone after waking up.
64 bytes from 8.8.8.8: icmp_seq=0 ttl=56 time=42907.196 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=41922.290 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56
I've been seeing that as well, not sure what to make of it.
On Tue, Jan 10, 2017 at 13:34 Dave Taht wrote:
> This is from the first distanc-y test of the latest lede (lacking wifi
> ATF tho) on a picostation M2HP, about the lowest end wifi router I
> have in the field.
>
>
On Tue, Dec 20, 2016 at 11:02 AM, Dave Taht wrote:
> Active public servers include:
>
> flent-freemont.bufferbloat.net
> ( this is colocated with flent-bbr-west which has bbr on by default - an
> interesting test might be testing both these servers at the same time
> via the
What's the current status of the fq 802.11 work with respect to the Marvell
chipsets (88W8864)? I'd like to switch my WRT1900AC over to LEDE and get
this feature set into testing at home.
Thanks,
Aaron
On Sun, Dec 4, 2016 at 9:13 AM, Dave Taht wrote:
> On Sun, Dec 4, 2016 at 5:40 AM, Rich Brown
> wrote:
> > As I browse the web, I see several sets of performance measurement using
> either netperf or iperf, and never know if either offers an
This is really fascinating reading.
The following made me stop for a second, though:
"The bucket is typically full at connection startup so BBR learns the
underlying network's BtlBw, but once the bucket empties, all packets sent
faster than the (much lower than BtlBw) bucket fill rate are
> Take, for example, the over-optimistic fiber build-out that
> essentially terminated in 2000 - it's taken 16 years to use all that
> up
>
That sounds like it was in the right ballpark. Trenching is so expensive,
only doing it every couple decades sounds like a reasonable plan (even
better
I need to box my test unit up and return it, but my area has 160/12 service
if you get the upgraded rates (which I do)
-Aaron
On Thu, Oct 20, 2016 at 11:48 Klatsky, Carl
wrote:
> On Thu, 20 Oct 2016, Rich Brown wrote:
>
> >
On Fri, Sep 30, 2016 at 1:12 AM, Mikael Abrahamsson <swm...@swm.pp.se>
wrote:
> On Thu, 29 Sep 2016, Aaron Wood wrote:
>
>> While you think 3.10 is old, in my experience it's still seen as cutting
>> edge by many. RHEL is still only at 3.10. And routers are using much
On Thu, Sep 29, 2016 at 8:50 PM, Dave Täht wrote:
>
> > Android is still shipping linux 3.10? How... quaint.
> >
> > Since this is mobile, I'm pretty sure it will present a whole new host
> > of "data points".
>
> yes! (there have been a few studies of this, btw)
>
> The part
On Thu, Sep 29, 2016 at 12:43 PM, Dave Täht wrote:
>
>
> On 9/29/16 4:24 AM, Mário Sérgio Fujikawa Ferreira wrote:
> > Is there a mailing list I can lurk in to follow on the development?
> >
> > I'm most interested on a delta to apply to Android 6.x Franco Kernel
> >
hat used those, and I've always
> worried about iperf's internal notion of a sampling interval.
>
> On 9/20/16 3:00 PM, Aaron Wood wrote:
> > I modified iperf3 to use a 1ms timer, and was able to get things much
> > smoother. I doubt it's as smooth as iperf3 gets on Linux when
stopped
it from achieving the target rate. There will be another writeup on that,
but I need to get some good sample data together for graphing.
-Aaron Wood
What about a strip-chart with multiple lanes for every device. Then use
either a line graph or a spectrograph (color of band) style marking to show
data rate used at that time. If the main goal is fairness and airtime,
then the eye can visually compute that based on how evenly spread out the
scapy (python) should be able to keep up with a voip or video stream. I've
been using it to parse packets and do some other manipulations. It's
certainly not C, performance-wise, but it's incredibly flexible at the
protocol manipulation level.
The performance issues that I'm running into with
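For flavor, this is the kind of field extraction scapy does declaratively, done by hand on an RTP header (the voip case; layout per RFC 3550, values illustrative):

```python
import struct

# Manual parse of the 12-byte fixed RTP header, the same job scapy's
# dissectors do declaratively.

def parse_rtp(pkt: bytes) -> dict:
    flags, pt, seq, ts, ssrc = struct.unpack("!BBHII", pkt[:12])
    return {
        "version": flags >> 6,          # top two bits of first octet
        "marker": bool(pt & 0x80),      # top bit of second octet
        "payload_type": pt & 0x7F,
        "seq": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Hand-built header: version 2, PT 0 (PCMU), seq 7, ts 160, ssrc 0xDEADBEEF
hdr = struct.pack("!BBHII", 0x80, 0, 7, 160, 0xDEADBEEF)
print(parse_rtp(hdr))
```

Scapy buys you this for every protocol at once, which is why it wins on flexibility even while losing to C on raw packet rate.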
Huh, those results are rather different from mine when I had free.fr:
http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
-Aaron
On Fri, Jun 5, 2015 at 1:06 PM, Dave Taht dave.t...@gmail.com wrote:
63% F bloat grade for
What about the link type? If there are extra overheads going on, that's
going to muck with the calculations (possibly adding latency, but shouldn't
be cutting bandwidth), since the throttling calculations will be wrong.
His ISP may be able to help with that.
It would be interesting to see what
All,
I've been lurking on the OpenWRT forum, looking to see when the CC builds
for the WRT1900AC stabilized, and they seem to be so (for a very beta-ish
version of stable).
So I went ahead and loaded up the daily ( CHAOS CALMER (Bleeding Edge,
r45715)).
After getting Luci and sqm-scripts
But will it trigger at all? If the inbound rate is say 50Mbps, and the
link to the in-home devices are over 100Mbps ethernet, will codel _ever_
see a 5ms buffer on inbound?
Or is the shaping buffering incoming packets, and creating a bundle that it
can measure? (I don't know the internals of
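A fluid-model sketch of why the answer is "only if you shape": a standing queue (and therefore a sojourn time codel can act on) forms only where the drain rate is below the arrival rate. All rates here are illustrative:

```python
# Peak queueing delay when a burst arrives faster than the queue drains.
# If drain >= arrival, the queue never backs up and codel never triggers.

def peak_delay_ms(arrival_mbit: float, drain_mbit: float,
                  burst_ms: float = 100.0) -> float:
    """Delay (ms) after a burst_ms burst at arrival_mbit into a drain_mbit
    queue; Mbit/s * ms gives kbit of backlog, divided by the drain rate."""
    if arrival_mbit <= drain_mbit:
        return 0.0
    return (arrival_mbit - drain_mbit) * burst_ms / drain_mbit

# Unshaped: 50 Mbit/s into a 100 Mbit/s port -- codel never sees 5 ms.
print(peak_delay_ms(50, 100))
# Shaped to 45 Mbit/s: the queue forms at the shaper, exceeds codel's
# 5 ms target, and codel can engage.
print(round(peak_delay_ms(50, 45), 1))
```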
On Tue, Apr 21, 2015 at 3:13 PM, jb jus...@dslr.net wrote:
Today I've switched it back to large receive window max.
The customer base is everything from GPRS to gigabit. But I know from
experience that if a test doesn't flatten someone's gigabit connection they
will immediately assume oh
Toke,
I actually tend to see a bit higher latency with ICMP at the higher
percentiles.
http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
Although the biggest boost I've seen ICMP
On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith smithb...@gmail.com wrote:
Using horst I've discovered that the major reason our WiFi network sucks
is because 90% of the packets are sent at the 6mbit rate. Most of the rest
show up in the 12 and 24mbit zone with a tiny fraction of them using
I do this often at work, using a separate machine to capture traffic using
wireshark. Wireshark makes a lot of the analysis fairly straightforward
(especially with its excellent packet dissectors). By capturing in
radiotap mode, you get RSSI/noise levels on the 802.11n packet, the rates
Perhaps just a wall of shame? No venom, just point out the failings, and
call people out.
But, frankly, I don't think any of the router mfr's actually care (I've
seen no evidence of it), and since they're not in the business of actually
making these things (just putting their labels on them), I
On Wed, Sep 3, 2014 at 4:08 AM, Jonathan Morton chromati...@gmail.com
wrote:
Given that the CPU load is confirmed as high, the pcap probably isn't as
useful. The rest would be interesting to look at.
Are you able to test with smaller packet sizes? That might help to
isolate
But this doesn't really answer the question of why the WNDR has so much
lower a ceiling with shaping than without. The G4 is powerful enough that
the overhead of shaping simply disappears next to the overhead of shoving
data around. Even when I turn up the shaping knob to a value quite
Comcast has upped the download rates in my area, from 50Mbps to 100Mbps.
This morning I tried to find the limit of the WNDR3800. And I found it.
50Mbps is still well within capabilities, 100Mbps isn't.
And as I've seen Dave say previously, it's right around 80Mbps total
(download + upload).
On Tue, Apr 29, 2014 at 5:46 PM, Dave Taht dave.t...@gmail.com wrote:
Yes, but as soon as you hit the long distance network the latency is the
same regardless of access method. So while I agree that understanding the
effect of latency is important, it's no longer a meaningful way of
however what we are probably seeing with the measurement flows is
slow start causing a whole bunch of packets to be lost in a bunch.
That would line up with the timing, and the periodic drops that I see in
the flows when using Toke's newer wrapper (and netperf head), which attempt
to work
I'm definitely interested in seeing if the new pie implementation fares
better than what I was seeing on 3.10.24-8.
-Aaron
On Mon, Jan 20, 2014 at 4:46 PM, Dave Taht dave.t...@gmail.com wrote:
http://www.spinics.net/lists/netdev/msg264935.html
Hat off to vijay and the pie folk at cisco who
I find the notion of LTE that's faster than DSL somewhat amazing, still.
(jealous)
-Aaron
On Fri, Nov 1, 2013 at 1:48 PM, Mikael Abrahamsson swm...@swm.pp.se wrote:
On Thu, 31 Oct 2013, Dave Taht wrote:
I'm really impressed you can get ~72Mbit down and ~4Mbit up. (closer to 8
up when you