What are the compelling reasons to try versions newer than 3.7.5-2? I'm
using a WNDR3800.
And I can't seem to find release notes for anything newer than that (but
e-mail notifications to the devel mailing list).
Thanks,
Aaron Wood
> Thanks for the information. I'd be interested in why you have chosen
> PIE, e.g., instead of sfq-CoDel. Any pointers to evaluation
> reports/results? Last time I saw a presentation on this it seemed
> that CoDel was performing quite well.
>
I think this CableLabs report makes the argument for PIE.
I find the notion of LTE that's faster than DSL somewhat amazing, still.
(jealous)
-Aaron
On Fri, Nov 1, 2013 at 1:48 PM, Mikael Abrahamsson wrote:
> On Thu, 31 Oct 2013, Dave Taht wrote:
>
> I'm really impressed you can get ~72Mbit down and ~4Mbit up. (closer to 8
>> up when you consider ac
> The comment about the WNDR3800 not being able to push this is of course
relevant, so I guess we need a better platform if we want to do testing for
these higher speeds.
One thing that I've noticed a number of newer chipsets doing is moving
"network acceleration" into hardware, as a way to get to
I'm definitely interested in seeing if the new pie implementation fares
better than what I was seeing on 3.10.24-8.
-Aaron
On Mon, Jan 20, 2014 at 4:46 PM, Dave Taht wrote:
> http://www.spinics.net/lists/netdev/msg264935.html
>
> Hat off to vijay and the pie folk at cisco who shepherded the co
>
> however what we are probably seeing with the measurement flows is
> slow start causing a whole bunch of packets to be lost in a bunch.
>
That would line up with the timing, and the periodic drops that I see in
the flows when using Toke's newer wrapper (and netperf head), which attempt
to work
On Tue, Apr 29, 2014 at 5:46 PM, Dave Taht wrote:
> > Yes, but as soon as you hit the long distance network the latency is the
> > same regardless of access method. So while I agree that understanding the
> > effect of latency is important, it's no longer a meaningful way of
> selling
> > fiber
List,
In talking with a friend over the weekend that moves data around for the
national labs (on links at rates like 10Gbps), we ended up having a rather
interesting discussion about just how radically different the problem
spaces are vs. what he's seen in the bufferbloat community.
They have few
http://www.gateworks.com/product/item/ventana-gw5310-network-processor
Out of price range in single units, but I don't know where the price breaks
kick in. Dual-core 800MHz ARM should be plenty of power for the GigE ports.
I think to get any sort of platform like this by a major vendor, we're
lo
Comcast has upped the download rates in my area, from 50Mbps to 100Mbps.
This morning I tried to find the limit of the WNDR3800. And I found it.
50Mbps is still well within capabilities, 100Mbps isn't.
And as I've seen Dave say previously, it's right around 80Mbps total
(download + upload).
On Fri, Aug 29, 2014 at 11:06 AM, Dave Taht wrote:
> On Fri, Aug 29, 2014 at 9:57 AM, Aaron Wood wrote:
> > Comcast has upped the download rates in my area, from 50Mbps to 100Mbps.
> > This morning I tried to find the limit of the WNDR3800. And I found it.
> > 50Mbps
> > But this doesn't really answer the question of why the WNDR has so much
> lower a ceiling with shaping than without. The G4 is powerful enough that
> the overhead of shaping simply disappears next to the overhead of shoving
> data around. Even when I turn up the shaping knob to a value quite
Luckily, I don't mind being wrong (or even _way_ off the mark).
I don't think that's it.
>
> First a nitpick: the PowerBook version of the late-model G4 (7447A)
> doesn't have the external L3 cache interface, so it only has the 256KB or
> 512KB internal L2 cache (I forget which). The desktop ver
>
> What this makes me realize is that I should go instrument the cpu stats
> with each of the various operating modes:
>
> * no shaping, anywhere
> * egress shaping
> * egress and ingress shaping at various limited levels:
> * 10Mbps
> * 20Mbps
> * 50Mbps
> * 100Mbps
>
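One way to instrument that matrix is to sample /proc/stat around each run; a sketch, assuming Linux's jiffy-counter layout (these helper names are mine, not an existing tool):

```python
def read_cpu_jiffies():
    """First 'cpu' line of /proc/stat as a list of jiffy counters:
    user nice system idle iowait irq softirq steal ..."""
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def cpu_busy_fraction(sample_a, sample_b):
    """Busy fraction of total CPU time between two jiffy samples;
    idle + iowait (fields 3 and 4) count as idle time."""
    idle = (sample_b[3] + sample_b[4]) - (sample_a[3] + sample_a[4])
    total = sum(sample_b) - sum(sample_a)
    return 1.0 - idle / total
```

Sample before and after a run at each shaping level, and the busy fraction should show where the router tops out.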
So I set th
On Wed, Sep 3, 2014 at 4:08 AM, Jonathan Morton
wrote:
> Given that the CPU load is confirmed as high, the pcap probably isn't as
> useful. The rest would be interesting to look at.
>
> Are you able to test with smaller packet sizes? That might help to
> isolate packet-throughput (ie. connecti
Perhaps just a wall of shame? No venom, just point out the failings, and
call people out.
But, frankly, I don't think any of the router mfrs actually care (I've
seen no evidence of it), and since they're not in the business of actually
making these things (just putting their labels on them), I d
But until the silicon vendors update _their_ forks of OpenWRT, commercially
available home routers won't have these benefits. Because the home router
market is dominated by packaged reference designs from one of a very small
number of companies that actually make all the chipsets (Dave, I know you
I do this often at work, using a separate machine to capture traffic using
wireshark. Wireshark makes a lot of the analysis fairly straightforward
(especially with its excellent packet dissectors). By capturing in
radiotap mode, you get RSSI/noise levels on the 802.11n packet, the rates
involved
On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith wrote:
> Using horst I've discovered that the major reason our WiFi network sucks
> is because 90% of the packets are sent at the 6mbit rate. Most of the rest
> show up in the 12 and 24mbit zone with a tiny fraction of them using the
> higher MCS ra
Toke,
I actually tend to see a bit higher latency with ICMP at the higher
percentiles.
http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
Although the biggest "boost" I've seen ICMP g
On Tue, Apr 21, 2015 at 3:13 PM, jb wrote:
> Today I've switched it back to large receive window max.
>
> The customer base is everything from GPRS to gigabit. But I know from
> experience that if a test doesn't flatten someones gigabit connection they
> will immediately assume "oh congested serv
On Thu, Apr 30, 2015 at 8:10 PM, jb wrote:
> Already users are like "how can i fix this!".
>
> I've just replied to one who has lower speeds on the surfboard SB6141
> which is a modem designed for crazy cable speeds. He has an "F" and his
> downstream bloat is terrible, and upstream not much bett
ICMP prioritization over TCP?
> >Ideas?
>
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
But will it trigger at all? If the inbound rate is say 50Mbps, and the
link to the in-home devices are over 100Mbps ethernet, will codel _ever_
see a 5ms buffer on inbound?
Or is the shaping buffering incoming packets, and creating a bundle that it
can measure? (I don't know the internals of how
All,
I've been lurking on the OpenWRT forum, looking to see when the CC builds
for the WRT1900AC stabilized, and they seem to be so (for a very "beta"-ish
version of stable).
So I went ahead and loaded up the daily build (CHAOS CALMER, Bleeding Edge,
r45715).
After getting Luci and sqm-scripts insta
May 23, 2015 at 10:17 PM, Aaron Wood wrote:
> All,
>
> I've been lurking on the OpenWRT forum, looking to see when the CC builds
> for the WRT1900AC stabilized, and they seem to be so (for a very "beta"-ish
> version of stable).
>
> So I went ahead and loaded u
On Sat, May 23, 2015 at 11:44 PM, Dave Taht wrote:
>
> And it has a fan. Hate fans. Amusingly (I guess), I had this same
> chipset to fiddle with in the "mirabox" and it ran way too hot.
>
I haven't hit the fan, yet
> It is not clear why you are getting an inaccurate rate out of it, eit
What about the link type? If there are extra overheads going on, that's
going to muck with the calculations (possibly adding latency, but shouldn't
be cutting bandwidth), since the throttling calculations will be wrong.
His ISP may be able to help with that.
It would be interesting to see what wo
Huh, those results are rather different from mine when I had free.fr:
http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
-Aaron
On Fri, Jun 5, 2015 at 1:06 PM, Dave Taht wrote:
> 63% F bloat grade for
> http://www.dslreports.com/speedtest/results/isp/r3895-Orang
'5Gbps system throughput "without taxing the CPU,"'
Lots of offloads?
On Mon, Jan 4, 2016 at 10:37 PM, Jonathan Morton
wrote:
> This looks potentially interesting:
> http://www.theregister.co.uk/2016/01/05/broadcom_pimps_iot_router_chip/
>
> Even if that particular device turns out to be hard t
scapy (python) should be able to keep up with a voip or video stream. I've
been using it to parse packets and do some other manipulations. It's
certainly not C, performance-wise, but it's incredibly flexible at the
protocol manipulation level.
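Under the hood, a dissector is mostly fixed-offset field extraction; a stdlib-only sketch of what that looks like for an IPv4 header (hypothetical helper, not Scapy's API):

```python
import struct

def parse_ipv4_header(raw: bytes):
    """Parse the fixed 20-byte IPv4 header into a dict (ignores options)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,        # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,            # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```

Scapy layers this kind of extraction for every protocol it knows, which is where the flexibility (and the Python-level overhead) comes from.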
The performance issues that I'm running into with it
Un-bloated power-line-to-AP units would be awesome. As would power-line to
POE adapters for small electronics. Although you have the same difficulty
with on-boarding there that you do with wifi.
-Aaron Wood
On Mon, Apr 18, 2016 at 9:35 AM, Jonathan Morton
wrote:
>
> > On 18 Apr, 20
What about a strip-chart with multiple lanes for every device. Then use
either a line graph or a spectrograph (color of band) style marking to show
data rate used at that time. If the main goal is fairness and airtime,
then the eye can visually compute that based on how evenly spread out the
slic
I'm looking at DPDK for a project, but I think I can make substantial gains
with just AF_PACKET + FANOUT and SO_REUSEPORT. It's not clear to me yet
how much DPDK is going to gain over those (and those can go a long way on
higher-powered platforms).
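A minimal sketch of the SO_REUSEPORT side (assuming Linux >= 3.9, where the kernel hashes incoming flows across all sockets bound to the same port, one per worker thread/process):

```python
import socket

def reuseport_pair(port=0):
    """Open two UDP sockets bound to the same address/port via
    SO_REUSEPORT; the kernel load-balances incoming datagrams
    between them by flow hash."""
    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    a.bind(("127.0.0.1", port))
    port = a.getsockname()[1]   # learn the kernel-chosen port
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    b.bind(("127.0.0.1", port))
    return a, b
```

AF_PACKET with PACKET_FANOUT is the same idea one layer down, spreading raw frames across a group of capture sockets.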
On lower-end systems, I'm more suspicious of the
> where the tx and rx rings are cleaned up in the same thread and there
> is only one interrupt line for both.
>
> 51: 18 59244 253350 314273 PCI-MSI
> 1572865-edge enp3s0-TxRx-0
> 52: 5 484274 141746 197260 PCI-MSI
> 1572866-edge enp3s0-T
Just came across this at the top of the README for the ixgbe driver: (
http://downloadmirror.intel.com/22919/eng/README.txt)
WARNING: The ixgbe driver compiles by default with the LRO (Large Receive
Offload) feature enabled. This option offers the lowest CPU utilization for
receives, but is comp
imum rate after congestion has stopped
it from achieving the target rate. There will be another writeup on that,
but I need to get some good sample data together for graphing.
-Aaron Wood
nd I've always
> worried about iperf's internal notion of a sampling interval.
>
> On 9/20/16 3:00 PM, Aaron Wood wrote:
> > I modified iperf3 to use a 1ms timer, and was able to get things much
> > smoother. I doubt it's as smooth as iperf3 gets on Linux when fq
On Thu, Sep 29, 2016 at 12:43 PM, Dave Täht wrote:
>
>
> On 9/29/16 4:24 AM, Mário Sérgio Fujikawa Ferreira wrote:
> > Is there a mailing list I can lurk in to follow on the development?
> >
> > I'm most interested on a delta to apply to Android 6.x Franco Kernel
> > (https://github.com/francisco
On Thu, Sep 29, 2016 at 8:50 PM, Dave Täht wrote:
>
> > Android is still shipping linux 3.10? How... quaint.
> >
> > Since this is mobile, I'm pretty sure it will present a whole new host
> > of "data points".
>
> yes! (there have been a few studies of this, btw)
>
> The part that we don't re
On Fri, Sep 30, 2016 at 1:12 AM, Mikael Abrahamsson
wrote:
> On Thu, 29 Sep 2016, Aaron Wood wrote:
>
> While you think 3.10 is old, in my experience it's still seen as cutting
>> edge by many. RHEL is still only at 3.10. And routers are using much
>> older 3.x ke
bucket intervals).
The branch is here: https://github.com/woody77/iperf/tree/pacing_timer
-Aaron
On Tue, Sep 20, 2016 at 4:32 PM, Aaron Wood wrote:
> On Tue, Sep 20, 2016 at 3:03 PM, Dave Täht wrote:
>
>> Groovy. I note that I am really fond of the linux "fdtimer" not
I need to box my test unit up and return it, but my area has 160/12 service
if you get the upgraded rates (which I do)
-Aaron
On Thu, Oct 20, 2016 at 11:48 Klatsky, Carl
wrote:
> On Thu, 20 Oct 2016, Rich Brown wrote:
>
> > https://www.nanog.org/sites/default/files/20160922_Klatsky_First_Steps
>
On Thu, Oct 27, 2016 at 12:30 PM, David Lang wrote:
> On Thu, 27 Oct 2016, Dave Taht wrote:
>
>>
>> I am increasingly convinced that without a killer application that
>> requires it,
>> we've hit "peak bandwidth".
>>
>
> You sound like my College Professor from the early 90's who said that the
>
> Take, for example, the over-optimistic fiber build-out that
> essentially terminated in 2000 - it's taken 16 years to use all that
> up
>
That sounds like it was in the right ballpark. Trenching is so expensive,
only doing it every couple decades sounds like a reasonable plan (even
better i
> 5) But I'm not hopeful that any of the COTS router vendors are going to
> adopt these techniques, simply because they've been impervious to our
> earlier entreaties. That doesn't mean we shouldn't try again - it'd be a
> helluva competitive advantage to incorporate the 25-50 man years of intense
This is really fascinating reading.
The following made me stop for a second, though:
"The bucket is typically full at connection startup so BBR learns the
underlying network's BtlBw, but once the bucket empties, all packets sent
faster than the (much lower than BtlBw) bucket fill rate are dropped
On Sun, Dec 4, 2016 at 9:13 AM, Dave Taht wrote:
> On Sun, Dec 4, 2016 at 5:40 AM, Rich Brown
> wrote:
> > As I browse the web, I see several sets of performance measurement using
> either netperf or iperf, and never know if either offers an advantage.
> >
> > I know Flent uses netperf by defaul
What's the current status of the fq 802.11 work with respect to the Marvell
chipsets (88W8864). I'd like to switch my WRT1900AC over to LEDE and get
this feature set into testing at home.
Thanks,
Aaron
On Tue, Dec 20, 2016 at 11:02 AM, Dave Taht wrote:
> Active public servers include:
>
> flent-freemont.bufferbloat.net
> ( this is colocated with flent-bbr-west which has bbr on by default - an
> interesting test might be testing both these servers at the same time
> via the rtt_fair* tests from
On Tue, Dec 20, 2016 at 12:20 PM, Joel Wirāmu Pauling
wrote:
> My biggest bug bear is that reliance on netperf/netserver with -DEMO mode
> compilation time flag breaks compilation on recent RHEL and Fedora boxes
> due to recent GCC incompatibilities.
>
I ran into some issues on OSX due to the gc
I've been seeing that as well, not sure what to make of it.
On Tue, Jan 10, 2017 at 13:34 Dave Taht wrote:
> This is from the first distanc-y test of the latest lede (lacking wifi
> ATF tho) on a picostation M2HP, about the lowest end wifi router I
> have in the field.
>
> It's just as lovely as
This is wifi tethering from my OSX laptop to my iPhone, right after the
laptop tried to reconnect to the iphone after waking up.
64 bytes from 8.8.8.8: icmp_seq=0 ttl=56 time=42907.196 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=41922.290 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=40971
When I was testing with my iPerf changes, I realized that the sch_fq pacing
(which in iperf is set via setsockopt()), is pacing at a bandwidth that's
set at a pretty low level in the stack (which makes sense). This is
different from the application pacing that iperf does (which is pacing the
goodp
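That socket-level pacing knob can be sketched like this (assuming Linux: SO_MAX_PACING_RATE is socket option 47, enforced in bytes per second by sch_fq, or by TCP's internal pacing when sch_fq isn't installed; `set_pacing_rate` is my name for it):

```python
import socket

# Linux-only; fall back to the raw option number if this Python
# build doesn't expose the constant.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def set_pacing_rate(sock, bytes_per_sec):
    """Cap this socket's transmit rate below the application layer,
    which is why it paces wire bytes rather than goodput."""
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, int(bytes_per_sec))
    return sock.getsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE)
```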
for an application?
-Aaron
On Fri, Apr 14, 2017 at 11:00 Eric Dumazet wrote:
> On Thu, 2017-04-13 at 20:12 -0700, Aaron Wood wrote:
> > When I was testing with my iPerf changes, I realized that the sch_fq
> > pacing (which in iperf is set via setsockopt()), is pacing at a
> >
s many streams from different servers to achieve these
> speeds.
>
> I’m assuming flent is a single stream, so you’re at the mercy of TCP
> receive windows and latency limiting how fast you can go on that single
> stream.
>
>
>
> *From:* Bloat [mailto:bloat-boun...@lists.b
that it's something about that node in particular. It seems to have a
125Mbps cap (so I guess about a 140-150Mbps line-rate cap?).
What kind of node is it running on?
On Thu, Sep 21, 2017 at 8:13 AM, Aaron Wood wrote:
> I'd wondered about single vs. multiple, but I'm getting pre
21, 2017 at 8:16 AM, Aaron Wood wrote:
> The friend of mine that I've been working with brought up a cloud node
> somewhere with ubuntu and netperf on it, and from another location
> (business internet) able to consistently get better throughput from his
> cloud node setup th
I'm comparing some numbers between the fremont node and a friend's Droplet
running netserver.
We've previous noted that we don't see more than a 120Mbps download rate
from the fremont node.
Today I was able to confirm in multiple back-to-back runs that the fremont
node was only giving me about 12
For the graphs, it would be great if they were using a normalized output
that allows for easy comparisons between runs, especially the y-axis for
the “all” graph.
On Mon, Nov 27, 2017 at 15:55 Dave Taht wrote:
> On Mon, Nov 27, 2017 at 3:16 PM, Martin Geddes
> wrote:
> > Hi Toke,
> >
> > The t
I'd focus on the distributors of the Linux BSP used on those routers: the
silicon vendors themselves. Current routers shouldn't be shipping with 3.2
kernels, or even 3.10, and yet... ::sigh::
I find it very frustrating that they fork the kernel for their own use,
instead of maintaining patches
Maybe he's on a DOCSIS 3.1 headend that's also using pie? Pie doesn't need
to know the outbound rate, correct? as it's meant to be driven by the
RTS/CTS type behavior that the upstream traffic on cable has (the correct
terms for cable aren't coming to mind at the moment).
On Sat, Oct 6, 2018 at
I also know of commercial product(s) using it internally.
On Fri, Jun 7, 2019 at 4:03 PM Bruce Ferrell wrote:
> On 6/7/19 1:23 PM, Dave Taht wrote:
> > what is openvswitch used for nowadays?
> >
> > https://mail.openvswitch.org/pipermail/ovs-dev/2015-March/296317.html
> >
> I haven't done it mys
I recently upgraded service from 150Mbps down / 10Mbps up to xfinity's
gigabit (with 35Mbps up) tier, and picked up a DOCSIS 3.1 modem to go with it.
Flent test results are here:
https://burntchrome.blogspot.com/2020/03/bufferbloat-with-comcast-gigabit-with.html
tl;dr: 1000ms of upstream bufferbloat
Bu
(hit send early, somehow)...
Although this thread makes me wonder if perhaps not:
https://lists.bufferbloat.net/pipermail/cake/2018-August/004285.html
On Tue, Mar 24, 2020 at 10:01 PM Aaron Wood wrote:
> I recently upgraded service from 150up, 10dn Mbps to xfinity's gigabit
> (wit
>
> >>> But it's DOCSIS 3.1, so why isn't PIE working? Theory: It's in
> DOCSIS 3.0
> >>> upstream mode based on the status LEDs. Hopefully it will go away if
> I can
> >>> convince it to run in DOCSIS 3.1 mode.
> >>
> >> I think that while PIE is "mandatory to implement" in DOCSIS 3.1, the
> >>
e router and the APs
up).
> On March 25, 2020 6:29:17 AM GMT+01:00, Matt Taggart
> wrote:
>>
>> On 3/24/20 10:01 PM, Aaron Wood wrote:
>>
>> I recently got CenturyLink gig fiber and bought one of these:
>>
>> Qotom Q355G4
>> https://www.amazon.com/gp/p
't likely to all
be elephants.
On Wed, Mar 25, 2020 at 4:03 AM Toke Høiland-Jørgensen wrote:
> Sebastian Moeller writes:
>
> > Hi Toke,
> >
> >
> >> On Mar 25, 2020, at 09:58, Toke Høiland-Jørgensen wrote:
> >>
> >> Aaron Wood writes:
> &g
40 AM Dave Taht wrote:
>
> https://forum.openwrt.org/t/comparative-throughput-testing-including-nat-sqm-wireguard-and-openvpn/44724/44
>
> (H/T to aaron wood)
>
> The post persistently points out that openvpn tends to optimize for
> one direction only. This is in part due to th
On Sat, Mar 28, 2020 at 7:30 AM Sebastian Moeller wrote:
>
> > On Mar 27, 2020, at 22:41, Dave Taht wrote:
> >
> > "put everyone on a schedule"... sigh
>
> Sorry to disagree a bit, but I consider this to be conceptually
> decent advice. If a problem can be avoided by a simple behavioral
On Wed, Mar 25, 2020 at 12:18 PM Dave Taht wrote:
> On Wed, Mar 25, 2020 at 8:58 AM Aaron Wood wrote:
> >
> > One other thought I've had with this, is that the apu2 is multi-core,
> and the i210 is multi-queue.
> >
> > Cake/htb aren't, iirc, set
ve the interfaces, but not the
energy to deal with the troubleshooting. I think I still have an old
WNDR3700 in a box somewhere that I could prep as a backup, but I'd rather
not go through the hassle.
> On Wed, Mar 25, 2020 at 8:58 AM Aaron Wood wrote:
> >
> > One other thought
https://www.seeedstudio.com/ODYSSEY-X86J4105800-p-4445.html
quad-core Celeron J4105 1.5-2.5 GHz x64
8GB Ram
2x i211t intel ethernet controllers
intel 9560 802.11ac (wave2) wifi/bluetooth chipset
intel built-in graphics
onboard ARM Cortex-M0 and RPi & Arduino headers
m.2 and PCIe adapters
<$200
his board.
>
> On Sat, Apr 4, 2020 at 7:47 AM David P. Reed wrote:
> >
> > Thanks! I ordered one just now. In my experience, this company does
> rather neat stuff. Their XMOS based microphone array (ReSpeaker) is really
> useful. What's the state of play in Linux/Ope
>
> The CPE side has met willingness to investigate these issues from early
> on, but it seems that buffer handling is much harder on CPE chipsets
> than on base station chipsets. In particular on 5G. We have had some
> very good results on 4G, but they do not translate to 5G.
>
My own experienc
't determined how long it will take to
thermally throttle, and if bandwidth suffers as a result.
Pretty happy with it so far, though.
On Sun, Apr 26, 2020 at 7:46 PM Dave Taht wrote:
> anyone got around to hacking on this board yet?
>
> On Sat, Apr 4, 2020 at 9:27 AM Aaron Wood wrote:
> &
Those are great results.
I've been thinking for a while that algorithms / techniques like fq-codel
would be great if packaged into library form where they could be utilized
by application-layers. Obviously, not all application-layer queues can
deal with loss like TCP, but for all those that can,
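As a sketch of how small the core control law is when pulled out into library form (a simplified CoDel, not the reference implementation):

```python
import math

class CoDelState:
    """Minimal CoDel drop decision, library-style. target/interval are
    in the same time unit as the timestamps passed to should_drop();
    the usual defaults are 5 ms and 100 ms."""

    def __init__(self, target=0.005, interval=0.100):
        self.target = target
        self.interval = interval
        self.first_above = None   # when sojourn time first exceeded target
        self.dropping = False
        self.count = 0            # drops in the current dropping state
        self.drop_next = 0.0

    def should_drop(self, sojourn, now):
        """True if the item dequeued at `now`, after queueing for
        `sojourn`, should be dropped (or loss-signaled upward)."""
        if sojourn < self.target:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now + self.interval
            return False
        if not self.dropping and now >= self.first_above:
            self.dropping = True
            self.count = 1
            self.drop_next = now + self.interval / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Control law: drop faster the longer the delay persists.
            self.count += 1
            self.drop_next += self.interval / math.sqrt(self.count)
            return True
        return False
```

The real qdisc also carries `count` across dropping episodes and handles ECN marking; this is just the control law, which is the part an application-layer queue would reuse.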
I'm continually frustrated that my cable headend appears to be using DOCSIS
3.1 for downstream, and 3.0 for upstream. Which means my Arris SB8200
isn't using PIE, but the standard FIFO (Gigabit down, 35Mbps up).
Cake to the rescue. With a celeron based router (x86-64), I'm hitting
line-rate with
I'm still surprised at how hard it is to get people to understand that the
problem they're having (especially with real-time video like Zoom) isn't
bandwidth, but jitter and bloat...
On Wed, Mar 24, 2021 at 12:52 PM Jonathan Foulkes
wrote:
> Agreed, we need to be more vocal.
>
> I did look up my
iperf3 isn’t “academic”, but is more focused on scientific computing (ESNet
moves a LOT of CERN data around, on 100Gbps backbones).
But that also skews their usage/needs. Very high throughput bulk transfers
with long durations, over mixed systems. Not as many concerns about
latency, except in th
One of my long concerns with the RRUL test is that the ICMP ping test
portion is not isochronous, and runs at a variable rate based on rtt, which
means that it uses more/less bandwidth as an inverse function of rtt, and
that makes it harder to compare the actual goodput of the tcp streams
running i
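An isochronous probe sidesteps that coupling by scheduling sends from absolute timestamps instead of waiting on replies; a sketch (hypothetical helpers, not what flent/RRUL actually does):

```python
import time

def isochronous_schedule(start, interval, count):
    """Absolute send times t_i = start + i*interval; deriving each send
    time from the start keeps the probe rate independent of RTT, unlike
    'sleep after each reply' probing."""
    return [start + i * interval for i in range(count)]

def run_probe(send, interval=0.02, count=5):
    """Fire send(i) on the absolute schedule, sleeping only the
    remainder so per-iteration overhead doesn't accumulate as drift."""
    start = time.monotonic()
    for i, t in enumerate(isochronous_schedule(start, interval, count)):
        delay = t - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        send(i)
```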
I think the "I Love Lucy" chocolate factory scene is perhaps a good analogy:
https://www.youtube.com/watch?v=WmAwcMNxGqM
The chocolates start to come in too fast, and they can't keep up, but
because they aren't telling the kitchen to slow down, they keep piling up
until it collapses into a mess.
I think one of the big advantages that AQM has is that it doesn't know, or
care, who the flow is. It can't, itself, violate NN concerns because it
has no knowledge with which to do so.
Instead, it punishes the greedy flows that try to use more than their fair
share of the available bandwidth. It
Are these in-flux changes to where the upstream split is why some modems
report DOCSIS 3.1 downstream, but only 3.0 upstream? (and therefore aren't
enabling AQM on the upstream?)
-Aaron
On Tue, Jun 22, 2021 at 4:04 PM Livingood, Jason via Bloat <
bloat@lists.bufferbloat.net> wrote:
> > For DOCS
I'm running an Odyssey from Seeed Studios (celeron J4125 with dual i211),
and it can handle Cake at 1Gbps on a single core (which it needs to,
because OpenWRT's i211 support still has multiple receive queues disabled).
On Tue, Jun 22, 2021 at 12:44 AM Giuseppe De Luca
wrote:
> Also a PC Engines
ems regardless of the US
> mode in use (ie sc-qam (3.0) or ofdma (3.1) upstream), so it should be
> enabled.
>
>
> ------
> *From:* Bloat on behalf of Aaron
> Wood
> *Sent:* Tuesday, Ju
On Tue, Jul 6, 2021 at 7:26 PM Dave Taht wrote:
> On Tue, Jul 6, 2021 at 3:32 PM Aaron Wood wrote:
> >
> > I'm running an Odyssey from Seeed Studios (celeron J4125 with dual
> i211), and it can handle Cake at 1Gbps on a single core (which it needs to,
> because OpenWR
On Mon, Jul 12, 2021 at 1:32 PM Ben Greear wrote:
> UDP is better for getting actual packet latency, for sure. TCP is
> typical-user-experience-latency though,
> so it is also useful.
>
> I'm interested in the test and visualization side of this. If there were
> a way to give engineers
> a good
With the disclaimer that I'm not as strong in statistics and modelling as
I'd like to be:
I think it's not useful to attempt to stochastically model the behavior of
what are actually active (well, reactive) components. The responses of
each piece are deterministic, but the inputs (users) are n
If we see that AQM appears to be not functioning as expected for upstream
connections on DOCSIS3.1, what's the right avenue for getting that
resolved? (and does that only apply to the Comcast-owned, vs.
customer-owned, modems?)
On Sat, Jul 31, 2021 at 10:50 AM Simon Barber wrote:
> Awesome to h
My own experiments with this, in the past (5+ years ago), was that you
absolutely had to use cabled setups for repeatability, but then didn't have
enough randomness in the variability to really test anything that was
problematic. We could create hidden nodes, or arbitrary meshes of devices,
but th
Can't switches send pause frames back over ethernet?
/me googles, and finds:
https://en.wikipedia.org/wiki/Ethernet_flow_control
On Wed, Jan 12, 2022 at 4:57 PM Dave Taht wrote:
> What appeared to be the case was that a ONT had a 50ms buffer at
> 100Mbit and was reconfigured to drive a gig and
The edge of the datacenter, or the edge as in where a building meets the
internet? (either residential or commercial)
On Tue, Feb 1, 2022 at 6:27 AM Dave Taht wrote:
> One of the analogies that went by in this interview with nick mckeown
> was "programmable cables" and the IPU concept.
>
> http
Are you asking what they _should_ be, or what the typical buffering seen in
equipment actually is?
On Wed, Mar 9, 2022 at 9:39 AM Michael Menth wrote:
> Hi all,
>
> I don't question the usefulness of AQMs for buffers - on the contrary.
> But what are up-to-date buffer sizes in networking gears,
I read this earlier in the week, and thought it applied well to describing
how excessive latency causes TCP (cubic, reno, etc) overshoot and collapse
in bufferbloat situations:
https://read.fluxcollective.org/i/98919216/lens-of-the-week
-Aaron
I like the general idea, especially if there was a site-wide controller
module that can do the sort of frequency allocation that network engineers
do in dense AP deployments today: adjacent APs run on different frequency
bands so that they reduce the likelihood of stepping on each others
transmiss
I’ve found that _usually_ I can set cake’s bandwidth limits to 90-95% of
the advertised bandwidth, and everything “just works”. So long as you’re
routinely able to achieve the bandwidth, it tends to work.
I’ve found in my testing over the years (I’ve been a user of fq_codel since
2013) that limit
This is good work! I love reading their posts on scale like this.
It’s wild to me that the Linux kernel has (apparently) never implemented
shrinking the receive window, or handling the case of userspace starting a
large transfer and then just not ever reading it… the latter is less
surprising, I
On Tue, Feb 27, 2024 at 10:52 AM Rich Brown via Bloat <
bloat@lists.bufferbloat.net> wrote:
>
>
> On Feb 27, 2024, at 12:00 PM, bloat-requ...@lists.bufferbloat.net wrote:
>
> On 2/26/2024 6:28 AM, Rich Brown via Bloat wrote:
>
> - Avoid the WAN port's DHCP assigned subnet (what if the ISP uses
> 1