Webinar on LibreQoS June 14th

2024-06-09 Thread Dave Taht


We are doing an hour-long deep-dive webinar on LibreQoS June 14th, courtesy
of the APNIC Academy, with an astounding 196 signups so far. We are pushing
multiple terabits of traffic around the world through it now, not just for
ISPs, but for MDUs, post-VPN processing, and upstream DIY circuits, on
everything from DSL to fiber.

As an open-source, eBPF-based transparent bridge, it takes only a few
minutes to install in-band. You can then pull stats about your network's
behavior that you've never seen before, and apply the best shaping
and bufferbloat-beating code in the world to it.

Sign up here: https://academy.apnic.net/en/events?id=a0BOc00Pgk1MAC -
especially if you have been deploying fq_codel or cake, replacing brittle
policers, etc, etc.

We have a very popular chat too:
https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/

PS: The upcoming v1.5 release of LibreQoS is about 30% faster (for example,
pushing over 60Gbit/s through a $1500 Ryzen box), which is enough for tens of
thousands of customer networks on the other side of it. That release adds the
ability to model and debloat complicated networks 8 hops deep, or along a
troublesome edge, new support for emitting netflow, and quite a few other
useful things. We hope to put it into beta by the end of this month, or
earlier.

Please come to the webinar to see the latest code, live, in production!

https://github.com/LibreQoE/LibreQoS/tree/develop



I am, in general, deliriously happy with the stability and performance of
this stuff, and with the growth of uptake worldwide over the last 6 months.
(Watching Starlink get thoroughly debloated a few months back was also a
real high, though they just used fq and something codel-y, not LibreQoS.)

Y'all are just minutes and minor CAPEX away from a vastly better internet.

-- 
https://www.linkedin.com/feed/update/urn:li:activity:7203400057172180992/
Donations Drive.
Dave Täht CSO, LibreQos


Re: [LibreQoS] transit and peering costs projections

2023-10-15 Thread Tim Burke
Man, I wanna know where you’re getting 100g transit for $4500 a month! Even 
someone as fly-by-night as Cogent wants almost double that, unfortunately.

On Oct 15, 2023, at 07:43, Jim Troutman  wrote:


Transit 1G wholesale in the right DCs is below $500 per port. 10GigE full port 
can be had for around $1k-1.5k/month on long-term deals from multiple sources. 
100g IP transit ports start around $4k.

The cost of transport (dark or wavelength) is generally at least as much as the 
IP transit cost, and usually more in underserved markets.  In the northeast it 
is very hard to get 10GigE wavelengths below $2k/month to any location, and is 
generally closer to $3k.  100g waves are starting around $4k and go up a lot.

Pricing has come down somewhat over time, but not as fast as transit prices. 
Six years ago a 10Gig wave from Maine to Boston would be about $5k/month; 
today it is about $2800.

With the cost of XCs in data centers and transport costs, you generally don’t 
want to go beyond 2x10gigE before jumping to 100.

On Sat, Oct 14, 2023 at 19:02 Dave Taht via LibreQoS 
<libre...@lists.bufferbloat.net> wrote:
This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?

https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php

I believe a gbit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the usa? How about elsewhere?

...

I am under the impression that many IXPs remain very successful, and that
states without them suffer. I also find the concept of doing micro IXPs at
the city level appealing, and now achievable with cheap gear. Finer-grained
cross connects between telco, ISP, and IXP would lower latencies across town
quite hugely...

PS: I hear ARIN is also planning on dropping the price for BGP AS numbers,
and bundling 3 at a time, as of the end of this year.



--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
___
LibreQoS mailing list
libre...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/libreqos


Re: [LibreQoS] transit and peering costs projections

2023-10-15 Thread Jim Troutman
Transit 1G wholesale in the right DCs is below $500 per port. 10GigE full
port can be had for around $1k-1.5k/month on long-term deals from multiple
sources. 100g IP transit ports start around $4k.

The cost of transport (dark or wavelength) is generally at least as much as
the IP transit cost, and usually more in underserved markets.  In the
northeast it is very hard to get 10GigE wavelengths below $2k/month to any
location, and is generally closer to $3k.  100g waves are starting around
$4k and go up a lot.

Pricing has come down somewhat over time, but not as fast as transit
prices. Six years ago a 10Gig wave from Maine to Boston would be about
$5k/month; today it is about $2800.

With the cost of XCs in data centers and transport costs, you generally
don’t want to go beyond 2x10gigE before jumping to 100.

On Sat, Oct 14, 2023 at 19:02 Dave Taht via LibreQoS <
libre...@lists.bufferbloat.net> wrote:

> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> ___
> LibreQoS mailing list
> libre...@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
>


Re: LibreQos

2023-05-13 Thread Dave Taht
On Sat, May 13, 2023 at 12:28 AM Mark Tinka  wrote:
>
>
>
> On 5/12/23 17:59, Dave Taht wrote:
>
> > :blush:
> >
> > We have done a couple podcasts about it, like this one:
> >
> > https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/
> >
> > and have perhaps made a mistake by doing development and support, somewhat
> > too invisibly, in Matrix chat rather than on a web forum, but it has
> > been a highly entertaining way to get a better picture of the real
> > problems caring ISPs have.
> >
> > I see you are in Africa? We have a few ISPs playing with this in kenya...
>
> DM me, please, the ones you are aware of that would be willing to
> share their experiences. I'd like to get them to talk about what they've
> gathered at the upcoming SAFNOG meeting in Lusaka.
>
> We have a fairly large network in Kenya, so would be happy to engage
> with the operators running the LibreQoS there.

I forwarded your info.

Slide 40 here has an anonymized LibreQoS report of observed latencies
in Africa. The RTTs there are severely bimodal (30ms vs 300ms), which
mucks with sch_cake's default assumption of a 100ms RTT.

http://www.taht.net/~d/Misunderstanding_Residential_Bandwidth_Latency.pdf

There are two ways to deal with this. Right now we are recommending the
cake rtt 200ms setting to keep throughput up; the FQ component dominates
for most traffic anyway. With a bit more work we hope to come up with a
way to get more consistent queuing delay. Or we could just wait for more
CDNs and IXPs to deploy there.

/me hides
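
If you want to try the longer RTT target by hand, here is a minimal sketch
of applying it with tc on a single interface. The interface name and
bandwidth are placeholders, and this is a plain root qdisc rather than the
per-subscriber cake instances LibreQoS itself builds:

    # Minimal sketch: attach cake with a 200ms RTT target, as recommended
    # above for severely bimodal / long-path RTTs. Interface and bandwidth
    # are placeholders; adjust to the link being shaped.
    import subprocess

    def shape_with_cake(iface: str, bandwidth: str, rtt: str = "200ms") -> None:
        cmd = [
            "tc", "qdisc", "replace", "dev", iface, "root",
            "cake", "bandwidth", bandwidth, "rtt", rtt, "diffserv4",
        ]
        subprocess.run(cmd, check=True)

    shape_with_cake("eth0", "100Mbit")  # e.g. a 100Mbit plan, rtt 200ms

The longer rtt lets cake tolerate more queue before it signals, trading a
bit of extra delay for keeping the long-RTT flows at full throughput.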

A note about our public plots: We had a lot of people sharing
screenshots, so we added a "klingon mode" that consistently
transliterates the more private data into that language.

Another fun fact: by deploying this stuff, several folks found enough
non-paying clients on their network to pay for the hardware
inside of a month or two.

>
> > We do not know. Presently our work is supported by Equinix's open
> > source program, with four servers in their Dallas DC, and they are
> > 25Gbit ports. Putting together enough dough to get to 100Gbit or
> > finding someone willing to send traffic through more bare metal at
> > that data center or elsewhere is on my mind. In other words, we can
> > easily spin up the ability to L2 route some traffic through a box in
> > their DCs, if only we knew where to find it. :)
> >
> > If you assume linearity to cores (which is a lousy assumption, ok?),
> > 64 Xeon cores could do about 200Gbit, running flat out. I am certain
> > it will not scale linearly and we will hit multiple bottlenecks on a
> > way to that goal.
> >
> > Limits we know about:
> >
> > A) Trying to drive 10s of gbits of realistic traffic through this
> > requires more test clients and servers than we have, or someone with
> > daring and that kind of real traffic in the first place. For example
> > one of our most gung-ho clients has 100Gbit ports, but not anywhere
> > near that amount of inbound traffic. (they are crazy enough to pull
> > git head, try it for a few minutes in production, and then roll back
> > or leave it up)
> >
> > B) A brief test of a 64 core AMD + Nvidia ethernet was severely
> > outperformed by our current choice of a 20 core xeon gold + intel 710
> > or 810 card. It is far more the ethernet card that is the dominating
> > factor. I would kill if I could find one that did a LPM -> CPU
> > mapping... (e.g. instead of a LPM->route mapping, LPM to what cpu to
> > interrupt). We also tried an 80 core arm to inconclusive results early
> > on.
> >
> > Tests of the latest ubuntu release are ongoing. I am not prepared to
> > bless that or release any results yet.
> >
> > C) A single cake instance on one of the more high end Xeons can
> > *almost* push 10Gbit/sec while eating a core.
> >
> > D) Our model is one cake instance per subscriber + the ability to
> > establish trees emulating links further down the chain. One ISP is
> > modeling 10 mmwave hops. Another is just putting in multiple boxes
> > closer to the towers.
> >
> > So in other words, 100s of gbits is achievable today if you throw
> > boxes at it, and more cost effective to do that way. We will of
> > course, keep striving to crack 100gbit native on a single box with
> > multiple cards. It is a nice goal to have.
> >
> > E) In our present, target markets, 10k typical residential subscribers
> > only eat 11Gbit/sec at peak. That is a LOT of the smaller ISPs and
> > networks that fit into that space, so of late we have been focusing
> > more on analytics and polish than pushing more traffic.

Re: LibreQos

2023-05-13 Thread Mark Tinka




On 5/12/23 17:59, Dave Taht wrote:


:blush:

We have done a couple podcasts about it, like this one:

https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/

and have perhaps made a mistake by doing development and support, somewhat
too invisibly, in Matrix chat rather than on a web forum, but it has
been a highly entertaining way to get a better picture of the real
problems caring ISPs have.

I see you are in Africa? We have a few ISPs playing with this in kenya...


DM me, please, the ones you are aware of that would be willing to 
share their experiences. I'd like to get them to talk about what they've 
gathered at the upcoming SAFNOG meeting in Lusaka.


We have a fairly large network in Kenya, so would be happy to engage 
with the operators running the LibreQoS there.




We do not know. Presently our work is supported by Equinix's open
source program, with four servers in their Dallas DC, and they are
25Gbit ports. Putting together enough dough to get to 100Gbit or
finding someone willing to send traffic through more bare metal at
that data center or elsewhere is on my mind. In other words, we can
easily spin up the ability to L2 route some traffic through a box in
their DCs, if only we knew where to find it. :)

If you assume linearity to cores (which is a lousy assumption, ok?),
64 Xeon cores could do about 200Gbit, running flat out. I am certain
it will not scale linearly and we will hit multiple bottlenecks on a
way to that goal.

Limits we know about:

A) Trying to drive 10s of gbits of realistic traffic through this
requires more test clients and servers than we have, or someone with
daring and that kind of real traffic in the first place. For example
one of our most gung-ho clients has 100Gbit ports, but not anywhere
near that amount of inbound traffic. (they are crazy enough to pull
git head, try it for a few minutes in production, and then roll back
or leave it up)

B) A brief test of a 64 core AMD + Nvidia ethernet was severely
outperformed by our current choice of a 20 core xeon gold + intel 710
or 810 card. It is far more the ethernet card that is the dominating
factor. I would kill if I could find one that did a LPM -> CPU
mapping... (e.g. instead of a LPM->route mapping, LPM to what cpu to
interrupt). We also tried an 80 core arm to inconclusive results early
on.

Tests of the latest ubuntu release are ongoing. I am not prepared to
bless that or release any results yet.

C) A single cake instance on one of the more high end Xeons can
*almost* push 10Gbit/sec while eating a core.

D) Our model is one cake instance per subscriber + the ability to
establish trees emulating links further down the chain. One ISP is
modeling 10 mmwave hops. Another is just putting in multiple boxes
closer to the towers.

So in other words, 100s of gbits is achievable today if you throw
boxes at it, and more cost effective to do that way. We will of
course, keep striving to crack 100gbit native on a single box with
multiple cards. It is a nice goal to have.

E) In our present, target markets, 10k typical residential subscribers
only eat 11Gbit/sec at peak. That is a LOT of the smaller ISPs and
networks that fit into that space, so of late we have been focusing
more on analytics and polish than pushing more traffic. Some of our
new R/T analytics break down at 10k cake instances (that is 40 million
fq_codel queues, ok?), and we cannot sample at 10ms rates, falling
back to (presently) 1s conservatively.

We are nearing putting out a v1.4-rc7 which is just features and
polish, you can get a .deb of v1.4-rc6 here:

https://github.com/LibreQoE/LibreQoS/releases/tag/v1.4-rc6

There is an optional, and anonymized reporting facility built into
that. In the last two months, 44404 cake shaped devices shaping
.19Tbits that we know of have come online. Aside from that we have no
idea how many ISPs have picked it up! a best guess would be well over
100k subs at this point.

Putting in libreqos is massively cheaper than upgrading all the cpe to
good queue management, (it takes about 8 minutes to get it going in
monitor mode, but exporting shaping data into it requires glue, and
time) but better cpe remains desirable - especially that the uplink
component of the cpe also do sane shaping natively.

"And dang, it, ISPs of the world, please ship decent wifi!?", because
we can see the wifi going south in many cases from this vantage point
now. In the past year mikrotik in particular has done a nice update to
fq_codel and cake in RouterOS, eero 6s have got quite good, much of
openwifi/openwrt, evenroute  is good...

It feels good, after 14 years of trying to fix the internet, to be
seeing such progress on fixing bufferbloat, and in understanding and
explaining the internet better. Join us...


All sounds very exciting.

I'll share this with some friends at Cisco who are actively looking at 
ways to incorporate such tech. in their routers in response to Q

LibreQos

2023-05-12 Thread Dave Taht
Changing the topic...

On Fri, May 12, 2023 at 7:11 AM Mark Tinka  wrote:
>
>
>
> On 5/12/23 15:03, Dave Taht wrote:
>
> > LibreQoS is free software, working as a bridge; you can plug it in
> > between any two points on your network, and on cheap (350 bucks off of
> > ebay) Xeon Gold hardware it easily cracks 25Gbit/s while shaping, with a
> > goal of cracking 100Gbit/s one day soon.
>
> This is fantastic!

:blush:

We have done a couple podcasts about it, like this one:

https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/

and have perhaps made a mistake by doing development and support, somewhat
too invisibly, in Matrix chat rather than on a web forum, but it has
been a highly entertaining way to get a better picture of the real
problems caring ISPs have.

I see you are in Africa? We have a few ISPs playing with this in Kenya...

>
> I also found your post about it here:
>
> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/a_latency_on_the_internet_update_bufferbloat_sqm/
>
> If you could throw more hardware at it, could it do several 100's of Gbps?

We do not know. Presently our work is supported by Equinix's open
source program, with four servers in their Dallas DC, and they are
25Gbit ports. Putting together enough dough to get to 100Gbit or
finding someone willing to send traffic through more bare metal at
that data center or elsewhere is on my mind. In other words, we can
easily spin up the ability to L2 route some traffic through a box in
their DCs, if only we knew where to find it. :)

If you assume linearity to cores (which is a lousy assumption, ok?),
64 Xeon cores could do about 200Gbit, running flat out. I am certain
it will not scale linearly and we will hit multiple bottlenecks on a
way to that goal.

Limits we know about:

A) Trying to drive 10s of gbits of realistic traffic through this
requires more test clients and servers than we have, or someone with
daring and that kind of real traffic in the first place. For example
one of our most gung-ho clients has 100Gbit ports, but not anywhere
near that amount of inbound traffic. (they are crazy enough to pull
git head, try it for a few minutes in production, and then roll back
or leave it up)

B) A brief test of a 64-core AMD + Nvidia ethernet was severely
outperformed by our current choice of a 20-core Xeon Gold + Intel 710
or 810 card. The ethernet card is by far the dominating factor. I
would kill for one that did an LPM -> CPU mapping... (e.g. instead of
an LPM -> route mapping, LPM to which CPU to interrupt). We also tried
an 80-core ARM early on, with inconclusive results.

Tests of the latest ubuntu release are ongoing. I am not prepared to
bless that or release any results yet.

C) A single cake instance on one of the more high end Xeons can
*almost* push 10Gbit/sec while eating a core.

D) Our model is one cake instance per subscriber + the ability to
establish trees emulating links further down the chain. One ISP is
modeling 10 mmwave hops. Another is just putting in multiple boxes
closer to the towers.
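
To make the tree idea a bit more concrete, here is a toy sketch of such a
model. The key names and structure are purely illustrative, not the actual
network.json schema LibreQoS reads:

    # Toy model: sites/sectors cap the aggregate below them, and each
    # subscriber leaf gets its own cake instance. Illustrative only.
    network = {
        "TowerA": {
            "down_mbps": 1000, "up_mbps": 1000,
            "children": {
                "Sector1": {
                    "down_mbps": 500, "up_mbps": 500,
                    "subscribers": [
                        {"name": "cust-001", "down_mbps": 100, "up_mbps": 20},
                        {"name": "cust-002", "down_mbps": 50, "up_mbps": 10},
                    ],
                },
            },
        },
    }

    def count_cake_instances(node: dict) -> int:
        """One cake instance per subscriber leaf, summed over the tree."""
        total = len(node.get("subscribers", []))
        for child in node.get("children", {}).values():
            total += count_cake_instances(child)
        return total

    print(sum(count_cake_instances(n) for n in network.values()))  # -> 2

Roughly speaking, each interior node maps to a class limiting the emulated
link, and each leaf to a shaped class plus a cake instance for the
subscriber's plan.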

So in other words, 100s of gbits is achievable today if you throw
boxes at it, and more cost effective to do that way. We will of
course, keep striving to crack 100gbit native on a single box with
multiple cards. It is a nice goal to have.

E) In our present target markets, 10k typical residential subscribers
only eat 11Gbit/sec at peak. A LOT of the smaller ISPs and networks fit
into that space, so of late we have been focusing more on analytics and
polish than pushing more traffic. Some of our new R/T analytics break
down at 10k cake instances (that is 40 million fq_codel queues, ok?),
and we cannot sample at 10ms rates, falling back (presently) to a
conservative 1s.
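
The 40 million figure is back-of-the-envelope arithmetic. The sketch below
assumes each cake instance runs diffserv4 (4 tins) with cake's 1024 flow
queues per tin; that per-instance count is my assumption, not a number
reported by LibreQoS:

    # Rough arithmetic behind the "40 million fq_codel queues" remark.
    # Assumed per-instance count: diffserv4 = 4 tins x 1024 flow queues/tin.
    subscribers = 10_000                # one cake instance per subscriber
    queues_per_instance = 4 * 1024      # tins x flow queues per tin
    print(subscribers * queues_per_instance)  # 40,960,000 (~40 million)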

We are nearing putting out a v1.4-rc7, which is just features and
polish; you can get a .deb of v1.4-rc6 here:

https://github.com/LibreQoE/LibreQoS/releases/tag/v1.4-rc6

There is an optional, anonymized reporting facility built into that.
In the last two months, 44,404 cake-shaped devices shaping 0.19Tbit/s
that we know of have come online. Aside from that we have no idea how
many ISPs have picked it up! A best guess would be well over 100k subs
at this point.

Putting in LibreQoS is massively cheaper than upgrading all the CPE to
good queue management (it takes about 8 minutes to get it going in
monitor mode, though exporting shaping data into it requires glue, and
time), but better CPE remains desirable - especially having the uplink
component of the CPE also do sane shaping natively.
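
For a sense of what that glue usually looks like, here is a sketch of
dumping plans from a billing system into a CSV for the shaper to consume.
The input records and the output column names are simplified stand-ins;
match them to the ShapedDevices.csv layout your LibreQoS version actually
expects:

    # Sketch of billing-system -> shaper glue. Column names and input
    # fields are simplified placeholders, not the authoritative format.
    import csv

    def export_shaped_devices(subscribers, path="ShapedDevices.csv"):
        fields = ["Circuit Name", "Device Name", "Parent Node", "IPv4",
                  "Download Max Mbps", "Upload Max Mbps"]
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            for s in subscribers:
                writer.writerow({
                    "Circuit Name": s["name"],
                    "Device Name": s["cpe"],
                    "Parent Node": s["site"],
                    "IPv4": s["ip"],
                    "Download Max Mbps": s["plan_down"],
                    "Upload Max Mbps": s["plan_up"],
                })

    export_shaped_devices([{"name": "cust-001", "cpe": "cpe-001",
                            "site": "TowerA", "ip": "100.64.0.10",
                            "plan_down": 100, "plan_up": 20}])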

"And dang, it, ISPs of the world, please ship decent wifi!?", because
we can see the wifi going south in many cases from this vantage point
now. In the past year mikrotik in particular has done a nice update to
fq_codel and cake in RouterOS, eero 6s have got quite good, much of
openwifi/openwrt, evenroute  is good...

It feels good, after 14 years of trying to fix the internet, to be
seeing such progress on fixing bufferbloat, and in understanding and
explaining the internet better. Join us...

Re: bufferbloat-beating customer shaping via LibreQoS

2022-09-18 Thread Jeremy Austin
Thanks for the shoutout, Norman. Preseem isn’t at 50Gbps in 1U yet, but we
will get there.

I hope more folks listen to Dave, open vs. closed source solutions aside —
AQM makes a shocking amount of difference to quality of experience.

Jeremy



On Sun, Sep 18, 2022 at 2:14 PM Norman Jester  wrote:

>
> > On Sep 18, 2022, at 12:25 PM, Dave Taht  wrote:
> >
> > There's been a huge uptake in interest lately in doing better per
> > device and per customer shaping, especially for
> > ISPs, in the libreQoS.io project, which is leveraging the best ideas
> > bufferbloat project members have had over the
> > past decade (cake, bpf, xdp) to push an x86 middlebox well past the
> > 10Gbit barrier, on sub-2k boxes, with really
> > good stats on backlogs, drops, and ecn marks. I've long primarily
> > tried to get fq_codel and cake running on the CPE (most recently
> > mikrotik), and that's been taking too long.
> >
> > I have no idea to what extent members of this list have interest in
> > this, but if you know of a smaller ISP with bad bufferbloat,
> > please pass that link along? It's got ridiculously easier to set up as
> > a vm of late.
> >
> > There is presently a design discussion going on over here:
> >
> > https://github.com/rchac/LibreQoS/issues/57
> >
> > And by mentioning it here, today, I'm mostly asking what other real
> > life use cases we should try to tackle? What backend tools should we
> > try to integrate with?
> >
> > --
> > FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> > Dave Täht CEO, TekLibre, LLC
>
> Take a look at Preseem as the features it has and graphs are great. WISPs
> need this type of system and would show added interest if it has those
> charts and metrics. The integrations are good also. HubSpot integration is
> a plus so we can pull user data out of it and add it to their HubSpot
> profiles.
>
> --
Jeremy Austin
jhaus...@gmail.com


Re: bufferbloat-beating customer shaping via LibreQoS

2022-09-18 Thread Norman Jester


> On Sep 18, 2022, at 12:25 PM, Dave Taht  wrote:
> 
> There's been a huge uptake in interest lately in doing better per
> device and per customer shaping, especially for
> ISPs, in the libreQoS.io project, which is leveraging the best ideas
> bufferbloat project members have had over the
> past decade (cake, bpf, xdp) to push an x86 middlebox well past the
> 10Gbit barrier, on sub-2k boxes, with really
> good stats on backlogs, drops, and ecn marks. I've long primarily
> tried to get fq_codel and cake running on the CPE (most recently
> mikrotik), and that's been taking too long.
> 
> I have no idea to what extent members of this list have interest in
> this, but if you know of a smaller ISP with bad bufferbloat,
> please pass that link along? It's got ridiculously easier to set up as
> a vm of late.
> 
> There is presently a design discussion going on over here:
> 
> https://github.com/rchac/LibreQoS/issues/57
> 
> And by mentioning it here, today, I'm mostly asking what other real
> life use cases we should try to tackle? What backend tools should we
> try to integrate with?
> 
> -- 
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC

Take a look at Preseem, as the features it has and the graphs are great. WISPs 
need this type of system and would show added interest if it has those charts 
and metrics. The integrations are good also. HubSpot integration is a plus, so 
we can pull user data out of it and add it to their HubSpot profiles.



bufferbloat-beating customer shaping via LibreQoS

2022-09-18 Thread Dave Taht
There's been a huge uptake in interest lately in doing better per-device
and per-customer shaping, especially for ISPs, in the libreQoS.io project,
which is leveraging the best ideas bufferbloat project members have had
over the past decade (cake, bpf, xdp) to push an x86 middlebox well past
the 10Gbit barrier, on sub-$2k boxes, with really good stats on backlogs,
drops, and ECN marks. I've long primarily tried to get fq_codel and cake
running on the CPE (most recently mikrotik), and that's been taking too long.
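
For anyone curious what per-customer shaping with cake looks like at the tc
level, here is a minimal hand-rolled sketch. It is not LibreQoS's actual
code (which does the per-IP steering in XDP/eBPF rather than with tc
filters), and the interface, rates, and addresses are made up:

    # One HTB class per subscriber, capped at the plan rate, with a cake
    # leaf qdisc to keep that subscriber's queue debloated. A u32 filter
    # steers the subscriber's traffic into the class.
    import subprocess

    def tc(*args: str) -> None:
        subprocess.run(["tc", *args], check=True)

    def add_subscriber(iface: str, classid: str, ip: str, rate_mbit: int) -> None:
        tc("class", "add", "dev", iface, "parent", "1:", "classid", classid,
           "htb", "rate", f"{rate_mbit}mbit", "ceil", f"{rate_mbit}mbit")
        tc("qdisc", "add", "dev", iface, "parent", classid, "cake", "diffserv4")
        tc("filter", "add", "dev", iface, "protocol", "ip", "parent", "1:",
           "u32", "match", "ip", "dst", ip, "flowid", classid)

    tc("qdisc", "add", "dev", "eth0", "root", "handle", "1:", "htb")
    add_subscriber("eth0", "1:10", "100.64.0.10/32", 100)

Repeat add_subscriber per customer; LibreQoS generates and maintains
thousands of these (plus the site/backhaul hierarchy) automatically.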

I have no idea to what extent members of this list have interest in
this, but if you know of a smaller ISP with bad bufferbloat,
please pass that link along? It's gotten ridiculously easier to set up
as a VM of late.

There is presently a design discussion going on over here:

https://github.com/rchac/LibreQoS/issues/57

And by mentioning it here, today, I'm mostly asking what other real
life use cases we should try to tackle? What backend tools should we
try to integrate with?

-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC