FYI, I believe the old 150 ms was right at the threshold where a person would start 
to hear the telephony far-end echo. At the roughly 250 ms introduced by geostationary 
satellites, the far-end echo would interfere with speaking.

What is interesting is that the speed of sound in air (at STP) is about 1 ms per foot 
(about 35 cm per millisecond). On a big stage that works out to roughly 60 ms from 
stage left to stage right. The internet transporting sound would be awesome without jitter.
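
A quick back-of-the-envelope check of those numbers, as a small Python sketch (the 343 m/s 
speed of sound for room-temperature air and the 60-foot stage width are the only assumed figures):

    # Rough check of the acoustic-delay comparison above.
    SPEED_OF_SOUND_M_S = 343.0                          # dry air at ~20 C (assumed)

    ms_per_foot = 0.3048 / SPEED_OF_SOUND_M_S * 1000    # ~0.89 ms per foot
    cm_per_ms = SPEED_OF_SOUND_M_S / 10                 # ~34.3 cm per millisecond

    stage_width_ft = 60                                 # "big stage" (assumed)
    stage_delay_ms = stage_width_ft * ms_per_foot       # ~53 ms stage left to right

    print(f"{ms_per_foot:.2f} ms/foot, {cm_per_ms:.1f} cm/ms, "
          f"~{stage_delay_ms:.0f} ms across a {stage_width_ft} ft stage")

So a network path with a few tens of milliseconds of steady latency is already in the same 
ballpark as the acoustics of a large stage; it is the jitter and the bloat-induced spikes 
that break the comparison.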

Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
[email protected]
781-799-0233 (in Honolulu)



> On Sep 27, 2022, at 11:54 PM, Sebastian Moeller <[email protected]> wrote:
> 
> Hi Gene,
> 
>> On Sep 28, 2022, at 00:46, Eugene Y Chang <[email protected]> wrote:
>> 
>> Sebastian,
>> Good to know. I haven’t had time to follow all the work going on globally.
>> I was only commenting on how the ISP support team behaves.
>> How do we bring this better attitude to the US?
> 
>       [SM] Good question. In the EU it was actually an official EU 
> regulation by the European Council and the Parliament 
> (https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32015R2120&from=de)
>  that forms the basis for the national regulators to act.
> 
>> Sadly the FCC seems stuck accommodating the big ISPs.
> 
>       [SM] Similarly, in Germany the regulator seems to see itself both as 
> a representative of the end-user and at the same time as a partner of the ISPs, 
> and since ISPs have better communication channels to the regulator they also 
> often seem to accommodate the big ISPs. However, if the relevant law is clear 
> and unambiguous, the regulator acts according to it.
> Not sure though whether "go through the politicians" to improve the 
> directives of the FCC is a better approach given the state of USian politics. 
> However, it should be conceptually easy to convince politicians of the issue; 
> it is not as if they do not use the internet themselves, and they might be open to 
> a demonstration (nicely balanced between the two sides of the aisle to avoid 
> making this a partisan issue).
> 
>> Of course, we want the FCC, the ISPs, and the networking world to go beyond 
>> speedtest.
> 
>       [SM] +1; for all my praise of the local regulator's actions, they also 
> have dropped the ball on acceptable latency; they just defined the minimum 
> internet access people living in Germany are entitled to by law (something 
> like 10/1.7 Mbps but up to 150 ms latency, all measured against the regulator's 
> testing systems in the internet). The rates, while not great, seem OK (as an 
> absolute floor), but 150 ms latency? What were they thinking? I bet this comes 
> from the ITU's old characterization of mouth-to-ear latencies <= 150 ms being 
> OK, ignoring that mouth-to-ear delay contains more than pure network delay, and that 
> if two such users actually try a VoIP call they end up with 300 ms delay, 
> which is deeply into awkward territory IIRC.
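
To put rough numbers on that last point, a small sketch (the 150 ms value is the regulatory 
floor quoted above; the 25 ms allowance for capture, encoding, jitter buffering and playout 
is an assumed illustrative figure):

    # Why a 150 ms per-access allowance blows an ITU-style mouth-to-ear budget.
    network_one_way_ms = 150    # regulatory latency floor per access, as quoted above
    processing_ms = 25          # capture + encode + jitter buffer + playout (assumed)

    # Two such users calling each other, each up to 150 ms from "the internet":
    mouth_to_ear_ms = 2 * network_one_way_ms + processing_ms
    print(f"worst-case mouth-to-ear: {mouth_to_ear_ms} ms "
          f"(vs. the ~150 ms comfort threshold)")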
> 
> Regards
>       Sebastian
> 
> 
>> 
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> [email protected]
>> 781-799-0233 (in Honolulu)
>> 
>> 
>> 
>>> On Sep 26, 2022, at 9:09 PM, Sebastian Moeller <[email protected]> wrote:
>>> 
>>> Hi Gene,
>>> 
>>> 
>>>> On Sep 27, 2022, at 05:50, Eugene Y Chang <[email protected]> wrote:
>>>> 
>>>> Of course…. But the ISP’s maxim is "don’t believe the customer’s speedtest 
>>>> numbers if they were not measured against the ISP’s own host".
>>> 
>>>     [SM] In an act of reasonable regulation, the EU gave national 
>>> regulatory agencies the right to define and set up national "speed tests" 
>>> (located outside the access ISPs' networks) which ISPs effectively need 
>>> to accept. In Germany the local NRA (Bundes-Netz-Agentur, BNetzA) created a 
>>> speedtest application against servers operated on its behalf and will, if 
>>> a customer demonstrates the ISP falling short of the contracted rates 
>>> (according to somewhat complicated definition and measurement rules), 
>>> convince ISPs to follow the law and either release customers from their 
>>> contracts immediately or lower the price in proportion to the 
>>> under-fulfillment. (Unfortunately all related official web pages are in 
>>> German only.)
>>>     This put a stop to the practice of gaming speedtests, like DOCSIS ISPs 
>>> measuring against an in-segment speedtest server, which conveniently hides whether a 
>>> segment's uplink is congested... Not all ISPs gamed their speedtests, and 
>>> even the in-segment speedtest can be justified for some measurements (e.g. 
>>> when trying to figure out whether congestion is in-segment or 
>>> out-of-segment), but the temptation must have been large not to set up the 
>>> most objective speedtest. (We have a saying along the lines of "making the 
>>> goat your gardener", which is generally considered a sub-optimal approach.)
>>> 
>>> Regards
>>>     Sebastian
>>> 
>>> 
>>>> 
>>>> 
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> [email protected]
>>>> 781-799-0233 (in Honolulu)
>>>> 
>>>> 
>>>> 
>>>>> On Sep 26, 2022, at 11:44 AM, Bruce Perens <[email protected]> wrote:
>>>>> 
>>>>> That's a good maxim: Don't believe a speed test that is hosted by your 
>>>>> own ISP.
>>>>> 
>>>>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink 
>>>>> <[email protected]> wrote:
>>>>> Thank you for the dialog.
>>>>> This discussion with regard to Starlink is interesting, as it confirms my 
>>>>> guesses about the gap between Starlink’s overly simplified, overly 
>>>>> optimistic marketing and the reality as they acquire subscribers.
>>>>> 
>>>>> I am actually interested in a more perverse issue. I am seeing latency 
>>>>> and bufferbloat as a consequence of significant under-provisioning. It 
>>>>> doesn’t matter that the ISP is selling a fiber drop if parts of their 
>>>>> network are under-provisioned. Two endpoints can be less than five miles 
>>>>> apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max 
>>>>> latency was 230+ ms. The pattern I see suggests digital redlining: the 
>>>>> older communities appear to have much more severe under-provisioning.
>>>>> 
>>>>> Another observation: running a speedtest appears to go from the edge of the 
>>>>> network over layer 2 to the speedtest host operated by the ISP. Yup, that 
>>>>> bypasses the (suspected overloaded) routers.
>>>>> 
>>>>> Anyway, just observing.
>>>>> 
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> [email protected]
>>>>> 781-799-0233 (in Honolulu)
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <[email protected]> wrote:
>>>>>> 
>>>>>> Hi Gene,
>>>>>> 
>>>>>> 
>>>>>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <[email protected]> wrote:
>>>>>>> 
>>>>>>> Comments inline below.
>>>>>>> 
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> [email protected]
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <[email protected]> 
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>> Hi Eugene,
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink 
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>> 
>>>>>>>>> Ok, we are getting into the details. I agree.
>>>>>>>>> 
>>>>>>>>> Every node in the path has to implement this to be effective.
>>>>>>>> 
>>>>>>>>        Amazingly, the biggest bang for the buck comes from fixing 
>>>>>>>> those nodes that actually contain a network path's bottleneck. Often 
>>>>>>>> these are pretty stable. So yes, for fully guaranteed service quality 
>>>>>>>> all nodes would need to participate, but for improving things 
>>>>>>>> noticeably it is sufficient to improve the usual bottlenecks; e.g. for 
>>>>>>>> many internet access links the home gateway is a decent point to 
>>>>>>>> implement better buffer management. (In short, the problem is 
>>>>>>>> over-sized and under-managed buffers, and one of the best solutions is 
>>>>>>>> better/smarter buffer management.)
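
For illustration, a minimal sketch of what "better buffer management at the home gateway" 
can look like on a Linux-based router: replace the default FIFO on the WAN interface with 
CAKE, shaped slightly below the link rate so the queue forms where it is actually managed. 
The interface name "eth0" and the 95 Mbit/s rate are placeholders, and it assumes root 
privileges plus a kernel/iproute2 with cake support:

    # Minimal sketch: put CAKE on the WAN side of a Linux home gateway,
    # shaped a bit below the contracted rate so the bottleneck queue sits here.
    # "eth0" and "95mbit" are placeholders; requires root and cake support.
    import subprocess

    WAN_IF = "eth0"
    EGRESS_RATE = "95mbit"

    subprocess.run(
        ["tc", "qdisc", "replace", "dev", WAN_IF, "root",
         "cake", "bandwidth", EGRESS_RATE],
        check=True,
    )

    # Show the qdisc statistics to confirm the change took effect.
    stats = subprocess.run(["tc", "-s", "qdisc", "show", "dev", WAN_IF],
                           capture_output=True, text=True, check=True)
    print(stats.stdout)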
>>>>>>>> 
>>>>>>> 
>>>>>>> This is not completely true.
>>>>>> 
>>>>>>  [SM] You are likely right, trying to summarize things leads to 
>>>>>> partially incorrect generalizations.
>>>>>> 
>>>>>> 
>>>>>>> Say the bottleneck is at node N. During the period of congestion, the 
>>>>>>> upstream node N-1 will have to buffer. When node N recovers, the 
>>>>>>> bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. 
>>>>>>> etc.  Making node N better will reduce the extent of the backup at N-1, 
>>>>>>> but N-1 should implement the better code.
>>>>>> 
>>>>>>  [SM] It is the node that builds up the queue that profits most from 
>>>>>> better queue management... (Again I generalize: the node with the queue 
>>>>>> itself probably does not care all that much, but the endpoints will 
>>>>>> profit if the queue-experiencing node deals with that queue more 
>>>>>> gracefully.)
>>>>>> 
>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> 
>>>>>>>>> In fact, every node in the path has to have the same prioritization 
>>>>>>>>> or the scheme becomes ineffective.
>>>>>>>> 
>>>>>>>>        Yes and no; one of the clearest winners has been flow queueing, 
>>>>>>>> IMHO not because it is the optimal capacity-sharing scheme, but 
>>>>>>>> because it is the least pessimal scheme, allowing all flows (or none) 
>>>>>>>> to make forward progress. You can interpret that as a scheme in which flows 
>>>>>>>> below their capacity share are prioritized, but I am not sure that is 
>>>>>>>> the best way to look at these things.
>>>>>>> 
>>>>>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>>>> 
>>>>>>  [SM] Yes, but it turned out that even with non-cooperating ISPs there is a 
>>>>>> lot end-users can do unilaterally on their side to improve both ingress 
>>>>>> and egress congestion. Admittedly, ingress congestion especially would be 
>>>>>> even better handled with the cooperation of the ISP.
>>>>>> 
>>>>>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix 
>>>>>>> this is to get the unwashed public to care. Then they can say “we don’t 
>>>>>>> care about the technical issues, just fix it.” Until then …..
>>>>>> 
>>>>>>  [SM] Well, we do this one home network at a time (not because that is 
>>>>>> efficient or ideal, but simply because it is possible). Maybe, if you 
>>>>>> have not done so already, try OpenWrt with sqm-scripts (and maybe 
>>>>>> cake-autorate in addition) on your home internet access link for, say, a 
>>>>>> week and let us know if/how your experience changed?
>>>>>> 
>>>>>> Regards
>>>>>>  Sebastian
>>>>>> 
>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> 
>>>>>>>> Regards
>>>>>>>>        Sebastian
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Gene
>>>>>>>>> ----------------------------------------------
>>>>>>>>> Eugene Chang
>>>>>>>>> IEEE Senior Life Member
>>>>>>>>> [email protected]
>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <[email protected]> wrote:
>>>>>>>>>> 
>>>>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>>>> 
>>>>>>>>>> In practice, large data transfers are less sensitive to latency than 
>>>>>>>>>> smaller data transfers (e.g. downloading a CD image vs. a video 
>>>>>>>>>> conference), so software can ensure better fairness by preventing a 
>>>>>>>>>> bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>>>>> 
>>>>>>>>>> (the example below is not completely accurate, but I think it gets 
>>>>>>>>>> the point across)
>>>>>>>>>> 
>>>>>>>>>> When buffers become excessively large, you have the situation where a 
>>>>>>>>>> video call is going to generate a small amount of data at a regular 
>>>>>>>>>> interval, but a bulk data transfer is able to dump a huge amount of 
>>>>>>>>>> data into the buffer instantly.
>>>>>>>>>> 
>>>>>>>>>> If you just do FIFO, then you get a small chunk of video call, then 
>>>>>>>>>> several seconds worth of CD transfer, followed by the next small 
>>>>>>>>>> chunk of the video call.
>>>>>>>>>> 
>>>>>>>>>> But the software can prevent the one app from hogging so much of the 
>>>>>>>>>> connection and let the chunk of video call in sooner, avoiding the 
>>>>>>>>>> impact on the real-time traffic. Historically this has required the 
>>>>>>>>>> admin to classify all traffic and configure equipment to implement 
>>>>>>>>>> different treatment based on the classification (and this requires 
>>>>>>>>>> trust in the classification process). The bufferbloat team has 
>>>>>>>>>> developed options (fq_codel and cake) that can ensure fairness 
>>>>>>>>>> between applications/servers with little or no configuration, and no 
>>>>>>>>>> trust in other systems to properly classify their traffic.
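
A toy model of that FIFO-versus-flow-queueing point (not fq_codel itself, just the scheduling 
idea): a bulk flow dumps a 20-packet burst into the queue ahead of a single small voice packet, 
and a per-flow round-robin drain still gets the voice packet out near the front. The 10 Mbit/s 
bottleneck rate and the packet sizes are assumed:

    # Toy comparison: FIFO vs. per-flow round-robin at an assumed 10 Mbit/s bottleneck.
    from collections import deque

    LINK_BYTES_PER_MS = 1250                              # 10 Mbit/s
    arrivals = [("bulk", 1500)] * 20 + [("voice", 200)]   # burst, then one voice packet

    def first_departure_fifo(pkts):
        t, first = 0.0, {}
        for flow, size in pkts:
            t += size / LINK_BYTES_PER_MS
            first.setdefault(flow, round(t, 2))           # when each flow first gets out
        return first

    def first_departure_flow_queued(pkts):
        queues = {}
        for flow, size in pkts:
            queues.setdefault(flow, deque()).append(size)
        t, first = 0.0, {}
        while any(queues.values()):
            for flow, q in queues.items():                # one packet per flow per round
                if q:
                    t += q.popleft() / LINK_BYTES_PER_MS
                    first.setdefault(flow, round(t, 2))
        return first

    print("FIFO       :", first_departure_fifo(arrivals))
    print("flow-queued:", first_departure_flow_queued(arrivals))

In this toy case the voice packet waits roughly 24 ms behind the burst with FIFO, but leaves 
after about 1.4 ms with per-flow scheduling, with no classification and no trust in anyone's 
markings.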
>>>>>>>>>> 
>>>>>>>>>> The one thing that cake needs to work really well is to know what 
>>>>>>>>>> the available data rate is. With Starlink, this changes 
>>>>>>>>>> frequently, and cake integrated into the Starlink dish/router 
>>>>>>>>>> software would be far better than anything that can be done 
>>>>>>>>>> externally, as the rate changes could be fed directly into the settings 
>>>>>>>>>> (currently they are only indirectly detected).
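
A crude sketch of the "feed the rate changes into the settings" idea from outside the dish, 
roughly the job cake-autorate does today: periodically re-tell cake what bandwidth to shape 
to. The interface name, the 5-second interval and get_capacity_mbit() are all stand-ins; a 
dish-integrated version would take the real rate from the Starlink firmware instead of 
estimating it:

    # Crude external rate-feeding loop for cake. "eth0", the interval and the
    # capacity estimator are placeholders; assumes cake is already installed on
    # the interface and that this runs as root on a box in the path.
    import subprocess
    import time

    WAN_IF = "eth0"

    def get_capacity_mbit() -> float:
        # Stand-in: in practice this would come from latency/load probing or,
        # ideally, directly from the dish/router software.
        return 50.0

    while True:
        rate = max(get_capacity_mbit() * 0.9, 1.0)        # shape ~10% below the estimate
        subprocess.run(["tc", "qdisc", "change", "dev", WAN_IF, "root",
                        "cake", "bandwidth", f"{rate:.0f}mbit"], check=True)
        time.sleep(5)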
>>>>>>>>>> 
>>>>>>>>>> David Lang
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>>>> 
>>>>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. 
>>>>>>>>>>> Bufferbloat grows when there are (1) periods of low or no bandwidth 
>>>>>>>>>>> or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>>>> 
>>>>>>>>>>> If I understand this correctly, just a software update cannot make 
>>>>>>>>>>> bufferbloat go away. It might improve the speed of recovery (e.g. 
>>>>>>>>>>> throw away all time-sensitive UDP messages).
>>>>>>>>>>> 
>>>>>>>>>>> Gene
>>>>>>>>>>> ----------------------------------------------
>>>>>>>>>>> Eugene Chang
>>>>>>>>>>> IEEE Senior Life Member
>>>>>>>>>>> [email protected]
>>>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>>>> 
>>>>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say 
>>>>>>>>>>>> Scientists
>>>>>>>>>>>> 
>>>>>>>>>>>> The problem is not availability: Starlink works where nothing but 
>>>>>>>>>>>> another satellite network would. It's not bandwidth, although 
>>>>>>>>>>>> others have questions about sustaining bandwidth as the customer 
>>>>>>>>>>>> base grows. It's latency and jitter. As load increases, latency, 
>>>>>>>>>>>> the time it takes for a packet to get through, increases more than 
>>>>>>>>>>>> it should. The scientists who have fought bufferbloat, a major 
>>>>>>>>>>>> cause of latency on the internet, know why. SpaceX needs to 
>>>>>>>>>>>> upgrade their system to use the scientists' Open Source 
>>>>>>>>>>>> modifications to Linux to fight bufferbloat, and thus reduce 
>>>>>>>>>>>> latency. This is mostly just using a newer version, but there are 
>>>>>>>>>>>> some tunable parameters. Jitter is a change in the speed of 
>>>>>>>>>>>> getting a packet through the network during a connection, which is 
>>>>>>>>>>>> inevitable in satellite networks, but will be improved by making 
>>>>>>>>>>>> use of the bufferbloat-fighting software, and probably with the 
>>>>>>>>>>>> addition of more satellites.
>>>>>>>>>>>> 
>>>>>>>>>>>> "We've done all of the work; SpaceX just needs to adopt it by 
>>>>>>>>>>>> upgrading their software," said scientist Dave Taht. Jim Gettys, 
>>>>>>>>>>>> Taht's collaborator and creator of the X Window System, chimed in: 
>>>>>>>>>>>> <fill in here please>
>>>>>>>>>>>> Open Source luminary Bruce Perens said: "Sometimes Starlink's 
>>>>>>>>>>>> latency and jitter make it inadequate to remote-control my ham 
>>>>>>>>>>>> radio station. The military is experimenting with remote control 
>>>>>>>>>>>> of vehicles on the battlefield and other applications that can be 
>>>>>>>>>>>> demonstrated, but these won't happen at scale without adoption of 
>>>>>>>>>>>> bufferbloat-fighting strategies."
>>>>>>>>>>>> 
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang 
>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>> The key issue is most people don’t understand why latency matters. 
>>>>>>>>>>>> They don’t see it or feel its impact.
>>>>>>>>>>>> 
>>>>>>>>>>>> First, we have to help people see the symptoms of latency and how 
>>>>>>>>>>>> it impacts something they care about.
>>>>>>>>>>>> - Gamers care, but most people may think gaming is frivolous.
>>>>>>>>>>>> - Musicians care, but that is mostly for a hobby.
>>>>>>>>>>>> - Businesses should care because of productivity, but they don’t know 
>>>>>>>>>>>> how to “see” the impact.
>>>>>>>>>>>> 
>>>>>>>>>>>> Second, there needs to be an “OMG, I have been seeing the effects of 
>>>>>>>>>>>> latency all this time and never knew it! I was being shafted.” moment. 
>>>>>>>>>>>> Once you have this awakening, you can get all the press you want 
>>>>>>>>>>>> for free.
>>>>>>>>>>>> 
>>>>>>>>>>>> Most of the time when business apps are developed, “we” hide the 
>>>>>>>>>>>> impact of poor performance (aka latency), or it is kept out of the 
>>>>>>>>>>>> discussion because the developers don’t have a way to fix the 
>>>>>>>>>>>> latency. Maybe businesses don’t care because any employees 
>>>>>>>>>>>> affected are just considered poor performers. (In bad economic 
>>>>>>>>>>>> times, the poor performers are just laid off.) For employees, if 
>>>>>>>>>>>> they happen to be at a location with bad latency, they don’t know 
>>>>>>>>>>>> that latency is hurting them. Unfair, but most people don’t know 
>>>>>>>>>>>> the issue is latency.
>>>>>>>>>>>> 
>>>>>>>>>>>> Talking about and explaining why latency is bad is not as effective as 
>>>>>>>>>>>> showing why latency is bad. Showing has to be done with something that 
>>>>>>>>>>>> has a personal impact.
>>>>>>>>>>>> 
>>>>>>>>>>>> Gene
>>>>>>>>>>>> -----------------------------------
>>>>>>>>>>>> Eugene Chang
>>>>>>>>>>>> [email protected]
>>>>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink 
>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> If you want to get attention, you can get it for free. I can 
>>>>>>>>>>>>> place articles with various press outlets if there is something 
>>>>>>>>>>>>> interesting to say. I did all of this through the evangelism of Open 
>>>>>>>>>>>>> Source. All we need to do is write, sign, and publish a 
>>>>>>>>>>>>> statement. What they actually write is less relevant if they 
>>>>>>>>>>>>> publish a link to our statement.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Right now I am concerned that Starlink latency and jitter are 
>>>>>>>>>>>>> going to be a problem even for remote-controlling my ham station. 
>>>>>>>>>>>>> The US military is interested in doing much more, which they have 
>>>>>>>>>>>>> demonstrated, but I don't see that happening at scale without some 
>>>>>>>>>>>>> technical work on the network. Being able to say this isn't ready 
>>>>>>>>>>>>> for the government's application would be an attention-getter.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Bruce
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink 
>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half 
>>>>>>>>>>>>> page
>>>>>>>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, 
>>>>>>>>>>>>> would
>>>>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <[email protected]> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The awareness & understanding of latency & its impact on QoE is 
>>>>>>>>>>>>>>> nearly unknown among reporters. IMO maybe there should be some 
>>>>>>>>>>>>>>> kind of background briefings for reporters - maybe like a 
>>>>>>>>>>>>>>> simple YouTube video explainer that is short & high level & 
>>>>>>>>>>>>>>> visual? Otherwise reporters will just continue to focus on what 
>>>>>>>>>>>>>>> they know...
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> That's a great idea. I have visions of crashing the Washington
>>>>>>>>>>>>>> correspondents' dinner, but perhaps
>>>>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via 
>>>>>>>>>>>>>>> Starlink" <[email protected] on behalf of 
>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>>>>> meaning of the huge latencies for Starlink under load.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> FQ World Domination pending: 
>>>>>>>>>>>>>> https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> --
>>>>>>>>>>>>> FQ World Domination pending: 
>>>>>>>>>>>>> https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> --
>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Bruce Perens K6BP
>>>> 
>>> 
>> 
> 


_______________________________________________
Starlink mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/starlink
