Re: [Starlink] The "reasons" that bufferbloat isn't a problem

2024-05-06 Thread Eugene Y Chang via Starlink
Dave,
We just can’t represent that we have the total solution.
We need to show the problem can be reduced.
We need to show that latency is a significant negative phenomenon.
Take out one contributor, then sic the users on the next contributor.

If we expect to solve the whole problem in one step, we end up where we are and 
effectively say the problem is too complex to solve.


Gene
--
Eugene Chang
IEEE Life Senior Member
IEEE Communications Society & Signal Processing Society,
Hawaii Chapter Chair
IEEE Life Member Affinity Group Hawaii Chair
IEEE Entrepreneurship, Mentor
eugene.ch...@ieee.org
m 781-799-0233 (in Honolulu)




Re: [Starlink] The "reasons" that bufferbloat isn't a problem

2024-05-06 Thread Eugene Y Chang via Starlink
Rich,
Thanks for the recap in your email.
I have seen all of those bits.

I will help with the marketing magic needed.
We need a team of smart people engaged to help vouch for the technical 
integrity.

We need a simple case (call it a special case, if you must) that shows the 
problem can be fixed.
Never mind if it is not a universal fix.
We only need to show one happy, very visible community.
Give me something to work with that we can defend against the do-nothing 
reasons you list.

It sounds like you have written off this challenge. Don’t do that. Help give me the 
tools to push this to the next level.
An energetic, vocal community is very valuable. They aren’t much help if we 
want to debate the technology, but we shouldn’t care about that. We just want to win adoption.

Gene
--
Eugene Chang
IEEE Life Senior Member
IEEE Communications Society & Signal Processing Society,
Hawaii Chapter Chair
IEEE Life Member Affinity Group Hawaii Chair
IEEE Entrepreneurship, Mentor
eugene.ch...@ieee.org
m 781-799-0233 (in Honolulu)




Re: [Starlink] The "reasons" that bufferbloat isn't a problem

2024-05-06 Thread Rich Brown via Starlink
Thanks! I just posted to: 
https://randomneuronsfiring.com/all-the-reasons-that-bufferbloat-isnt-a-problem/
 

It has mild edits from the original to address a broader audience. Also posted 
to the bloat list.

Rich

> On May 6, 2024, at 3:05 PM, Frantisek Borsik  
> wrote:
> 
> Hey Rich,
> 
> This was a really great trip down memory lane.
> 
> Could you please publish it somewhere, like on your blog?
> 
> Would be great to share it with the world!
> 
> Greetings from Prague.
> 
> All the best,
> 
> Frank
> 
> Frantisek (Frank) Borsik
> 



Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread David Fernández via Starlink
" there is not a widely accepted standard for evaluating video quality (at
least not one of which I’m aware"

What about ITU-R BT.500? https://www.itu.int/rec/R-REC-BT.500
Well, AFAIK, Netflix invented VMAF because ITU methods are very expensive
to implement and not automated, and PSNR was not good enough.
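
For anyone who wants to try an automated metric themselves, here is a minimal
sketch (my illustration, not something from this thread) that shells out to
ffmpeg's libvmaf filter to score a distorted encode against a reference. It
assumes an ffmpeg build with libvmaf enabled; the file names are placeholders.

import subprocess

def vmaf_score(distorted: str, reference: str) -> None:
    """Run ffmpeg's libvmaf filter: the first input is the encoded/distorted
    video, the second is the pristine reference. The pooled VMAF score is
    printed in ffmpeg's log output."""
    subprocess.run(
        ["ffmpeg", "-hide_banner",
         "-i", distorted,       # compressed encode under test
         "-i", reference,       # reference video
         "-lavfi", "libvmaf",   # compute VMAF
         "-f", "null", "-"],    # decode and discard; we only want the score
        check=True,
    )

# Example with placeholder file names:
# vmaf_score("encode_1800kbps.mp4", "reference_4k.mp4")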

"I have no doubt that there exist today and will exist even more so in the
future superior compression that could lower the bitrate needed"
Yes, 25 Mbit/s is for HEVC (H.265), but the successor H.266 (VVC) is
already here and it reduces the data rate required by ~20%, but it seems
that Netflix may prefer AV1, which is between HEVC and VVC in terms of
performance.

Regards,

David F.


Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread Colin_Higbie via Starlink
Nathan,

While you hit the point in your second paragraph, namely that Apple REQUIRES 
25Mbps (as do others of the major streaming services, including Netflix today), 
your first paragraph misses it. It doesn’t matter what is POSSIBLE (unless you 
have the ability to persuade all streaming services to implement those 
technologies and ensure they work for the lion’s share of installed end-user 
equipment and 4K HDR streams, in which case, well done and I would agree that a 
lower bitrate is sufficient). The ONLY factor that matters in terms of required 
bandwidth to be considered a fully capable ISP service is what the market 
demands for the mainstream Internet services. That is 25Mbps.

As the article you linked to points out, those lower bitrates are NOT for 4K 
HDR (10-bit color depth per pixel). For those, even in the authors’ chosen 
examples, and possibly only at 8-bit color (not clear), the article claims to 
only get down to a low of about 17Mbps for the highest quality. I’ve seen other 
reports that say anything below 20Mbps will occasionally fail on particular 
complex scenes that don’t compress well. Add a little bit of overhead or assume 
some additional traffic (an important consideration, given the raison d’être of 
this group – reduce latency under load from multiple streams), and you’re back 
to 25Mbps on needed bandwidth to support multiple concurrent activities.
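
To make the headroom arithmetic concrete, here is a small illustrative sketch 
(the per-activity bitrates are my round-number assumptions, not measurements 
from this thread):

# Illustrative only: rough concurrent household load in Mbit/s.
household_load_mbps = {
    "4K HDR stream (complex scenes)": 20.0,
    "video call": 3.0,
    "web browsing / background sync": 2.0,
}

total = sum(household_load_mbps.values())
for activity, rate in household_load_mbps.items():
    print(f"{activity:35s} {rate:5.1f} Mbit/s")
print(f"{'total needed':35s} {total:5.1f} Mbit/s")  # ~25 Mbit/s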

While I concede there is not a widely accepted standard for evaluating video 
quality (at least not one of which I’m aware), I dislike that the Y axis (Quality) 
on their graphs has no metric, especially without a definition of how they 
define quality – is it based on lost data, % of pixels expressing compression 
artifacts, pixel color drift, or something else they created for the purpose of 
making their case? I would say that NONE of the photos shown constitute a good 
or excellent quality level, as all show significant compression artifacts at 
the high-contrast boundaries. These are distinct from natural focal problems 
with analog systems that are not contrast-dependent. Further, these all appear 
to be relatively static scenes with just a few small moving objects – the kinds 
of frames and scenes that compress extremely well. Again, this is why we must 
look to the market to determine what it needs, not individual proposals.

The article also acknowledges that the graph points represent the average, 
meaning some frames are better and some are worse. This is bad because with any 
lossy compression system, there is a (subjective) “good enough” level, where 
values above that don’t add much, but frames that are worse will stand out as 
bad. You can’t just watch the average – you’re forced to also watch the bad 
frames. In real-world usage, these will be the frames during high-speed image 
changes (explosions in action movies or a fast-panning scene), often the times 
when preserving fidelity is most important (e.g., you lose track of the 
football during the fast pan downfield, or you really want to see the detail in 
the X-wing fighters as the dogfight leads to explosions around them).

Further, that article is really targeting mobile usage for cellular bandwidth, 
where many of these viewing issues are fundamentally different from the 65” 
living room TV. The mobile display may offer 120Hz, but showing a movie or show 
at 30Hz (except for some sports) is still the standard.

Now, to be fair, I have no doubt that there exist today and will exist even 
more so in the future superior compression that could lower the bitrate needed 
at any given resolution and quality level. The one described in the article 
could be an important step in that direction. No doubt Netflix already has 
multiple economic incentives to reduce required bandwidth – their own bandwidth 
costs, which are a substantial % of their total operating costs, access to 
customers who can’t get 25Mbps connections, competition from other streaming 
services if they can claim that their streams are less affected by what others 
in the house are doing or are higher quality at any given bandwidth, etc. As 
noted above, however, that is all moot unless all of the major streamers adopt 
comparable bandwidth-reduction technologies and ALSO all major existing 
home equipment can support them today (i.e., without requiring people to replace 
their TVs or STBs). Absent that, it’s just a technical novelty that may or 
may not take hold, like Betamax videotapes or HD-DVD.

On the contrary, what we see today is that the major streaming services REQUIRE 
users to have 25Mbps connections in order to offer the 4K HDR streams. Yes, 
users can lie and may find they can watch most of the 4K content they wish with 
only 20Mbps or in some cases 15Mbps connections, but that’s clearly not a 
reason why an ISP should say, “We don’t need to offer 25Mbps for our customers 
to be able to access any major streaming service.”

Cheers,
Colin


Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread Nathan Owens via Starlink
You really don’t need 25 Mbps for decent 4K quality - it depends on the
content. Netflix has some encodes that go down to 1.8 Mbps with a very high
VMAF:
https://netflixtechblog.com/optimized-shot-based-encodes-for-4k-now-streaming-47b516b10bbb

Apple TV has the highest bitrate encodes of any mainstream streaming
service, and those do top out at ~25Mbps. Could they be more efficient?
Probably…


Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread David Fernández via Starlink
For " I dont know what MPEG codec is it, at what mbit/s speed" you may
check this:
https://lists.bufferbloat.net/pipermail/starlink/2024-April/002706.html



Re: [Starlink] The "reasons" that bufferbloat isn't a problem

2024-05-06 Thread Dave Collier-Brown via Starlink

I think the gamer experience of doing simple (over-simple) tests with CAKE is a 
booby-trap. This discussion suggests that the real performance of their link is 
horrid, and that they turn off CAKE to get what they think is full 
performance... but isn't.

https://www.reddit.com/r/HomeNetworking/comments/174k0ko/low_latency_gaming_and_bufferbloat/#:~:text=If%20there's%20any%20chance%20that,out%20any%20intermittent%20latency%20spikes.

(I used to work for World Gaming, and followed the game commentators more than I 
do now.)

--dave


[Starlink] The "reasons" that bufferbloat isn't a problem

2024-05-06 Thread Rich Brown via Starlink
Hi Gene,

I've been vacillating on whether to send this note, but have decided to pull 
the trigger. I apologize in advance for the "Debbie Downer" nature of this 
message. I also apologize for any errors, omissions, or over-simplifications of 
the "birth of bufferbloat" story and its fixes. Corrections welcome.

Rich
--

If we are going to take a shot at opening people's eyes to bufferbloat, we 
should know some of the "objections" we'll run up against. Even though there's 
terrific technical data to back it up, people seem especially resistant to 
thinking that bufferbloat might affect their network, even when they're seeing 
problems that sound exactly like bufferbloat symptoms. But first, some history:

The very idea of bufferbloat is simply unbelievable. Jim Gettys in 2011 [1] 
couldn't believe it, and he's a smart networking guy. At the time, it seemed 
incredible (that is "not credible" == impossible) that something could induce 
1.2 seconds of latency into his home network connection. He called in favors 
from technical contacts at his ISP and at Bell Labs who went over everything 
with a fine-toothed comb. It was all exactly as spec'd. But he still had the 
latency. 
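
(The arithmetic behind a number like that is simple: the delay a full buffer adds 
is just the amount of queued data divided by the link rate. A quick illustrative 
sketch, with plausible round numbers rather than Jim's actual measurements:)

def buffer_delay_s(buffered_bytes: int, link_rate_bps: float) -> float:
    """Latency added by data sitting in a queue ahead of a bottleneck link."""
    return buffered_bytes * 8 / link_rate_bps

# Illustrative: ~1 MB of queued upload traffic draining through a ~7 Mbit/s uplink.
print(f"{buffer_delay_s(1_000_000, 7_000_000):.2f} s of added delay")  # ~1.14 s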

This led Jim and Dave Täht to start the investigation into the phenomenon known 
today as "bufferbloat" - the undesirable latency that comes from a router or 
other network equipment buffering too much data. Over several years, a group of 
smart people made huge improvements: fq_codel was released 14 May 2012 [3]; it 
was incorporated into the Linux kernel shortly afterward. CAKE came in 2015, 
and the fixes that minimize bufferbloat in Wi-Fi arrived in 2018. In 2021 
cake-autorate [4] arrived to handle varying-speed ISP links. All these 
techniques work great: in 2014, my 7 Mbps DSL link was quite usable. And when 
the pandemic hit, fq_codel on my OpenWrt router allowed me to use that same 
7 Mbps DSL line for two simultaneous Zoom calls. 
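
For anyone who has never seen what "turning on" one of these fixes looks like, 
here is a minimal sketch of instantiating CAKE with tc on a Linux or OpenWrt box. 
The interface name and shaped rate are placeholders (the rate is set a bit below 
the nominal uplink speed so the queue forms where CAKE can manage it), not a 
tuned recommendation:

import subprocess

# Illustrative only: requires root and a kernel with sch_cake available.
IFACE = "pppoe-wan"        # placeholder WAN interface name
SHAPED_RATE = "6500kbit"   # a bit under a nominal 7 Mbit/s uplink

subprocess.run(
    ["tc", "qdisc", "replace", "dev", IFACE, "root",
     "cake", "bandwidth", SHAPED_RATE, "nat", "ack-filter"],
    check=True,
)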

As one of the authors of [2], I am part of the team that has tried over the 
years to explain bufferbloat and how to fix it. We've spoken with vendors. 
We've spent untold hours responding to posts on assorted boards and forums with 
the bufferbloat story. 

With these technical fixes in hand, we cockily set about to tell the world 
about how to fix bufferbloat. Our efforts have been met with skepticism at 
best, or stony silence. What are the objections? 

- This is just the ordinary behavior: I would expect things to be slower when 
there's more traffic. (Willfully ignoring an orders-of-magnitude increase in delay.)
- Besides, I'm the only one using the internet. (Except when my phone uploads 
photos. Or my computer kicks off some automated process. Or I browse the web. 
Or ...)
- It only happens some of the time. (Exactly. That's probably when something's 
uploading photos, or your computer is doing stuff in the background.)
- Those bufferbloat tests you hear about are bogus. They artificially add load, 
which isn't a realistic test. (...and if you actually are downloading a file?)
- Bufferbloat only happens when the network is 100% loaded. (True. But when you 
open a web page, your browser briefly uses 100% of the link. Is this enough to 
cause momentary lag?)
- It's OK. I just tell my kids/spouse not to use the internet when I'm gaming. 
(Huh?)
- I have gigabit service from my ISP. (That helps, but if you're complaining 
about "slowness" you still need to rule out bufferbloat in your router.)
- I can't believe that router manufacturers would ever allow such a thing to 
happen in their gear. (See the Jim Gettys story above.)
- I mean... wouldn't router vendors want to provide the best for their 
customers? (No - implementing this (new-ish) code requires engineering effort. 
They're selling plenty of routers with decade-old software. The Boss says, 
"would we sell more if they made these changes? Probably not.")
- Why would my ISP provision/sell me a router that gave crappy service? They're 
a big company, they must know about this stuff. (Maybe. We have reached out to 
all the vendors. But remember they profit if you decide your network is too 
slow and you upgrade to a faster device/plan.)
- But couldn't I just tweak the QoS on my router? (Maybe. But see [5])
- Besides, I just spent $300 on a "gaming router". Obviously, I bought the most 
expensive/best possible solution on the market (But I still have lag...)
- You're telling me that a bunch of pointy-headed academics are smarter than 
commercial router developers - who sold me that $300 router? (I can't believe 
it.)
- And then you say that I should throw away that gaming router and install some 
"open source firmware"? (What the heck is that? And why should I believe you?) 
- What if it doesn't solve the problem? Who will give me support? And how will 
I get back to a vendor-supported system? (Valid point - the first valid point)
- Aren't there any commercial solutions I can just buy? (Not at the moment. 
IQrouter was a shining light 

Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread Alexandre Petrescu via Starlink


On 02/05/2024 at 21:50, Frantisek Borsik wrote:
Thanks, Colin. This was just another great read on video (and audio - 
in the past emails from you) bullet-proofing for the near future.


To be honest, the consensus on the bandwidth overall in the 
bufferbloat related circles was in the 25/3 - 100/20 ballpark



To continue this discussion of 25 Mbit/s (Mbyte/s?) for 4K and 8K, 
here are some more thoughts:


- about the 25 Mbit/s bandwidth need for 4K:  HDMI cables for 4K HDR10 (high 
dynamic range) are specified at 18 Gbit/s, not 25 Mbit/s (Mbyte?).  
These HDMI cables don't run IP.  But, supposedly, the displayed 4K image 
is of a higher quality if played over HDMI (presumably from a player) 
than from a remote server on the Internet.   To achieve parity, maybe 
one wants to run that HDMI flow from the server over IP, and at that 
point the bandwidth requirement is higher than 25 Mbit/s.  This goes hand 
in hand with the evolution of discs (triple-layer Blu-ray discs of 120 GByte 
capacity are the most recent; I don't see signs of that slowing).
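
For scale, here is a back-of-the-envelope sketch (my round numbers, not figures 
from this thread) comparing the roughly uncompressed rate an HDMI link carries 
with the compressed rate a streaming service sends over IP; it assumes 4K at 
60 frames/s, 10-bit HDR, and no chroma subsampling:

# Rough arithmetic: uncompressed 4K HDR video vs. a typical streaming bitrate.
width, height = 3840, 2160
fps = 60                  # assumed frame rate
bits_per_pixel = 10 * 3   # 10-bit HDR, three color components

uncompressed_bps = width * height * fps * bits_per_pixel
stream_bps = 25e6         # the 25 Mbit/s figure discussed above

print(f"uncompressed: {uncompressed_bps / 1e9:.1f} Gbit/s")          # ~14.9 Gbit/s
print(f"streamed:     {stream_bps / 1e6:.0f} Mbit/s")
print(f"compression ratio: ~{uncompressed_bps / stream_bps:.0f}:1")  # ~600:1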


- in some regions, terrestrial DVB (TV over radio frequencies, with 
antenna receivers, not IP) runs at 4K HDR10 starting this year.  I don't 
know which MPEG codec it uses, or at what Mbit/s speed. But it is not over the 
Internet.  This means that ISPs are probably inclined to do more than 
that 4K over the Internet, maybe 8K, to distinguish their service from 
DVB.  The audience of these DVB streams is very wide, with cheap 
one-time-buy receivers (no subscription, unlike with an ISP) already widely 
available in electronics stores.


- a smaller, yet important, audience is that of 8K TV via satellite.  
There is one Japanese 8K satcom TV provider, and the audience (number of 
viewers) is probably smaller than that of DVB 4K HDR.  Still, it 
constitutes competition for IPTV from ISPs.


To me, that reflects a direction of growth from a 4K toward an 8K capability 
requirement on the Internet.


Still, that growth in bandwidth requirement does not say anything about 
the latency requirement.  That has to come from elsewhere, and it is 
probably only loosely related to TV.


Alex

, but all that many of us were trying to achieve while talking to the FCC 
(et al.) was to point out that, in order to really make it bulletproof 
and usable not only for the near future but for today, a reasonable 
Quality of Experience requirement needs to be added to the 
definition of broadband. Here is the link to the FCC NOI and related 
discussion:

https://circleid.com/posts/20231211-its-the-latency-fcc

Hopefully, we have managed to get that message over to the other side. 
At least 2 of the 5 FCC Commissioners seem to be getting it - Nathan 
Simington and Brendan Carr - and Nathan even arranged for his 
staffers to talk with Dave and others. Hope that this line of 
cooperation will continue and we will manage to help the rest of the 
FCC to understand the issues at hand correctly.


All the best,

Frank

Frantisek (Frank) Borsik

https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.bor...@gmail.com



On Thu, May 2, 2024 at 4:47 PM Colin_Higbie via Starlink 
 wrote:


Alex, fortunately, we are not bound to use personal experiences
and observations on this. We have real market data that can
provide an objective, data-supported conclusion. No need for a
chocolate-or-vanilla-ice-cream-tastes-better discussion on this.

Yes, cameras can film at 8K (and higher in some cases). However,
at those resolutions (with exceptions for ultra-high end cameras,
such as those used by multi-million dollar telescopes), except
under very specific conditions, the actual picture quality doesn't
vary past about 5.5K. The loss of detail simply moves from a
consequence of too few pixels to optical and focus limits of the
lenses. Neighboring pixels simply hold a blurry image, meaning
they don't actually carry any usable information. A still shot
with 1/8 of a second exposure can easily benefit from an 8K or
higher sensor. Video sometimes can under bright lights with a
relatively still or slow moving scene. Neither of these
requirements lends itself to typical home video at 30 (or 24)
frames per second – that's 0.03s of time per frame. We can imagine
AI getting to the point where it can compensate for lack of
clarity, and this is already being used for game rendering (e.g.,
Nvidia's DLSS and Intel's XeSS), but that requires training per
scene in those games and there hasn't been much development work
done on this for filming, at least not yet.

Will sensors (or AI) improve to capture images faster per amount
of incoming photons so that effective digital shutter speeds can
get faster at lower light levels? No doubt. Will it materially
change video quality so that 8K is a similar step up from 4K as 4K
is from HD (or as HD was 

[Starlink] Intended BUFFERBLOAT FREE AND CONGESTION RESILIENT SATCOM NETWORKS (ARTES AT 6B.129)

2024-05-06 Thread David Fernández via Starlink
Dear all,

You may express your interest in this:
https://esastar-publication.sso.esa.int/ESATenderActions/details/75587

Regards,

David F.
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink