Re: [NNagain] "FCC explicitly prohibits fast lanes, closing possible net neutrality loophole"

2024-05-16 Thread Sebastian Moeller via Nnagain
Hi Karl,

being a lawyer, you know how this works: until a court comes to an 
interpretation, everything remains in some degree of limbo, so if you want 
clarification, sue somebody to force an interpretation ;)


> On 15. May 2024, at 23:43, Karl Auerbach via Nnagain 
>  wrote:
> 
> As a matter of drafting the FCC has left some potholes:
> "We clarify that a BIAS [Broadband Internet Access Service] provider's 
> decision to speed up 'on the basis of Internet content, applications, or 
> services' would 'impair or degrade' other content, applications, or services 
> which are not given the same treatment,"
> That phrase "speed up" is too vague. 

[SM] I wholeheartedly agree. Speed is delta distance / delta time, and is 
typically about 2/3 of the speed of light in vacuum for almost all internet 
access technologies; it is not really what an ISP sells. ISPs offer capacity or 
information rate, not speed.

> Does it conflict with active or fair queue management? 

[SM] No, it does not. Equitable sharing between entities is a core principle 
behind the internet; fair queuing is just the consistent implementation of the 
IETF's 'do not starve any flow' principle... you could rather argue that *not* 
doing equitable sharing might get you into hot water. (Sidenote: the IMHO best 
argument for equitable sharing is not that it is the 'best' policy but that it 
is the 'least bad'. To elaborate: knowing the relative importance of data 
packets, one can do a lot better than treating them all equally, but lacking 
that knowledge, treating all packets equally is what avoids really bad 
outcomes. And since arbitrary bottlenecks will never have robust and reliable 
information about the relative importance of packets, equitable sharing is the 
'good enough' solution.)
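To illustrate the point, here is a minimal sketch (purely illustrative, not any 
production implementation) of Deficit Round Robin, a classic fair-queuing 
scheduler that delivers exactly this 'do not starve any flow' behaviour: each 
flow gets an equal byte quantum per round, so a sparse flow is never crowded 
out by a heavy one.

```python
from collections import deque

class DRRScheduler:
    """Deficit Round Robin: every backlogged flow earns the same byte
    quantum per round, so no flow can starve another."""
    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.flows = {}      # flow_id -> deque of queued packet sizes
        self.deficit = {}    # flow_id -> accumulated byte credit

    def enqueue(self, flow_id, size):
        self.flows.setdefault(flow_id, deque()).append(size)
        self.deficit.setdefault(flow_id, 0)

    def dequeue_round(self):
        """One scheduling round; returns the list of (flow_id, size) sent."""
        sent = []
        for fid, q in list(self.flows.items()):
            self.deficit[fid] += self.quantum
            while q and q[0] <= self.deficit[fid]:
                size = q.popleft()
                self.deficit[fid] -= size
                sent.append((fid, size))
            if not q:
                self.deficit[fid] = 0  # idle flows keep no credit
        return sent

s = DRRScheduler(quantum=1500)
for _ in range(3):
    s.enqueue("bulk", 1500)   # heavy flow with three full-size packets
s.enqueue("voip", 200)        # sparse flow with one small packet
round1 = s.dequeue_round()    # the sparse flow is served in the same round
```

Note how in the first round the bulk flow can only send one full-size packet 
before its credit runs out, while the small VoIP packet still gets through.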

> Does it prohibit things that some Ethernet NIC "offloads" do (but which could 
> be done by a provider) such as TCP data aggregation (i.e. the merging of lots 
> of small TCP segments into one big one)?

[SM] No: the relevant TCP stack actions happen at the end points, not at the 
ISP. The ISP's job is to transport the data; if that entails transient merging 
and re-segmentation, that is fine as long as it is opaque to the end user. 
Another matter is ACK filtering, where ISPs do interfere with customer data, 
albeit with the goal of improving link quality for the customer.

> Does it prohibit insertion of an ECN bit that would have the effect of 
> slowing a sender of packets? 

[SM] Not really, as the alternative would have been dropping the packet, with 
even more drastic effects: a drop is not only a 'slow down' signal, the dropped 
packet also needs to be retransmitted.
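A sketch of that marking decision, as an AQM might make it (illustrative only; 
the codepoint values are the ones defined in RFC 3168):

```python
# ECN codepoints live in the two low bits of the IPv4 TOS / IPv6 Traffic
# Class byte (RFC 3168): 0=Not-ECT, 1=ECT(1), 2=ECT(0), 3=CE.
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def congestion_response(tos):
    """What an AQM can do when it must signal congestion: mark CE if the
    flow declared ECN capability, otherwise drop (forcing a retransmit).
    Returns (new_tos_byte, dropped)."""
    ecn = tos & 0b11
    if ecn in (ECT0, ECT1):
        return (tos & ~0b11) | CE, False  # mark instead of drop
    return tos, True                      # Not-ECT: a drop is the only signal

print(congestion_response(0xBA))  # ECT(0) packet -> marked CE, kept
print(congestion_response(0xB8))  # Not-ECT packet -> dropped
```

So the ECN mark is the gentler of the two congestion signals: the packet still 
arrives, and no retransmission is needed.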

> Might it preclude a provider "helpfully" dropping stale video packets that 
> would arrive at a users video rendering codec too late to be useful? 

[SM] Hard to say; even a late packet might help decoding later packets compared 
to dropping it, so I would argue it is not the ISP's business to interfere.

> Could there be an issue with selective compression? 

[SM] Yes. If an ISP recompresses material unasked, it had better make sure the 
compression is completely invisible to the end user, which IMHO rules out lossy 
compression schemes. An ISP is in the business of transporting bits, not 
manipulating bits.

> Or, to really get nerdy - given that a lot of traffic uses Ethernet frames as 
> a model, there can be a non-trivial amount of hidden, usually unused, 
> bandwidth in that gap between the end of tiny IP packets and the end of 
> minimum length Ethernet frames. (I've seen that space used for things like 
> license management.) 

[SM] The minimum Ethernet payload size is 46 bytes (with a VLAN tag only 42); 
an IPv4 header takes 20 bytes and a TCP header another 20 (a UDP header only 
8), so we are down to 6 (or 2) bytes for license management (controllable by 
the user via adding a VLAN tag). That seems a harebrained idea based on 'nobody 
is going to look there'. More space is left over with ICMP or ARP... 
But at least that is novel; all I knew of was that the MACs of network cards 
have been used for licensing.
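The arithmetic can be sketched as a toy calculation, using the header sizes 
mentioned above (IPv4 and TCP without options):

```python
# Toy arithmetic for the 'hidden space' in minimum-size Ethernet frames.
ETH_MIN_PAYLOAD = 46   # minimum Ethernet payload in bytes
VLAN_TAG = 4           # an 802.1Q tag reduces the minimum payload to 42
IPV4_HDR = 20          # IPv4 header without options
TCP_HDR = 20           # TCP header without options
UDP_HDR = 8            # UDP header

def padding_space(vlan=False, l4_hdr=TCP_HDR):
    """Mandatory padding bytes between a zero-payload IP packet and the
    minimum Ethernet frame length."""
    min_payload = ETH_MIN_PAYLOAD - (VLAN_TAG if vlan else 0)
    return max(0, min_payload - IPV4_HDR - l4_hdr)

print(padding_space())                # 6 bytes for TCP, no VLAN tag
print(padding_space(vlan=True))       # 2 bytes for TCP with a VLAN tag
print(padding_space(l4_hdr=UDP_HDR))  # 18 bytes for UDP
```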

> Or might this impact larger path issues, such as routing choices, either 
> dynamic or based on contractual relationships - such as conversational voice 
> over terrestrial or low-earth-orbit paths while background file transfers are 
> sent via fat, but large latency paths such as geo-synch satellite? 
> If an ISP found a means of blocking spam from being delivered, would that 
> violate the rules?  (Same question for blocking of VoIP calls from 
> undesirable sources. 

[SM] That one is easy, if the ISP acts under explicit instruction of the end 
user this is A-OK otherwise not.

> It may also call into question even the use of IP address blacklists or 
> reverse path algorithms that block traffic coming from places where it has no 
> business coming from.)

[SM] That is IMHO FUD; BCP38 still applies. That is normal network hygiene...

> The answers may be obvious to tech folks here but in the hands of troublesome 
> lawyers (I'm one of those) these ambiguities 

Re: [NNagain] Flash priority

2024-03-09 Thread Sebastian Moeller via Nnagain
Hi Bob,

so having iperf2 actually check and report this information is obviously the 
end game here (especially reporting: the DSCP and ECN patterns sent, the 
patterns received by the other side, and what the receiver saw in the response 
packets would all be really helpful).

But one can use tcpdump as a crude hack to get the desired information:

Here are my go-to tcpdump invocations for that purpose...

# ECN IPv4/6
tcpdump -i pppoe-wan -v -n '(ip6 and (ip6[0:2] & 0x30) >> 4  != 0)' or '(ip and 
(ip[1] & 0x3) != 0)' # NOT Not-ECT
tcpdump -i pppoe-wan -v -n '(ip6 and (ip6[0:2] & 0x30) >> 4  == 1)' or '(ip and 
(ip[1] & 0x3) == 1)' # ECT(1)
tcpdump -i pppoe-wan -v -n '(ip6 and (ip6[0:2] & 0x30) >> 4  == 2)' or '(ip and 
(ip[1] & 0x3) == 2)' # ECT(0)
tcpdump -i pppoe-wan -v -n '(ip6 and (ip6[0:2] & 0x30) >> 4  == 3)' or '(ip and 
(ip[1] & 0x3) == 3)' # CE

# TCP ECN IPv4/6: (for IPv6 see 
https://ask.wireshark.org/question/27153/i-am-trying-to-capture-tcp-syn-on-ipv6-packets-but-i-only-get-ipv4/)
tcpdump -i pppoe-wan -v -n '(tcp[tcpflags] & (tcp-ece|tcp-cwr) != 0)' or 
'((ip6[6] = 6) and (ip6[53] & 0xC0 != 0))' # TCP ECN flags, ECN in action
tcpdump -i pppoe-wan -v -n '(tcp[tcpflags] & tcp-ece != 0)' or '((ip6[6] = 6) 
and (ip6[53] & 0x40 != 0))' # TCP ECN flags, ECE: ECN-Echo (reported as E)
tcpdump -i pppoe-wan -v -n '(tcp[tcpflags] & tcp-cwr != 0)' or '((ip6[6] = 6) 
and (ip6[53] & 0x80 != 0))' # TCP ECN flags, CWR: Congestion Window Reduced 
(reported as W)


# IPv4/6 everything with DSCP decimal 45 (0x2D)
tcpdump -i pppoe-wan -v -n '(ip and (ip[1] & 0xfc) >> 2 == 0x2D)' or '(ip6 and 
(ip6[0:2] & 0xfc0) >> 6 == 0x2D)'


Sure, these are not super convenient, but they can help a lot in quick-and-dirty 
debugging...
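All of these filters mask bits out of the same TOS / Traffic Class byte; the 
equivalent decoding in Python (a small illustrative helper, not part of iperf2 
or tcpdump) makes the bit layout explicit:

```python
def decode_tos(tos):
    """Split an IPv4 TOS / IPv6 Traffic Class byte into DSCP and ECN,
    mirroring the bit masks used in the tcpdump filters above."""
    dscp = (tos & 0xFC) >> 2  # upper 6 bits
    ecn = tos & 0x03          # lower 2 bits: 0=Not-ECT, 1=ECT(1), 2=ECT(0), 3=CE
    return dscp, ecn

print(decode_tos(0xB8))  # EF: DSCP 46, Not-ECT
print(decode_tos(0xB6))  # DSCP 45 with ECT(0)
```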

Note: pppoe-wan is my OpenWrt router's wan interface.



> On 9. Mar 2024, at 19:43, rjmcmahon via Nnagain 
>  wrote:
> 
> I should note that I haven't evaluated ECN marks, just that 45 gets passed 
> to/fro
> 
> Bob
>>> [JL] Quite true: each network tends to use DSCP marks on a
>>> private/internal basis and so will bleach the DSCP marks on ingress
>>> from peers. This will, however, change with the upcoming IETF RFC on
>>> Non-Queue-Building (NQB) Per Hop Behavior -
>>> https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-nqb. And I can
>>> report that we at Comcast now permit DSCP-45 inbound for NQB packets,
>>> in case developers would like to experiment with this (we just
>>> finished updating router configs last week for residential users on
>>> DOCSIS; FTTP and commercial are still in process).
>> iperf 2 now supports a --dscp option as a convenience (vs setting the
>> --tos byte.) I can confirm --dscp 45 is being passed over my xfinity
>> hop to my linodes (now Akamai) servers in both directions at multiple
>> colo locations.
>> The --dscp is in the master branch.
>> https://sourceforge.net/p/iperf2/code/ci/master/tree/  Older versions
>> require --tos and setting the byte, e.g. 180
>> Bob
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Flash priority

2024-03-09 Thread Sebastian Moeller via Nnagain
Hi Jason,

> On 9. Mar 2024, at 15:38, Livingood, Jason via Nnagain 
>  wrote:
> 
> On 3/8/24, 22:02, "Nnagain on behalf of David Lang via Nnagain" 
>   on behalf of 
> nnagain@lists.bufferbloat.net > wrote:
> 
>> In practice, priority bits are ignored on the Internet. There are no legal
> limits on what bits can be generated, and no reason to trust priority bits 
> that 
> come from a different network.
>> As I understand the current state of the art, best practice is to zero out
> priorities at organizational boundries
> 
> [JL] Quite true: each network tends to use DSCP marks on a private/internal 
> basis and so will bleach the DSCP marks on ingress from peers. This will, 
> however, change with the upcoming IETF RFC on Non-Queue-Building (NQB) Per 
> Hop Behavior - https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-nqb.

[SM] With all due respect, that is wishful thinking. Just because an IETF RFC 
states/recommends something does not mean it is actually implemented that way 
in the existing internet... Current in-effect RFCs already recommend that ISPs 
should not change DSCPs that they do not need for their own PHB needs but 
simply give them default forwarding, yet that is not what ISPs actually do. 
Case in point: a big (probably the biggest) DOCSIS ISP in the USA had been 
remarking a noticeable fraction of packets to CS1 for years (which at the time 
was defined to mean background or lower priority and is treated as such by 
WiFi APs by default), causing issues in end users' home networks. (Said ISP, to 
its credit, did fix the issue recently, but it took a few years...)

Just because something is written in an RFC does not make it reality. And given 
the hogwash that some RFCs contain, that is not even a bad thing per se. 
(Examples on request ;) )

> And I can report that we at Comcast now permit DSCP-45 inbound for NQB 
> packets, in case developers would like to experiment with this (we just 
> finished updating router configs last week for residential users on DOCSIS; 
> FTTP and commercial are still in process).

[SM] Since I have your attention: if I try Comcast's bespoke networkQuality 
server (from your L4S tests):
networkQuality -C https://rpm-nqtest-st.comcast.net/.well-known/nq -k -s -f 
h3,L4S
I saw ECT(1) marking on my egressing packets, but none on the ingressing 
packets... that does not seem to be in line with the L4S RFCs (giving another 
example of why RFC text alone is not sufficient for much). (Sidenote: if all 
L4S testing is happening in isolated networks, why wait for L4S to become an 
RFC before starting these tests?)

> 
> 
> 
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Verizon, T-Mobile, Nokia get noisy on network slicing and net neutrality (LightReading)

2024-03-09 Thread Sebastian Moeller via Nnagain
Hi Jason.

> On 9. Mar 2024, at 00:30, Livingood, Jason via Nnagain 
>  wrote:
> 
> I find it difficult to imagine a lot of consumer use cases for this (and find 
> it another rather complex 3GPP spec). I can see some enterprise, industrial, 
> and event (e.g. sports venue) use cases - but those seem like simple give X 
> devices priority over Y devices sorts of scenarios. 

[SM] Looking at the wikipedia article on slicing I see:
"Network slicing emerges as an essential technique in 5G networks to 
accommodate such different and possibly contrasting quality of service (QoS) 
requirements exploiting a single physical network infrastructure.[1][13]
[...]
Impact and applications
In commercial terms, network slicing allows a mobile operator to create 
specific virtual networks that cater to particular clients and use cases. 
Certain applications - such as mobile broadband, machine-to-machine 
communications (e.g. in manufacturing or logistics), or smart cars - will 
benefit from leveraging different aspects of 5G technology. One might require 
higher speeds, another low latency, and yet another access to edge 
computing resources. By creating separate slices that prioritise specific 
resources a 5G operator can offer tailored solutions to particular 
industries.[14][15]: 3  Some sources insist this will revolutionise industries 
like marketing, augmented reality, or mobile gaming,[16][17] while others are 
more cautious, pointing to unevenness in network coverage and poor reach of 
advantages beyond increased speed.[18][19]"

As expected, this technique is designed to allow exactly what NN was designed 
to prohibit (treating packets differentially in the internet based on economic 
considerations*)... This is IMHO why, instead of calling a spade a spade, 
mobile carriers avoid describing this in a useful way, as it is exactly about 
prioritisation... IMHO that will backfire; a better avenue would be to be open 
about what it enables and propose a method to restrict the potential issues. 
E.g. (I am making this up on the fly, so it will likely not hold up to any 
degree of scrutiny): by self-limiting to never commit more than X% of a cell's 
capacity to slicing, IFF the cell is used for normal end-user service at all. 
So admit that there is some trade-off here, limit the fall-out, and then 
describe why we as a society should embrace that trade-off. I am a bit 
sceptical about the whole car-to-car communication thing (that is, cars talking 
to cars, not people in cars talking to people in cars ;) ), but if a carrier 
believes there is value in that, e.g. for accident avoidance, then explain how 
this requires the stricter network guarantees that (only?) slicing can deliver.
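That self-limiting idea can be sketched as a toy admission check (all names, 
numbers, and the X% budget here are made up for illustration; this is not any 
real 3GPP mechanism):

```python
def admit_slice(cell_capacity_mbps, committed_slice_mbps, request_mbps,
                max_slice_fraction=0.3, serves_general_users=True):
    """Toy rule: never commit more than max_slice_fraction of a cell's
    capacity to slices while the cell also serves ordinary end users."""
    if not serves_general_users:
        return True  # dedicated cell: no general traffic to protect
    budget = cell_capacity_mbps * max_slice_fraction
    return committed_slice_mbps + request_mbps <= budget

print(admit_slice(1000, 250, 40))  # 290 Mbps committed <= 300 Mbps budget
print(admit_slice(1000, 250, 60))  # 310 Mbps committed >  300 Mbps budget
```

The point is only that such a cap makes the trade-off explicit and auditable, 
which is what the paragraph above argues carriers should offer.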

Personally I still think this is not an attractive proposition, but I am not 
the audience for that anyway; the relevant regulatory agency and the 
legislative is.

Regards
Sebastian

*) This is a (too) short condensation of the rationale of the EU for stepping 
into the NN debate.

> From: Nnagain  on behalf of the 
> keyboard of geoff goodfellow via Nnagain 
> Sent: Friday, March 8, 2024 5:08:28 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard this 
> time! 
> Cc: the keyboard of geoff goodfellow 
> Subject: [NNagain] Verizon, T-Mobile, Nokia get noisy on network slicing and 
> net neutrality (LightReading)   'Placing unnecessary restrictions on this 
> technology could stifle it in its infancy,' Verizon wrote of network slicing, 
> in a widening debate involving the FCC's net neutrality proceeding and new 
> wireless technologies...
> [...]
> https://www.lightreading.com/regulatory-politics/verizon-t-mobile-nokia-get-noisy-on-network-slicing-and-net-neutrality
> via
> https://twitter.com/mikeddano/status/1766207009106669682
> 
> -- 
> geoff.goodfel...@iconia.com
> living as The Truth is True
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Geoff Huston's panel

2024-02-29 Thread Sebastian Moeller via Nnagain


> On 29. Feb 2024, at 18:26, Lee via Nnagain  
> wrote:
> 
> On Thu, Feb 29, 2024 at 9:12 AM Dave Taht via Nnagain wrote:
>> 
>> He is being incredibly provocative this week. It hurt to sit through this.
>> 
>> https://www.youtube.com/watch?v=gxO73fH0VqM
> 
> Yes, he's provocative - but also entertaining.  And don't forget the audience:
> 
> ABOUT APRICOT
> 
> Representing Asia Pacific's largest international Internet conference,
> Asia Pacific Regional Internet Conference on Operational Technologies
> (APRICOT) draws many of the world's best Internet engineers,
> operators, researchers, service providers, users and policy
> communities from over 50 countries to teach, present, and do their own
> human networking.
> 
> His last slide deck seemed to be a call to arms.  He's near the end of
> his career, so for all the Internet engineers, etc.  I saw it as a
> "here's where we're going.  Do you want to contribute to this trend or
> take the Internet in a different direction?"
> 
> For example, after talking about CDNs and how most content is now
> local he brings up the bit about if 10% of your traffic costs you 90%
> of your carriage costs, if I was a rational provider, I would say to
> all those customers who need that 10% of the traffic go find someone
> else. I'm not going to do it.  Don't forget, this is a deregulated
> world - you can do that.  There is no universal obligation to carry
> default.
> 
> Does network neutrality require an ISP to connect you to the Internet
> at large?

At least the EU sees it that way:
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32015R2120
An internet access service provides access to the internet, and in principle to 
all the end-points thereof, irrespective of the network technology and terminal 
equipment used by end-users. However, for reasons outside the control of 
providers of internet access services, certain end points of the internet may 
not always be accessible. Therefore, such providers should be deemed to have 
complied with their obligations related to the provision of an internet access 
service within the meaning of this Regulation when that service provides 
connectivity to virtually all end points of the internet. Providers of internet 
access services should therefore not restrict connectivity to any accessible 
end-points of the internet.

So you need to at least try... not sure about other jurisdictions.


>  Or do they get to drop the "expensive" traffic that
> requires connecting to a transit provider (or however they do it now
> to connect to the global Internet).
> 
> I was a bit dubious about the assertion that most traffic stays within
> the AS but surprise, surprise, surprise (most people here are old
> enough to remember Gomer Pyle.. right?).. youtube content is in the
> Verizon network.  Start wireshark, get the IP address of the youtube
> server and
> $ sudo traceroute -6TAn 2600:803:f00::e
> traceroute to 2600:803:f00::e (2600:803:f00::e), 30 hops max, 72 byte packets
>  <.. snip ..>
> 3  2600:4000:1:236::326 [AS701]  33.323 ms 2600:4000:1:236::324
> [AS701]  2.542 ms 2600:4000:1:236::326 [AS701]  33.315 ms
> 4  * * *
> 5  2600:803:6af::6 [AS701]  3.843 ms  3.838 ms  3.834 ms
> 6  2600:803:f00::e [AS701]  2.911 ms  2.216 ms  2.472 ms
> 
> Do the same for Netflix and I get three [??] different ASs:
> $ sudo traceroute -6TAn 2600:1f18:631e:2f84:4f7a:4092:e2e9:c617
> traceroute to 2600:1f18:631e:2f84:4f7a:4092:e2e9:c617
> (2600:1f18:631e:2f84:4f7a:4092:e2e9:c617), 30 hops max, 72 byte
> packets
>  <.. snip ..>
> 5  2600:803:9af::82 [AS701]  8.048 ms 2600:803:9af::5a [AS701]  8.297
> ms 2600:803:2::5a [AS701]  8.294 ms
> 6  * 2620:107:4000:c5c0::f3fd:f [*]  2.846 ms
> 2620:107:4000:c5c1::f3fd:20 [*]  2.810 ms
> 7  2620:107:4000:cfff::f202:d5b1 [*]  8.148 ms
> 2620:107:4000:cfff::f203:54b1 [*]  5.289 ms
> 2620:107:4000:cfff::f202:d4b1 [*]  4.300 ms
> 8  2620:107:4000:a793::f000:3863 [*]  4.865 ms
> 2620:107:4000:a610::f000:2403 [*]  5.245 ms
> 2620:107:4000:acd3::f000:e060 [*]  5.201 ms
> 9  * * *
> 10  2600:1f18:631e:2f84:4f7a:4092:e2e9:c617 [AS14618/AS16509]  4.881
> ms  4.864 ms  4.848 ms
> 11  2600:1f18:631e:2f84:4f7a:4092:e2e9:c617 [AS14618/AS16509]  6.351
> ms  6.075 ms  5.935 ms
> 
> Does it violate network neutrality that youtube content takes the
> "fast lane" getting to me?
> 
> and just for chuckles..
> $ dig 2024.apricot.net  +short
> 2001:dd8:f::1
> 
> $ sudo traceroute -6TAn 2001:dd8:f::1
> traceroute to 2001:dd8:f::1 (2001:dd8:f::1), 30 hops max, 72 byte packets
>  <.. snip ..>
> 3  2600:4000:1:236::324 [AS701]  27.390 ms 2600:4000:1:236::326
> [AS701]  5.711 ms 2600:4000:1:236::324 [AS701]  27.384 ms
> 4  * * *
> 5  * * 2001:2035:0:bb3::1 [AS1299]  7.235 ms
> 6  2001:2034:1:73::1 [AS1299]  7.763 ms  6.033 ms  5.996 ms
> 7  2001:2034:1:b7::1 [AS1299]  11.530 ms 2001:2034:1:b8::1 [AS1299]
> 10.704 ms *
> 8  * * *
> 9  2001:2000:3080:230d::2 [AS1299]  72.609 ms  72.594 ms  73.096 ms
> 10  * * *
> 11  * * *

Re: [NNagain] are you Bill Woodcock?

2024-01-18 Thread Sebastian Moeller via Nnagain
Hi Bill,

thank you for this great explanation.

> On 18. Jan 2024, at 23:38, Bill Woodcock via Nnagain 
>  wrote:
> 
>> On Jan 18, 2024, at 22:51, le berger des photons via Nnagain 
>>  wrote:
>> First I've ever seen the term IXP.  It seems interesting.  Can you point me 
>> to some documentation at a level which only requires the ability to read in 
>> english?  Lots of what I've seen here has initials for things which I 
>> haven't even been able to decode.
>> I've been connecting 200 families in a 25 km radius to internet via 8 fiber 
>> optic connections for the last 20 years.
>> I've been thinking of inviting others to participate,  help them get going.
>> Thinking how it might be useful to provide each client two accesses.  one to 
>> the global internet,  one to a local network which isn't being watched by 
>> big brother.
>> Does any of this warrant my looking further into IXP technology?
> 
> Hi, Jay.
> 
> I’m afraid I’m really bad at getting all this stuff written down, though I 
> know it would be useful.  I am planning to write a doctoral thesis on exactly 
> this topic (the societal and economic impact of Internet exchange points) for 
> Universite Paris 8 next year, but that will need to be a bit more academic 
> than practical, to satisfy, you know, academia.
> 
> So, really basically, it sounds like you’re already building an internet 
> exchange.  Internet exchanges are where Internet bandwidth comes from.  
> Internet service providers bring Internet bandwidth from IXPs to the places 
> where people want to use it: their homes, their offices, their phones.  
> Internet bandwidth is free _at_ the exchange, but transport costs money.  
> Speed times distance equals cost.  So the cost of Internet bandwidth is 
> proportional to the speed and the distance from IXPs.  Plus a profit margin 
> for the Internet service provider.
> 
> So, if one Internet user wants to talk to another Internet user, generally 
> they hand off their packet to an Internet service provider, who takes it to 
> an exchange, and hands it off to another Internet service provider, who 
> delivers it to the second user.  When the second user wants to reply, the 
> process is reversed, but the two Internet service providers may choose a 
> different exchange for the hand-off: since each is economically incentivized 
> to carry the traffic the shortest possible distance (to minimize cost, speed 
> x distance = cost), the first ISP will always choose the IXP that’s nearest 
> the first user, for the hand-off, leaving the second ISP a longer distance to 
> carry the packet.  Then, when their situations are reversed, the second ISP 
> will choose the IXP nearest the second user, leaving the first ISP to carry 
> the packet a longer distance.
> [...]

I would propose a slight modification: "each is economically incentivized to 
carry the traffic the shortest possible distance" is not free of assumptions, 
namely that the shortest path is the cheapest path, which is not universally 
true. My personal take is "routing follows cost": it is money, in the end, that 
steers routing decisions, not distance. (Sure, often shortest is also cheapest, 
but that is simply not guaranteed, at least once we include paid peering and 
transit in the equation.) Most end users would actually prefer the shortest 
distance...

Case in point: my ISP aggregates its customers in a handful of locations in 
Germany, Hamburg in my case, while I actually live a bit closer to Frankfurt 
than to Hamburg. So all my traffic first goes to Hamburg, even traffic bound 
for Frankfurt (resulting in a 500-600 km detour). I assume they do this for 
economic reasons and not just out of spite ;) 

Now, maybe the important point is that this does not involve IXPs, so it might 
be an orange to the IXP apple?

Regards & Thanks again
Sebastian
___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] WISPA Seeks Broad Net Neutrality Exemption for Small ISPs

2023-12-19 Thread Sebastian Moeller via Nnagain
Does anybody have information on which exact regulatory demands they object to? 
The article is light on those details ;)



> On Dec 19, 2023, at 16:28, Frantisek Borsik via Nnagain 
>  wrote:
> 
> 
> https://policyband.com/blog/wispa-seeks-broad-net-neutrality-exemption-for-small-isps-2
> 
> 
> All the best,
> 
> Frank
> Frantisek (Frank) Borsik
> 
> https://www.linkedin.com/in/frantisekborsik
> Signal, Telegram, WhatsApp: +421919416714 
> iMessage, mobile: +420775230885
> Skype: casioa5302ca
> frantisek.bor...@gmail.com
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Fwd: The 12 Lies of Telecoms Xmas

2023-12-19 Thread Sebastian Moeller via Nnagain
Thanks for sharing. That has an 'old man shouting at clouds' vibe; I like it.

On 19 December 2023 16:01:09 CET, Dave Taht via Nnagain 
 wrote:
>-- Forwarded message -
>From: Dean Bubley via LinkedIn 
>Date: Tue, Dec 19, 2023 at 5:05 AM
>Subject: The 12 Lies of Telecoms Xmas
>To: Dave Taht 
>
>
>During 2023, I've lost patience with some of the more outrageous statements…
>Newsletter on LinkedIn
>
>Dean Bubley's Tech Musings
>
>Analysis and arguments on wireless, telecoms, 5G & the wider futurism
>landscape by @disruptivedean
>
>
>Dean Bubley
>
>Tech Industry Analyst & Futurist @ DISRUPTIVE ANALYSIS | Influential
>advisor & speaker with 25yrs+ in Telecoms Strategy, 5G / 6G / Wi-Fi,
>Spectrum, 

Re: [NNagain] Net neutrality and Bufferbloat?

2023-12-18 Thread Sebastian Moeller via Nnagain
Hi Dick,


> On Dec 18, 2023, at 21:51, Dick Roy  wrote:
> 
> Given that the capacity of a system is in essence a theoretical maximum (in
> this case data rates of a communications system), I am not sure what "scaling
> the capacity to the load" means.

Oh, this was supposed to mean that the EU regulators expect ISPs to 
increase their internal capacity if the sustained load from their customers 
reliably exceeds the given capacity for too long. If an ISP throttles all 
streaming due to a transient overload, to allow e.g. video conference traffic 
to flow more smoothly, this is acceptable; if the same ISP decided to do so ad 
infinitum, to save the cost of removing bottlenecks from its network, that 
would be a problem (in theory; how all of this is handled in practice I cannot 
tell). But hey, I am extrapolating from EU regulation 2015/2120:

The objective of reasonable traffic management is to contribute to an efficient 
use of network resources and to an optimisation of overall transmission quality 
responding to the objectively different technical quality of service 
requirements of specific categories of traffic, and thus of the content, 
applications and services transmitted. Reasonable traffic management measures 
applied by providers of internet access services should be transparent, 
non-discriminatory and proportionate, and should not be based on commercial 
considerations. The requirement for traffic management measures to be 
non-discriminatory does not preclude providers of internet access services from 
implementing, in order to optimise the overall transmission quality, traffic 
management measures which differentiate between objectively different 
categories of traffic. Any such differentiation should, in order to optimise 
overall quality and user experience, be permitted only on the basis of 
objectively different technical quality of service requirements (for example, 
in terms of latency, jitter, packet loss, and bandwidth) of the specific 
categories of traffic, and not on the basis of commercial considerations. Such 
differentiating measures should be proportionate in relation to the purpose of 
overall quality optimisation and should treat equivalent traffic equally. Such 
measures should not be maintained for longer than necessary.



> Throttling the load to the capacity I
> understand.

Yes, I thought it was clever to flip this nomenclature around, but as 
you demonstrate, it was "far too clever" ;)

Regards
Sebastian

> 
> Hmm 
> 
> RR
> 
> -Original Message-
> From: Nnagain [mailto:nnagain-boun...@lists.bufferbloat.net] On Behalf Of
> Sebastian Moeller via Nnagain
> Sent: Monday, December 18, 2023 7:24 AM
> To: Network Neutrality is back! Let´s make the technical aspects heard this
> time!
> Cc: Sebastian Moeller; Ronan Pigott
> Subject: Re: [NNagain] Net neutrality and Bufferbloat?
> 
> Hi Jason,
> 
> 
> during the Covid19 era, the EU issued clarifications that even throttling a
> complete class like streaming video might be within reasonable network
> management. The only stipulations were that this needs to happen only to
> allow arguably more important traffic classes (like work-from-home video
> conferences or remote schooling) to proceed with less interference, and
> blind to source and sender. That is, using this to play favorites amongst
> streaming services would still be problematic, but down-prioritizing all
> streaming would be acceptable. (Now the assumption is that reasonable
> network management will not last forever and is no replacement for scaling
> the capacity to the load in the intermediate/longer term.)
> 
> 
> 
>> On Dec 18, 2023, at 16:10, Livingood, Jason via Nnagain
>  wrote:
>> 
>>> Misapplied concepts of network neutrality is one of the things that
> killed
>>> fq codel for DOCSIS 3.1
>> 
>> I am not so sure this was the case - I think it was just that a different
> AQM was selected. DOCSIS 3.1 includes the DOCSIS-PIE AQM - see
> https://www.rfc-editor.org/rfc/rfc8034.html and 
>> 
> https://www.cablelabs.com/blog/how-docsis-3-1-reduces-latency-with-active-qu
> eue-management. I co-wrote a paper about our deployment during COVID at
> https://arxiv.org/pdf/2107.13968.pdf. See also
> https://www.ietf.org/archive/id/draft-livingood-low-latency-deployment-03.ht
> ml.
>> 
>>> Finally, some jurisdictions impose regulations that limit the ability of
>>> networks to provide differentiation of services, in large part this seems
> to
>>> be based on the belief that doing so necessarily involves prioritization
> or
>>> privileged access to bandwidth, and thus a benefit to one class of
> traffic
>>> always comes at the expense of another.
>> 
>> Much regulatory/policy 

Re: [NNagain] Net neutrality and Bufferbloat?

2023-12-18 Thread Sebastian Moeller via Nnagain
Hi Jason,


during the Covid19 era, the EU issued clarifications that even throttling a 
complete class like streaming video might be within reasonable network 
management. The only stipulations were that this needs to happen only to allow 
arguably more important traffic classes (like work-from-home video conferences 
or remote schooling) to proceed with less interference, and blind to source and 
sender. That is, using this to play favorites amongst streaming services would 
still be problematic, but down-prioritizing all streaming would be acceptable. 
(Now the assumption is that reasonable network management will not last forever 
and is no replacement for scaling the capacity to the load in the 
intermediate/longer term.)



> On Dec 18, 2023, at 16:10, Livingood, Jason via Nnagain 
>  wrote:
> 
>> Misapplied concepts of network neutrality is one of the things that killed
>> fq codel for DOCSIS 3.1
> 
> I am not so sure this was the case - I think it was just that a different AQM 
> was selected. DOCSIS 3.1 includes the DOCSIS-PIE AQM - see  
> https://www.rfc-editor.org/rfc/rfc8034.html and 
> https://www.cablelabs.com/blog/how-docsis-3-1-reduces-latency-with-active-queue-management.
>  I co-wrote a paper about our deployment during COVID at 
> https://arxiv.org/pdf/2107.13968.pdf. See also 
> https://www.ietf.org/archive/id/draft-livingood-low-latency-deployment-03.html.
> 
>> Finally, some jurisdictions impose regulations that limit the ability of
>> networks to provide differentiation of services, in large part this seems to
>> be based on the belief that doing so necessarily involves prioritization or
>> privileged access to bandwidth, and thus a benefit to one class of traffic
>> always comes at the expense of another.
> 
> Much regulatory/policy discussion still frames networks as making decisions 
> with scarce bandwidth, rather than abundant bandwidth, and prioritization in 
> that view is a zero-sum game. But IMO we're no longer in the 
> bandwidth-scarcity era but in a bandwidth-abundance era - or at least in an 
> era with declining marginal utility of bandwidth as compared to techniques to 
> improve latency. But I digress.

Speaking from my side of the pond, over here we still have a rather 
big divide between those sitting on heaps of capacity and those that are still 
in the painful range of <= 16 Mbps (16 itself would not be so bad, but that 
class goes down to below-1-Mbps links, and that is IMHO painful).


> 
> To go back to the question of reasonable network management - the key is that 
> any technique used must not be application or destination-specific. So for 
> example, it cannot be focused on flows to the example.com destination or on 
> any flows that are streaming video [1]. 

See above: as long as example.com is not violating the law, the 
first is also not an option inside the EU regulatory framework, but the second 
has already been permitted under specific, limited circumstances.


> Anyway - I do not think new AQMs or dual queue low latency networking is in 
> conflict with net neutrality. 

I agree that AQMs are pretty safe, and I feel that packet schedulers 
are also fine, even conditional priority schedulers ;)

Regards
Sebastian

> 
> Jason
> 
> [1] Current rules differ between wireless/mobile and fixed last mile 
> networks; currently the MNOs have a lot more latitude than fixed networks, but 
> that may be sorted out in the current NPRM. My personal view is there should 
> be a unified set of rules for all networks.
> 
> 
> 
> 
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain



Re: [NNagain] [Starlink] FCC Upholds Denial of Starlink's RDOF Application

2023-12-15 Thread Sebastian Moeller via Nnagain
Hi Frantisek,


> On Dec 15, 2023, at 13:46, Frantisek Borsik via Nnagain 
>  wrote:
> 
> Thus, technically speaking, one would like the advantages of satcom such 
> as starlink, to be at least 5gbit/s in 10 years time, to overcome the 
> 'tangled fiber' problem.
> 
> No, not really. Starlink was about to address the issue of digital divide -

I beg to differ. Starlink is a commercial enterprise with the goal to 
make a profit by offering (usable) internet access essentially everywhere; it 
is not, as far as I can tell, an attempt at specifically reducing the digital 
divide (where often an important factor is not necessarily location but 
financial means).


> delivering internet to those 640k locations, where there is literally none 
> today. Fiber will NEVER get there. And it will get there, it will be like 10 
> years down the road.

This is IHO the wrong approach to take. The goal needs to be a 
universal FTTH access network (with the exception of extreme locations, no need 
to pull fiber up to the highest Bivouac shelter on Mt. Whitney). And f that 
takes a decade or two, so be it, this is infrastructure that will keep on 
helping for many decades once rolled-out. However given that time frame one 
should consider work-arounds for the interim period. I would have naively 
thought starlink would qualify for that from a technical perspective, but then 
the FCC documents actually discussion requirements and how they were or were 
not met/promised by starlink was mostly redacted. 


> The same is true for missing/losing support for FWA in the grant/funding 
> schemes: all the arguments thrown around by fiber cheerleaders are based on 
> bandwidth (at best) or "speed" (in most cases) or some theoretical 
> future-proofness (I mean, we don't know what will happen in the next hour, much 
> less what will happen in the next 10 years). 

I am no cheerleader (built like a ton, nobody would like to see me with 
pompoms), yet I consider a (reasonably) universal fiber network exactly the 
right political goal. Yet I accept that reaching that goal will not be 
instantaneous, so we should find a way of making those currently effectively 
disconnected participate more in the digital society even before the fiber 
truck reaches their homes...


> HOWEVER, the real issue at hand is the absolutely missing connectivity in 
> many places. Literally ANY service (even 3/1 Mbps) will be a welcome 
> improvement on the current state of things, let alone Starlink with all its 
> pros and cons. 

Yes, I tend to agree; at least from far away this looks like a 
reasonable way to bridge the period until a better network reaches those places.


> 
> Total reliance on fiber will lead mostly to overbuilding at locations with 
> some service, not to the overall improvements everywhere. Typical "good 
> intentions, bad consequences" type of situations. 

No, that would just be a case of bad regulation. If the goal is a 
universal FTTH network, neither planning nor implementing it is "rocket 
science", unless people "cheat".


> Also, when we want to close the digital divide aka "get internet connectivity 
> everywhere" - it means to do it ASAP, even though it would not mean a "state 
> of the art" type of the internet of some blessed hype place on the West or 
> East coast, with so many competing ISPs. 

Yes, that would appear so. However, the FCC process has to be reasonably 
fair to all, and given the redactions in the official documents I cannot 
realistically tell whether the FCC is unreasonably hard here (and if so, why) 
or whether Starlink was trying to under-deliver on the requirements. Given that 
I will likely never get the un-redacted information and am living far away from 
where the FCC has anything to say, I can accept that ambiguity quite easily.


> Last but not least, we should care also about the price of closing that 
> digital divide. Do we need to have "big fat pipes" just because we as a 
> industry were building and optimising everything within the Internet 
> infrastructure for bandwidth, we taught our customers that "faster speed 
> package" is the solution to all their problems and so on? It's about time to 
> fix that absolute BS narrative we have fallen for over time. 

Yes, we need a universal FTTH network... let's build this now for the 
next 100 years, instead of keeping on tinkering with small updates here and 
there... Light in fiber has multiple desirable advantages; a higher theoretical 
(and practical) capacity ceiling is only one of those (although the one that 
makes an FTTH network conceptually more future-proof). This is IMHO fact, not 
BS.
Other advantages of fiber are e.g. massively higher robustness against 
RF-interference (compared to DSL, DOCSIS, and wireless access techniques). This 
has an immediate latency consequence. If we look at DSL we see essentially a 4 
KHz clock that hence has a potential access latency floor on the order 
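
The 4 kHz clock mentioned here implies a symbol period of 0.25 ms; a minimal 
sketch of that arithmetic (reading the latency floor as roughly one symbol 
period is my assumption, since the archived message is cut off):

```python
# Symbol period implied by the ~4 kHz DSL clock mentioned above.
# Treating the access latency floor as "on the order of one symbol
# period" is an assumption on my part; the archived message is cut off.
clock_hz = 4_000
symbol_time_ms = 1_000 / clock_hz  # 0.25 ms per symbol/clock tick
print(f"{symbol_time_ms} ms")      # 0.25 ms
```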

Re: [NNagain] separable processes for live in-person and live zoom-like faces

2023-11-17 Thread Sebastian Moeller via Nnagain
> On Nov 17, 2023, at 18:27, rjmcmahon via Nnagain 
>  wrote:
> 
> The human brain is way too complicated for simplified analyses like "this is 
> the latency required".

[SM] On the sensory side this is not all that hard, e.g. we can (and 
routinely do) measure how long it takes after stimulus onset until neurons 
start to significantly change their firing rate, a value that often is 
described as "neuronal latency" or "response latency". While single unit 
electro-physiologic recordings in the human brain are rare, they are not 
unheard of, most neuronal data however comes from different species. However it 
crucially depends on the definition of "latency" one uses, and I am not sure we 
are talking about the same latency here?

> It's a vast prediction machine and much, much more.

[SM] Indeed ;) and all of this in a tightly interwoven network where 
reductionism only carries so far. Still we have come a long way and gained 
educated glimpses into some of the functionality.

> 
> I found at least three ways to understand the brain;

[SM] You are ahead of me then, I still struggle to understand the brain 
;) (fine by me, there are questions big enough that one needs to expect that 
they will stubbornly withstand attempts at getting elegant and helpful 
answers/theories; for me "how does the brain work" is one of those)

> 
> 1) Read A Thousand Brains: A New Theory of Intelligence
> 2) Make friends with high skilled psychologists, people that assist world 
> athletes can be quite good
> 3) Have a daughter study neuroscience so she can answer my basic question 
> from an expert position

[SM] All seem fine, even though 3) is a bit tricky to replicate.

Regards
Sebastian
> Bob
>> sending again as my server acted up on this url, I think. sorry for the 
>> dup...
>> -- Forwarded message -
>> From: Sebastian Moeller 
>> Date: Fri, Nov 17, 2023 at 3:45 AM
>> Subject: Re: [NNagain] separable processes for live in-person and live
>> zoom-like faces
>> To: Network Neutrality is back! Let´s make the technical aspects heard
>> this time! 
>> Cc: , Dave Täht 
>> Hi Dave, dear list
>> here is the link to the paper's web page:
>> h++ps://direct.mit.edu/imag/article/doi/10.1162/imag_a_00027/117875/Separable-processes-for-live-in-person-and-live
>> from which it can be downloaded.
>> This fits right in my wheel house# ;) However I am concerned that the
>> pupil diameter differs so much between the tested conditions, which
>> implies significant differences in actual physical stimuli, making the
>> whole conclusion a bit shaky*)... Also placing the true face at twice
>> the distance of the "zoom" screens while from an experimentalist
>> perspective understandable, was a sub-optimal decision**.
>> Not a bad study (rather the opposite), but as so often poses even more
>> detail question than it answers. Regarding your point about latency,
>> this seems not well controlled at all, as all digital systems will
>> have some latency and they do not report anything substantial:
>> "In the Virtual Face condition, each dyad watched their partner’s
>> faces projected in real time on separate 24-inch 16 × 9 computer
>> monitors placed in front of the glass"
>> I note technically in "real-time" only means that the inherent delay
>> is smaller than what ever delay the relevant control loop can
>> tolerate, so depending on the problem at hand "once-per-day" can be
>> fully real-time, while for other problems "once-per-1µsec" might be
>> too slow... But to give a lower bound delay number, they likely used a
>> web cam (the paper I am afraid does not say specifically) so at best
>> running at 60Hz (or even 30Hz) rolling shutter, so we have a) a
>> potential image distortion from the rolling shutter (probably small
>> due to the faces being close to at rest) and a "lens to RAM" delay of
>> 1000/60 = 16.67 milliseconds. Then let's assume we can get this pushed
>> to the screen ASAP, we will likely incur at the very least 0.5 refresh
>> times on average for a total delay of >= 25ms. With modern "digital"
>> screens that might be doing any fancy image processing (if only to
>> calculate "over-drive" voltages to allow or faster gray-to-gray
>> changes) the camera to eye delay might be considerably larger (adding
>> a few frame times). This is a field where older analog systems could
>> operate with much lower delay...
>> I would assume that compared to the neuronal latencies of actually
>> extracting information from the faces (it takes ~74-100ms to drive
>> neurons in the more anterior face patches in macaques, and human
>> brains are noticeably larger) this delay will be smallish, but it will
>> certainly be only encountered for the "live" and not for the in-person
>> faces.
>> Regards
>>Sebastian
>> P.S.: In spite of my arguments I like the study, it is much easier to
>> pose challenges to a study than to find robust and reliable solutions
>> to the same challenges ;)
>> #) 
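
The camera-to-eye delay budget sketched in the quoted message can be written 
out as a short back-of-the-envelope calculation (a sketch only; the 60 Hz 
capture and refresh rates are the message's working assumptions, not figures 
reported by the paper):

```python
# Lower bound on "lens to eye" delay for a webcam-plus-monitor setup,
# following the reasoning in the quoted message. All rates are
# assumptions; the paper does not report the hardware used.

def camera_to_eye_delay_ms(capture_hz=60.0, refresh_hz=60.0):
    """Return a rough lower bound on glass-to-glass delay in ms."""
    frame_time_ms = 1000.0 / capture_hz        # "lens to RAM": one capture frame
    scanout_wait_ms = 0.5 * (1000.0 / refresh_hz)  # average wait for next refresh
    return frame_time_ms + scanout_wait_ms

print(f"{camera_to_eye_delay_ms():.1f} ms")                   # 25.0 ms at 60 Hz
print(f"{camera_to_eye_delay_ms(capture_hz=30.0):.1f} ms")    # 41.7 ms at 30 Hz capture
```

Any display-side image processing (e.g. overdrive calculation) would add whole 
frame times on top of this floor, as the message notes.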

Re: [NNagain] The rise and fall of the 90's telecom bubble

2023-11-16 Thread Sebastian Moeller via Nnagain
Update, mmmh,

Virginia is apparently not only for 'lovers' but also for LTE: along the trip 
on the Silver Line to Dulles, my phone reported 4G, aka LTE, while in downtown 
DC it was EDGE-only...

Regards
 Sebastian

On 14 November 2023 13:06:39 CET, Sebastian Moeller via Nnagain 
 wrote:
>Hi Richard,
>
>
>> On Nov 13, 2023, at 16:08, Dick Roy via Nnagain 
>>  wrote:
>> 
>>  
>>  
>> -Original Message-
>> From: Nnagain [mailto:nnagain-boun...@lists.bufferbloat.net] On Behalf Of 
>> Sebastian Moeller via Nnagain
>> Sent: Monday, November 13, 2023 6:15 AM
>> To: Network Neutrality is back! Let´s make the technical aspects heard this 
>> time!
>> Cc: Sebastian Moeller
>> Subject: Re: [NNagain] The rise and fall of the 90's telecom bubble
>>  
>> Hi Jason,
>>  
>>  
>> > On Nov 13, 2023, at 08:54, Livingood, Jason via Nnagain 
>> >  wrote:
>> > 
>> > > Would love to spend some time thinking together about what a smart 
>> > > manufacturing system would look like in terms of connectivity, latency, 
>> > > compute availability, anything that occurs to you. I know a guy who does 
>> > > devops for factories, and he has amazing stories -- might be good to 
>> > > make that connection as well. 
>> >  
>> > One of the L4S (low latency, low loss, scalable throughput) demos that 
>> > Nokia did at a recent IETF hackathon showed a simulated 5G access network 
>> > to do low latency remote control of cranes in an industrial port facility. 
>> > It seemed like one of their points was that you could remotely operate 
>> > cargo container movements with the crane via a remote workforce over a low 
>> > delay network connection - even with fairly limited bandwidth (they’d 
>> > adjust the throughput down to just a few hundred kbps).
>> >  
>> > While they did not say much more, I could envision a port operator being 
>> > able to gain more efficiency by enabling a skilled operator to control 
>> > cranes at several ports around the world on an as-needed basis (vs. being 
>> > based in 1 port and having some downtime or low utilization of their 
>> > skills/training), even from the comfort of home.
>>  
>>  
>>   I would stop doing business with such ports... there clearly are 
>> accidents (or sabotage/jamming) just waiting to happen using wireless 
>> connections for such use-cases... Yes, I understand that that is what Nokia 
>> sells, so everything looks like a nail to them, but really "caveat emptor", 
>> just because something can be done does not mean it should be done as 
>> well... 
>>  
>> Regards
>>   Sebastian
>>  
>> P.S.: Currently in the US for a conference, getting reminded how shitty 
>> GSM/LTE can be, heck the conference WiFi (with 25K attendees) is more 
>> responsive than GSM... I am sure 5G might be better, but my phone is LTE 
>> only...
>> [RR] Welcome to the “club”!  We in the US have been dealing with this for 
>> over 30 years … why you ask ... answer … CDMA and the IPR behind it!  It 
>> was and still is “all about the money!”. My phone has 5G and when download 
>> rates plummet to the floor, all I have to do is look at the top of the 
>> display, and lo and behold … I’m on 5G!!! If you believe 5G is going to be 
>> better, I have a bridge for you that “is going to be s much better” JJJ
>
>   All good explanations for what I see, yet this is happening in the 
> capital... (but truth be told, when I bought this phone I did not pay much 
> attention to which bands it was suited for, it is not impossible that it is at 
> least partly my phone's fault that I am connecting with EDGE speeds, quite 
> the throw-back to the 2000s ;) but back then EDGE was indeed cutting edge). 
>About that bridge, I hope this is in NY city?
>
>
>
>Regards
>   Sebastian
>
>
>>  
>> RR
>>  
>>  
>>  
>>  
>> >  
>> > Jason
>> >  
>> > ___
>> > Nnagain mailing list
>> > Nnagain@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/nnagain
>>  
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>
>___
>Nnagain mailing list
>Nnagain@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/nnagain

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Jack,

My argument is that this is not a hard- or software problem, but a wetware 
problem: hard to shake off millions of years of evolution. And IIRC during 
covid, didn't the IETF do online-only meetings?

I am not saying video conferencing is doomed; it came a long way in the covid 
years and is 'here to stay', but it will only replace face-to-face meetings 
under some conditions, is all I am saying.

On 14 November 2023 14:27:28 GMT-05:00, Jack Haverty  wrote:
>In the beginning days of the Arpanet, circa early 1970s, ARPA made a policy 
>decision about use of the Arpanet.  First, Arpa Program Managers, located on 
>the East Coast of the US, were assigned computer accounts on USC-ISIA, located 
>on the West Coast in LA. Thus to do their work, exchanging email, editing 
>documents, and such, they had to *use* the Arpanet to connect their terminals 
>in Washington to the PDP-10 in California - 3000 miles away.
>
>Second, ARPA began requiring all of their contractors (researchers at 
>Universities etc.) to interact with Arpa using email and FTP. If your site was 
>"on the Arpanet", you had to use the Arpanet.  If you wanted your proposal for 
>next year's research to be funded, you had to submit your proposal using the 
>net.
>
>This policy focused profound attention, by everyone involved, on making the 
>Arpanet work and be useful as a collaboration tool.
>
>JCR Licklider (aka Lick) was my advisor at MIT, and then my boss when I joined 
>the Research Staff.   Lick had been at ARPA for a while, promoting his vision 
>of a "Galactic Network" that resulted in the Arpanet as a first step.  At MIT, 
>Lick still had need for lots of interactions with others.   My assignment was 
>to build and operate the email system for Lick's group at MIT on our own 
>PDP-10. Lick had a terminal in his office and was online a lot.   If email 
>didn't work, I heard about it.   If the Arpanet didn't work, BBN heard about 
>it.
>
>This pressure was part of Arpa policy.   Sometimes it's referred to as "eating 
>your own dog food" -- i.e., making sure your "dog" will get the same kind of 
>nutrition you enjoy.   IMHO, that pressure policy was important, perhaps 
>crucial, to the success of the Arpanet.
>
>In the 70s, meetings still occurred, but a lot of progress was made through 
>the use of the Arpanet.   You can only do so much with email and file 
>interactions.  Today, the possibilities for far richer interactions are much 
>more prevalent.   But IMHO they are held back, possibly because no one is 
>feeling the pressure to "make it work". Gigabit throughputs are common, but 
>why does my video and audio still break up...?
>
>It's important to have face-to-face meetings, but perhaps if the IETF 
>scheduled a future meeting to be online only, whatever needs to happen to make 
>it work would happen?  Perhaps...
>
>Even a "game" might drive progress.  At Interop '92, we resurrected the old 
>"MazeWars" game using computers scattered across the show exhibit halls.  The 
>engineers in the control room above the floor felt the pressure to make sure 
>the Game continued to run.  At the time, the Internet itself was too slow for 
>enjoyable gameplay at any distance.   Will the Internet 30 years later work?
>
>Or perhaps the IETF, or ISOC, or someone could take on a highly visible demo 
>involving non-techie end users.   An online meeting of the UN General 
>Assembly?   Or some government bodies - US Congress, British Parliament, etc.
>
>Such an event would surface the issues, both technical and policy, to the 
>engineers, corporations, policy-makers, and others who might have the ability 
>and interest to "make it work".
>
>Jack
>
>
>On 11/14/23 10:10, Sebastian Moeller wrote:
>> Hi Jack,
>> 
>> 
>>> On Nov 14, 2023, at 13:02, Jack Haverty via 
>>> Nnagain  wrote:
>>> 
>>> If video conferencing worked well enough, they would not have to all get 
>>> together in one place and would instead hold IETF meetings online ...?
>>  [SM] Turns out that humans are social creatures, and some things work 
>> better face-to-face and in the hallway (and if that is only building trust 
>> and sympathy) than over any remote technology.
>> 
>> 
>>> Did anyone measure latency?   Does anyone measure throughput of "useful" 
>>> traffic - e.g., excluding video/audio data that didn't arrive in time to be 
>>> actually used on the screen or speaker?
>>  [SM] Utility is in the eye of the beholder, no?
>> 
>> 
>>> Jack Haverty
>>> 
>>> 
>>> On 11/14/23 09:25, Vint Cerf via Nnagain wrote:
 if they had not been all together they would have been consuming tons of 
 video capacity doing video conference calls
 
 :-))
 v
 
 
 On Tue, Nov 14, 2023 at 10:46 AM Livingood, Jason via 
 Nnagain  wrote:
 On the subject of how much bandwidth does one household need, here's a fun 
 stat for you.
 
   At the IETF’s 118th meeting last week (Nov 4 – 10, 2023), there were 
 over 1,000 engineers in attendance. At 

Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Jack,


> On Nov 14, 2023, at 13:02, Jack Haverty via Nnagain 
>  wrote:
> 
> If video conferencing worked well enough, they would not have to all get 
> together in one place and would instead hold IETF meetings online ...?

[SM] Turns out that humans are social creatures, and some things work 
better face-to-face and in the hallway (and if that is only building trust and 
sympathy) than over any remote technology.


> Did anyone measure latency?   Does anyone measure throughput of "useful" 
> traffic - e.g., excluding video/audio data that didn't arrive in time to be 
> actually used on the screen or speaker?

[SM] Utility is in the eye of the beholder, no?


> 
> Jack Haverty
> 
> 
> On 11/14/23 09:25, Vint Cerf via Nnagain wrote:
>> if they had not been all together they would have been consuming tons of 
>> video capacity doing video conference calls
>> 
>> :-))
>> v
>> 
>> 
>> On Tue, Nov 14, 2023 at 10:46 AM Livingood, Jason via Nnagain 
>>  wrote:
>> On the subject of how much bandwidth does one household need, here's a fun 
>> stat for you.
>> 
>>  
>> At the IETF’s 118th meeting last week (Nov 4 – 10, 2023), there were over 
>> 1,000 engineers in attendance. At peak there were 870 devices connected to 
>> the WiFi network. Peak bandwidth usage:
>> 
>>  • Downstream peak ~750 Mbps
>>  • Upstream ~250 Mbps
>>  
>> From my pre-meeting Twitter poll 
>> (https://twitter.com/jlivingood/status/1720060429311901873):
>> 
>> 
>> 
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>> 
>> 
>> -- 
>> Please send any postal/overnight deliveries to:
>> Vint Cerf
>> Google, LLC
>> 1900 Reston Metro Plaza, 16th Floor
>> Reston, VA 20190
>> +1 (571) 213 1346
>> 
>> 
>> until further notice
>> 
>> 
>> 
>> 
>> 
>> ___
>> Nnagain mailing list
>> 
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain



Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Jeremy,


> On Nov 14, 2023, at 12:58, Jeremy Austin via Nnagain 
>  wrote:
> 
> 
> 
> On Tue, Nov 14, 2023 at 6:46 AM Livingood, Jason via Nnagain 
>  wrote:
> On the subject of how much bandwidth does one household need, here's a fun 
> stat for you.
> 
>  
> 
> At the IETF’s 118th meeting last week (Nov 4 – 10, 2023), there were over 
> 1,000 engineers in attendance. At peak there were 870 devices connected to 
> the WiFi network. Peak bandwidth usage:
> 
>   • Downstream peak ~750 Mbps
>   • Upstream ~250 Mbps
> 
> How was this calculated? That's an unusually high ratio of up to down, so my 
> suspicion is that they aren't time correlated; they're also not /normal/ or 
> /evening/ peaks, I'm expecting.

[SM] Given that this is from a conference network, I think it is 
expected that the pattern does not match typical end-user traffic, no?
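
As a rough sanity check on the quoted figures (870 devices at peak, ~750 Mbps 
down / ~250 Mbps up), the average per-device share is tiny; a quick 
back-of-the-envelope sketch:

```python
# Average per-device share of the IETF 118 peak numbers quoted above.
devices = 870
down_peak_mbps = 750.0
up_peak_mbps = 250.0

down_per_device = down_peak_mbps / devices   # ~0.86 Mbps per device
up_per_device = up_peak_mbps / devices       # ~0.29 Mbps per device
ratio = down_peak_mbps / up_peak_mbps        # 3:1 down/up at peak

print(f"down: {down_per_device:.2f} Mbps/device, "
      f"up: {up_per_device:.2f} Mbps/device, ratio {ratio:.0f}:1")
```

Of course averages hide bursts; individual devices will briefly use far more 
than their mean share.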


> 
> There's a big difference between individual peaks of upload and aggregate 
> peaks of upload; most people aren't streaming high symmetric bandwidth 
> simultaneously.

[SM] Which is good; at current rates of over-subscription (or 
under-provisioning, for the glass-half-empty folks) such a shift in usage 
behavior likely would not result in happiness all around.


> Consequently a peak busy hour online load, I'm finding, is still much more 
> like 8:1 over all users (idle and active), in Preseem's data set.

[SM] But these are end users that have been trained over decades to 
operate on heavily asymmetric links and have adjusted their usage patterns to 
match the "possible", while we might expect the network experts at the IETF to 
have different expectations? (Then again, these folks likely also are users of 
normal home internet links.)
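
A minimal sketch of how a peak-busy-hour ratio like the 8:1 figure quoted 
above can be computed (the subscriber counts and rates below are invented for 
illustration; only the 8:1 result mirrors the quoted figure):

```python
# Illustrative peak-busy-hour oversubscription calculation.
# Subscriber counts and rates are invented for illustration only.
subscribers = 800
plan_rate_mbps = 100.0            # sold per-subscriber rate
aggregate_peak_mbps = 10_000.0    # measured aggregate at peak busy hour

sold_capacity_mbps = subscribers * plan_rate_mbps        # 80,000 Mbps sold
oversub_ratio = sold_capacity_mbps / aggregate_peak_mbps # 8.0 -> "8:1"

print(f"{oversub_ratio:.0f}:1")
```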


> 
> In addition to speed tests being, like democracy, the worst form of 
> government except for all the others that have been tried, it would be 
> instructive both for end users and ISPs to choose, agree on and understand 
> specific percentiles of expected performance at idle and at peak busy hour.

[SM] That will not be easy to achieve.


> Has anyone solved the math problem of distinguishing (from outside) a 
> constraint in supply from a reduction in demand?

[SM] Keep in mind that the former can cause the latter: if 
connectivity/responsiveness is too bad, people might shift to doing other 
things, thereby reducing the measurable load, but not the conceptual demand (as 
they might prefer a workable internet access).

Regards
Sebastian


> 
> Jeremy
> 
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain



Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Jason,

Thank you very much for the information!

Regards

On 14 November 2023 11:37:26 GMT-05:00, "Livingood, Jason" 
 wrote:
>> Joking aside, how representative is this ratio of users to peak traffic for 
>> what you know about residential users? I am not looking for anything more 
>> than a very coarse reply, like same order of magnitude or not ;)
>
>Based on my experience, this is representative in so far as: 
>- People tend to use less bandwidth than they think [1]
>- Downstream/upstream asymmetry remains prevalent / normal
>- The only real use of 1 Gbps is a speed test (artificial driver); there are 
>no user applications that place those demands naturally on the network
>
>Jason
>
>[1] In a way, more bandwidth is like an insurance policy for possible usage 
>and ensures capacity is not a constraint - and it has historically been a fair 
>proxy, if indirect, for QoE.
>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Dave,

On 14 November 2023 11:06:54 GMT-05:00, Dave Taht via Nnagain 
 wrote:
>As I noted also on the twitter thread for this, were I there, and
>dishonest, (particularly were gobs of money on the table) I could easily
>have permuted the bandwidth on both tests hugely upwards from a single
>laptop by running continuous speedtests. But speedtests are not what we do
>day in or out, and reflect normal usage not at all.
>
>The 83% of people (experts!!!) that were wrong is ... mindboggling.


[SM] The optimist in me reads this as only 27% being really off... I also note 
the options had an odd scaling, neither linear nor logarithmic, making it hard 
to make inferences.

>
>PS What wifi standard was at ietf? Is this still the old ciscos? The
>headline bandwidths claimed for any version of wifi drop dramatically at
>distance and with multiple users present.  So it might have taken a couple
>laptops out of the thousand there to move the stats in a perverse
>direction, now that I think about it.
>
>Thank you for doing this experiment! While there are certainly also cases
>were mass groupings of people totally saturate the underlying mac (more
>than the perceived bandwidth - I have seen congestion collapse and a sea of
>retransmits even in small wifi gatherings), the only number that seems a
>bit off  in your test from a typical residential/small office is the
>roughly 3.5x1 ratio between down and up. I am willing (for now) to put that
>down to engineers doing actual work, rather than netflix.

[SM] +1; the typical high down/up ratio for home users is partly a result of 
offering mostly heavily asymmetric links, users inherently learn what they can 
use a link for...

Regards
Sebastian



>
>I would so love to see more measurements like this at other wifi
>concentration points, in offices and coffee shops. Packet captures too
>
>On Tue, Nov 14, 2023 at 10:46 AM Livingood, Jason via Nnagain <
>nnagain@lists.bufferbloat.net> wrote:
>
>> On the subject of how much bandwidth does one household need, here's a fun
>> stat for you.
>>
>>
>>
>> At the IETF’s 118th meeting  last
>> week (Nov 4 – 10, 2023), there were over 1,000 engineers in attendance. At
>> peak there were 870 devices connected to the WiFi network. Peak bandwidth
>> usage:
>>
>>- Downstream peak ~750 Mbps
>>- Upstream ~250 Mbps
>>
>>
>>
>> From my pre-meeting Twitter poll (
>> https://twitter.com/jlivingood/status/1720060429311901873):
>>
>> [image: A screenshot of a chat Description automatically generated]
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

2023-11-14 Thread Sebastian Moeller via Nnagain
I guess I now am prepared to upgrade my home network into the ~1Gbps class, 
before inviting 1000 engineers over ;)

Joking aside, how representative is this ratio of users to peak traffic, given 
what you know about residential users? I am not looking for anything more than 
a very coarse reply, just same order of magnitude or not ;)
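Jason's quoted stats (870 devices at peak, ~750/250 Mbps peak usage) allow a quick back-of-the-envelope check on per-device shares. A sketch in Python; the figures are from the quoted stats, everything else is simple arithmetic:

```python
# Peak WiFi stats from IETF 118, as quoted in this thread.
peak_down_mbps = 750
peak_up_mbps = 250
devices_at_peak = 870

ratio = peak_down_mbps / peak_up_mbps  # down/up ratio at peak
avg_down_kbps = peak_down_mbps / devices_at_peak * 1000
avg_up_kbps = peak_up_mbps / devices_at_peak * 1000

print(f"down/up ratio at peak: {ratio:.1f}:1")                     # 3.0:1
print(f"average downstream per device: {avg_down_kbps:.0f} kbps")  # 862 kbps
print(f"average upstream per device: {avg_up_kbps:.0f} kbps")      # 287 kbps
```

So at peak each device averaged well under 1 Mbps in either direction, which puts the "how much capacity does one household need" question into perspective.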

On 14 November 2023 10:46:17 GMT-05:00, "Livingood, Jason via Nnagain" 
 wrote:
>On the subject of how much bandwidth does one household need, here's a fun 
>stat for you.
>
>
>At the IETF’s 118th meeting last week 
>(Nov 4 – 10, 2023), there were over 1,000 engineers in attendance. At peak 
>there were 870 devices connected to the WiFi network. Peak bandwidth usage:
>
>  *   Downstream peak ~750 Mbps
>  *   Upstream ~250 Mbps
>
>
>
>From my pre-meeting Twitter poll 
>(https://twitter.com/jlivingood/status/1720060429311901873):
>
>[A screenshot of a chat  Description automatically generated]

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] The rise and fall of the 90's telecom bubble

2023-11-14 Thread Sebastian Moeller via Nnagain
Hi Richard,


> On Nov 13, 2023, at 16:08, Dick Roy via Nnagain 
>  wrote:
> 
>  
>  
> -Original Message-
> From: Nnagain [mailto:nnagain-boun...@lists.bufferbloat.net] On Behalf Of 
> Sebastian Moeller via Nnagain
> Sent: Monday, November 13, 2023 6:15 AM
> To: Network Neutrality is back! Let´s make the technical aspects heard this 
> time!
> Cc: Sebastian Moeller
> Subject: Re: [NNagain] The rise and fall of the 90's telecom bubble
>  
> Hi Jason,
>  
>  
> > On Nov 13, 2023, at 08:54, Livingood, Jason via Nnagain 
> >  wrote:
> > 
> > > Would love to spend some time thinking together about what a smart 
> > > manufacturing system would look like in terms of connectivity, latency, 
> > > compute availability, anything that occurs to you. I know a guy who does 
> > > devops for factories, and he has amazing stories -- might be good to make 
> > > that connection as well. 
> >  
> > One of the L4S (low latency, low loss, scalable throughput) demos that 
> > Nokia did at a recent IETF hackathon showed a simulated 5G access network 
> > to do low latency remote control of cranes in an industrial port facility. 
> > It seemed like one of their points was that you could remotely operate 
> > cargo container movements with the crane via a remote workforce over a low 
> > delay network connection - even with fairly limited bandwidth (they’d 
> > adjust the throughput down to just a few hundred kbps).
> >  
> > While they did not say much more, I could envision a port operator being 
> > able to gain more efficiency by enabling a skilled operator to control 
> > cranes at several ports around the world on an as-needed basis (vs. being 
> > based in 1 port and having some downtime or low utilization of their 
> > skills/training), even from the comfort of home.
>  
>  
>   I would stop doing business with such ports... there clearly are 
> accidents (or sabotage/jamming) just waiting to happen using wireless 
> connections for such use-cases... Yes, I understand that that is what Nokia 
> sells, so everything looks like a nail to them, but really "caveat emptor", 
> just because something can be done does not mean it should be done as well... 
>  
> Regards
>   Sebastian
>  
> P.S.: Currently in the US for a conference, getting reminded how shitty 
> GSM/LTE can be, heck the conference WiFi (with 25K attendees) is more 
> responsive than GSM... I am sure 5G might be better, but my phone is LTE 
> only...
> [RR] Welcome to the “club”!  We in the US have been dealing with this for 
> over 30 years … why you ask ... answer … CDMA and the IPR behind it!  It 
> was and still is “all about the money!”. My phone has 5G and when download 
> rates plummet to the floor, all I have to do is look at the top of the 
> display, and lo and behold … I’m on 5G!!! If you believe 5G is going to be 
> better, I have a bridge for you that “is going to be so much better” :-) :-) :-)

All good explanations for what I see, yet this is happening in the 
capital... (but truth be told, when I bought this phone I did not pay much 
attention to which bands it was suited for, it is not impossible that it at 
least partly my phone's fault that I am connecting with EDGE speeds, quite the 
throw-back to the 2000s ;) but back then EDGE was indeed cutting edge). 
About that bridge, I hope this is in NY city?



Regards
Sebastian


>  
> RR
>  
>  
>  
>  
> >  
> > Jason
> >  
> > ___
> > Nnagain mailing list
> > Nnagain@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/nnagain
>  
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] The rise and fall of the 90's telecom bubble

2023-11-13 Thread Sebastian Moeller via Nnagain
Hi Jason,


> On Nov 13, 2023, at 08:54, Livingood, Jason via Nnagain 
>  wrote:
> 
> > Would love to spend some time thinking together about what a smart 
> > manufacturing system would look like in terms of connectivity, latency, 
> > compute availability, anything that occurs to you. I know a guy who does 
> > devops for factories, and he has amazing stories -- might be good to make 
> > that connection as well. 
>  
> One of the L4S (low latency, low loss, scalable throughput) demos that Nokia 
> did at a recent IETF hackathon showed a simulated 5G access network to do low 
> latency remote control of cranes in an industrial port facility. It seemed 
> like one of their points was that you could remotely operate cargo container 
> movements with the crane via a remote workforce over a low delay network 
> connection - even with fairly limited bandwidth (they’d adjust the throughput 
> down to just a few hundred kbps).
>  
> While they did not say much more, I could envision a port operator being able 
> to gain more efficiency by enabling a skilled operator to control cranes at 
> several ports around the world on an as-needed basis (vs. being based in 1 
> port and having some downtime or low utilization of their skills/training), 
> even from the comfort of home.


I would stop doing business with such ports... there clearly are 
accidents (or sabotage/jamming) just waiting to happen using wireless 
connections for such use-cases... Yes, I understand that that is what Nokia 
sells, so everything looks like a nail to them, but really "caveat emptor", 
just because something can be done does not mean it should be done as well... 

Regards
Sebastian

P.S.: Currently in the US for a conference, getting reminded how shitty GSM/LTE 
can be, heck the conference WiFi (with 25K attendees) is more responsive than 
GSM... I am sure 5G might be better, but my phone is LTE only... 


>  
> Jason
>  
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] NN review in the UK

2023-10-31 Thread Sebastian Moeller via Nnagain
Hi Dave,

This morphing is IMHO related to Brexit and an attempt to see how/if regulatory 
divergence from continental Europe can be converted into an economic advantage. 
 

The Ofcom positions do not seem all that far from European regulations on the 
factual level, while on a rhetorical level they try to look business-friendly... 
(without changes to UK law they can hardly do more).

My point is that the European NN regulations were never as strict and 
business-stifling as some here seem to presume.




On 31 October 2023 18:37:16 CET, Dave Taht  wrote:
>I am still looking for the history of this morphing...
>
>https://decoded.legal/blog/2023/10/ofcoms-new-guidance-on-open-internet--net-neutrality-including-zero-rating-and-traffic-management/
>
>On Tue, Oct 31, 2023 at 9:33 AM Dave Taht  wrote:
>>
>> This link is working now.
>>
>> https://www.ofcom.org.uk/consultations-and-statements/category-1/net-neutrality-review
>>
>> I had reached out to multiple folk I knew to fix it. It is hugely
>> ironic that we have run into multiple examples of both intentional and
>> unintentional censorship so far in our quest to find truths about
>> network neutrality all around the globe.
>>
>> Annoyed, I set up a server in london, and mirrored the site myself via
>> "wget -m" - a command line utility that lets you make complete copies
>> of websites shipped as part of most operating systems. ... Back in the
>> day when the open internet meant you can copy a website and read it
>> offline, easily...
>>
>> And then I shipped it all to my own laptop (where I can index it
>> myself), via another quite common tool, rsync. It took a while to do
>> that - started the rsync in america, and then finished it at a coffee
>> shop in vancouver... then I read the 5 pdfs and deleted the thing
>> because I needed the disk space.
>>
>> Seeing so many newer folk having missed JPB's observation that the
>> internet is a "copying machine" ... if only more people would point
>> out to those folk these basic tools exist, that cannot be banned, and
>> are genuinely useful
>>
>> OK... so...
>>
>> This now globally(? please test) accessible cloudflare instance for
>> ofcom is now throwing an error 429 (too many requests) so I no longer
>> have that ability to quickly mirror it that I had had only a few days
>> ago. Is this an improvement?
>>
>> Anyway, I can finally get towards commenting on the actual text. But
>> not today. I would like to see various statements written about
>> network neutrality in 2005, 2010, 2015, because it seems to be the
>> definition in the ofcom docs has morphed a lot towards being...
>> "reasonable", whatever that means.
>>
>>
>>
>> On Sat, Oct 28, 2023 at 3:01 AM Sebastian Moeller via Nnagain
>>  wrote:
>> >
>> > Dear All,
>> >
>> > I have been pointed at Ofcom's statement on Net neutrality for October 
>> > 2023:
>> >
>> > https://www.ofcom.org.uk/consultations-and-statements/category-1/net-neutrality-review
>> >
>> > Here is the meat of that statement sans the links at the end (the email 
>> > will be classified as spam if it contains too many links, I hope the one 
>> > above does not trigger it yet):
>> >
>> > Statement published 26 October 2023
>> >
>> > Net neutrality supports the ‘open internet’, ensuring that users of the 
>> > internet (both consumers and those making and distributing content) are in 
>> > control of what they see and do online – not the broadband or mobile 
>> > providers (otherwise known as internet service providers or ISPs). The net 
>> > neutrality rules make sure that the traffic carried across broadband and 
>> > mobile networks is treated equally and particular content or services are 
>> > not prioritised or slowed down in a way that favours some over others. We 
>> > want to make sure that as technology evolves and more of our lives move 
>> > online, net neutrality continues to support innovation, investment and 
>> > growth, by both content providers and ISPs.
>> >
>> > The current net neutrality rules are set out in legislation. Any changes 
>> > to the rules in future would be a matter for Government and Parliament. 
>> > Ofcom is responsible for monitoring and ensuring compliance with the rules 
>> > and providing guidance on how ISPs should follow them. In 2021 we started 
>> > a review of net neutrality.
>> >
>> > Our review has found that, in general, it has worked well and

Re: [NNagain] NN review in the UK

2023-10-30 Thread Sebastian Moeller via Nnagain

I wonder somewhat to what degree VF's motivation was closer to its own bottom 
line (so having an additional service dimension to monetize) than trying to 
help achieve its end-users' latency desires...

And that is to a degree fine with me as an end-user... an ISP might as well 
bill me (a bit) for proper download traffic shaping on my ingress, as long as 
the attractiveness of that service is not artificially enhanced by making the 
normal service worse... (that is if I can decide to run my own download 
shaping/scheduling/AQM or for similar responsiveness to off-load that to the 
ISP, I am game).

But as I understand it, such a service is already permissible under existing EU 
and UK rules (as stated by Ofcom; they cannot make new law, all they can do is 
clarify how the existing rules will be enforced/interpreted by them in 
their role as NRA).

Regards
Sebastian



> On Oct 30, 2023, at 16:12, Mike Conlow via Nnagain 
>  wrote:
> 
> +1. My understanding is the origins of this item in the NN review in the UK 
> is that  ISPs requested it because of lack of clarity around whether "premium 
> quality service" offerings violated NN rules.

[SM] Thanks for that piece of information, that makes a ton of sense 
and explains IMHO the tone of the document... (all the details I looked at are 
such that I might not have picked the precise positions but all seem pretty 
defensible and almost boringly balanced ;) )

Thanks & Regards
Sebastian


> See page 63-64 here. Screenshot below:
> 
> 
> 
> On Mon, Oct 30, 2023 at 10:26 AM Livingood, Jason via Nnagain 
>  wrote:
> On 10/28/23, 06:01, "Nnagain on behalf of Sebastian Moeller via Nnagain" 
>  > For example, people who use high quality virtual reality applications may 
> > want to buy a premium quality service, while users who mainly stream and 
> > browse the internet can buy a cheaper package. Our updated guidance 
> > clarifies that ISPs can offer premium packages, for example offering low 
> > latency, as long as they are sufficiently clear to customers about what 
> > they can expect from the services they buy.
> 
> Sigh. Wish more regulators knew about modern AQMs - we can have our cake and 
> eat it too. The solution above seems to pre-suppose the need for QoS but this 
> isn't a capacity problem. 
> 
> JL
> 
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


[NNagain] Net Neutrality report in Germany

2023-10-28 Thread Sebastian Moeller via Nnagain
Dear All,

after my UK post, here the corresponding document for Germany


Here is a link to the national regulatory agency (NRA) Bundesnetzagentur:


https://www.bundesnetzagentur.de/EN/Areas/Telecommunications/Companies/NetNeutrality/start.html

And here a link to the most recent report in english for the period of May 1st 
2022 until April 30th 2023:
https://www.bundesnetzagentur.de/SharedDocs/Downloads/EN/Areas/Telecommunications/Companies/MarketRegulation/NetNeutrality/Net%20Neutrality%20In%20Germany%20Annual%20Report%202022_2023.pdf?__blob=publicationFile=3

All in all this is less comprehensive and future-looking than the UK documents 
mentioned in my previous mail; it is, however, a clear report on the German NRA's 
activities regarding monitoring and enforcement of the net neutrality rules. 
(To spell out the obvious, Germany is still a member of the EU and hence is bound 
much more strictly by EU regulation 2015/2120 and BEREC's interpretation thereof.)

Regards
Sebastian

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


[NNagain] NN review in the UK

2023-10-28 Thread Sebastian Moeller via Nnagain
Dear All,

I have been pointed at Ofcom's statement on Net neutrality for October 2023: 

https://www.ofcom.org.uk/consultations-and-statements/category-1/net-neutrality-review

Here is the meat of that statement sans the links at the end (the email will be 
classified as spam if it contains too many links, I hope the one above does not 
trigger it yet):

Statement published 26 October 2023

Net neutrality supports the ‘open internet’, ensuring that users of the 
internet (both consumers and those making and distributing content) are in 
control of what they see and do online – not the broadband or mobile providers 
(otherwise known as internet service providers or ISPs). The net neutrality 
rules make sure that the traffic carried across broadband and mobile networks 
is treated equally and particular content or services are not prioritised or 
slowed down in a way that favours some over others. We want to make sure that 
as technology evolves and more of our lives move online, net neutrality 
continues to support innovation, investment and growth, by both content 
providers and ISPs.

The current net neutrality rules are set out in legislation. Any changes to the 
rules in future would be a matter for Government and Parliament. Ofcom is 
responsible for monitoring and ensuring compliance with the rules and providing 
guidance on how ISPs should follow them. In 2021 we started a review of net 
neutrality.

Our review has found that, in general, it has worked well and supported 
consumer choice as well as enabling content providers to deliver their content 
and services to consumers. However, there are specific areas where we provide 
more clarity in our guidance to enable ISPs to innovate and manage their 
networks more efficiently, to improve consumer outcome.

• ISPs can offer premium quality retail offers: Allowing ISPs to 
provide premium quality retail packages means they can better meet some 
consumers’ needs. For example, people who use high quality virtual reality 
applications may want to buy a premium quality service, while users who mainly 
stream and browse the internet can buy a cheaper package. Our updated guidance 
clarifies that ISPs can offer premium packages, for example offering low 
latency, as long as they are sufficiently clear to customers about what they 
can expect from the services they buy.
• ISPs can develop new ‘specialised services’: New 5G and full fibre 
networks offer the opportunity for ISPs to innovate and develop their services. 
Our updated guidance clarifies when they can provide ‘specialised services’ to 
deliver specific content and applications that need to be optimised, which 
might include real time communications, virtual reality and driverless vehicles.
• ISPs can use traffic management measures to manage their networks: 
Traffic management can be used by ISPs on their networks, so that a good 
quality of service is maintained for consumers. Our updated guidance clarifies 
when and how ISPs can use traffic management, including the different 
approaches they can take and how they can distinguish between different 
categories of traffic based on their technical requirements.
• Most zero-rating offers will be allowed: Zero-rating is where the 
data used by certain websites or apps is not counted towards a customer’s 
overall data allowance. Our updated guidance clarifies that we will generally 
allow these offers, while setting out the limited circumstances where we might 
have concerns.


I note, however, that when I try to access that page today I get a Cloudflare 
error:
Sorry, you have been blocked
You are unable to access ofcom.squizedge.cloud

Which might indicate that some parts of the network are not acting in good 
faith (or I was just unlucky with my current IP address).

I also note (as Ofcom does itself) that since Brexit the UK is no longer bound by the 
EU's regulation 2015/2120 (see 
https://eur-lex.europa.eu/legal-content/de/TXT/?uri=CELEX%3A32015R2120 ).

Regards
Sebastian

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Brendan Carr: "Six years ago, Americans lived through one of the greatest hoaxes in regulatory history...

2023-10-24 Thread Sebastian Moeller via Nnagain


> On Oct 24, 2023, at 21:21, Dave Taht via Nnagain 
>  wrote:
> 
> On Tue, Oct 24, 2023 at 11:21 AM the keyboard of geoff goodfellow via
> Nnagain  wrote:
>> 
>> ➔➔https://twitter.com/BrendanCarrFCC/status/1716558844384379163
> 
> Leaving aside the rhetoric, I believe the majority of these claims on
> this part of his post:
> 
> https://twitter.com/BrendanCarrFCC/status/1716884139226329512
> 
> to be true. Any one question this?

[SM] I question the inherent claim that this happened in a NN 
regulation-free environment, though; e.g. California added its own NN regulation in 
2018. A regulation that first had to win the uphill struggle against the FCC 
order foregoing Title II (which also tried, ultimately unsuccessfully, to argue 
that states have no authority to regulate the internet, while at the same time 
giving up the FCC's will to regulate)... I also got reminded that the FCC only 
went for Title II in ~2015, as courts had reined in FCC attempts to 
regulate ISPs under Title I.

This is a bit of the "Preparedness paradox" where a dire situation is 
predicted, everybody and their dog works hard to avoid it happening, the big 
problem is avoided (due to the hard work) and uninformed folks call the initial 
prediction a hoax... think Y2K and similar instances... Carr might have a point 
that initial prediction might have been a tad to bleak, but come on this is how 
politics works: you make your solution look (slightly) better and the 
alternatives (slightly) worse and hope to convince enough so that your view 
prevails, no?


> I do wish that he showed upload speeds, and latency under load, and,
> acknowledged some mistakes, at least, and did not claim perfect
> success.

[SM] But starting out in the first sentence painting the opposing view 
as a "hoax" makes it pretty unlikely that objective and reasoned data and 
analysis will follow... just an observation


> Also individual states had stepped up to institute their own
> rules, and I would love to see a comparison of those stats vs those
> that didn´t.

[SM] Also interesting, how many users ended up with state regulation 
and how many without... (trying to sell access to eyeballs gets tricky if the 
majority of the affluent ones end up in states with NN rules).


> The COVID thing I am most fiercely proud of, as an engineer, is we
> took an internet only capable of postage stamp 5 frame per sec[1]
> videoconferencing to something that the world, as a whole, relied on
> to keep civilization running only 7 years later, in the face of
> terrible odds, lights out environments, scarce equipment supplies, and
> illness. ISPs big and small helped too - Their people climbed towers,
> produced better code, rerouted networks, and stayed up late fighting
> off DDOSes. People at home shared their wifi and knowledge of how to
> make fiddly things on the net work well, over the internet  -
> 
> Nobody handed out medals for keeping the internet running, I do not
> remember a single statement of praise for what we did over that
> terrible time. No one ever looks up after a productive day after a
> zillion productive clicks and says (for one example) "Thank you Paul
> Vixie and Mockapetris for inventing DNS and Evan Hunt (bind) and Simon
> Kelly(dnsmasq) for shipping dns servers for free that only get it
> wrong once in a while, and then recover so fast you don´t notice" -
> there are just endless complaints from those for whom it is not
> working *right now* the way they expect.

[SM] There are a lot of unsung heroes in most types of engineering.


> There are no nobel prizes for networking.  But the scientists,
> engineers, sysadmins and SREs kept improving things, and are keeping
> civilization running. It is kind of a cause for me - I get very irked
> at both sides whining when if only they could walk a mile in a
> neteng´s shoes. I get respect from my neighbors at least, sometimes
> asked to fix a laptop or set up a router... and I still share my wifi.
> 
> If there was just some way to separate out the ire about other aspects
> of how the internet is going south (which I certainly share), and
> somehow put respect for those in the trenches that work on keeping the
> Net running, back in the public conversation, I would really love to
> hear it.

[SM] +1


> 
> [1] Really great talk on networking by Van Jacobson in 2012, both
> useful for its content, and the kind of quality we could only achieve
> then: https://archive.org/details/video1_20191129
> 
>> --
>> geoff.goodfel...@iconia.com
>> living as The Truth is True
>> 
>> ___
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> 
> 
> 
> -- 
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> 

Re: [NNagain] upgrading old routers to modern, secure FOSS

2023-10-23 Thread Sebastian Moeller via Nnagain
Hi Dave,


> On Oct 23, 2023, at 19:58, Dave Taht via Nnagain 
>  wrote:
> 
> On Mon, Oct 23, 2023 at 10:04 AM Dave Taht  wrote:
>> 
>> I loved that this guy and his ISP burned a couple weeks learning how
>> to build openwrt, built something exactly to the need, *had it work
>> the first time* and are in progress to update in place 200+ routers to
>> better router software, that just works, with videoconferencing, IPv6
>> support, and OTA functionality. No need for a truck roll, and while
>> the available bandwidth deep in these mountains in Mexico is meager,
>> it is now enough for most purposes.
>> 
>> https://blog.nafiux.com/posts/cnpilot_r190w_openwrt_bufferbloat_fqcodel_cake/
> 
> In looking over that blog entry again today I know I overfocus on the
> "bufferbloat" result, and the fact that he could indeed run a
> speedtest while maintaining a good videoconfernce, which I really wish
> more folk tested for. However it fails multiple checkboxes in the test
> results, which others might be more inclined to look at.
> 
> 4k video streaming: Failed. However this network is MORE than capable
> of 1024p streaming. 4k is difficult to discern except on large,
> expensive televisions. It was not all that long ago that 1024p was
> considered good enough, and IMHO, still is.

[SM] With my aging eyes I agree "full HD" aka 1920 by 1080 still looks 
plenty fine to me, even on our biggest screen (43"). However, the older I get the 
less picky I get; even SD resolution will not keep me from watching things if 
the content is compelling ;)
-> 4K streaming is reported as failure due to insufficient download 
capacity.


> Videoconferencing: Failed. Well, the test is wrong, probably having
> too low a bar for the upload as a cutoff. Videoconferencing needs oh,
> 500kb/sec to work decently, and only facetime tends to try for 4k.
> Having comprehensible voice, with a few video artifacts is ok,
> incomprehensible voice, is not.

[SM] Videoconferencing is reported as failure due to insufficient upload 
capacity; I am sure, though, that 10.6/3.46 Mbps will be enough for decent video 
conferencing for a single seat.

> 
> Low Latency gaming: Failed. The waveform test conflates two things
> that it shouldn't - the effects of bufferbloat (none, in this case),
> and the physical distance to the most local server, which was 70ms,
> where the cutoff is 50ms in this test.

[SM] The cutoff is reported as "95th Percentile Latency < 40 ms", which 
is indeed harsh.



Here is the expanded list of the grading rules:
We use the following criteria to determine if a particular service will work on 
your Internet connection. Of course, these criteria are far from perfect, but 
we think they’re a good general guideline.
• Web Browsing:
    • Download speed > 2 Mbps
    • Upload speed > 100 Kbps
    • Latency < 500 ms
• Audio Calls:
    • Download speed > 100 Kbps
    • Upload speed > 100 Kbps
    • 95th Percentile Latency < 400 ms
• 4K Video Streaming:
    • Download speed > 25 Mbps
• Video Conferencing:
    • Download speed > 10 Mbps
    • Upload speed > 5 Mbps
    • 95th Percentile Latency < 400 ms
• Low Latency Gaming:
    • Download speed > 10 Mbps
    • Upload speed > 3 Mbps
    • 95th Percentile Latency < 40 ms

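As a sketch, the grading rules above can be applied programmatically. The thresholds are the ones listed; the function and dictionary names are illustrative only (and the plain "Latency" criterion for web browsing is treated here as a 95th-percentile figure as well):

```python
# Hedged sketch: the grading thresholds quoted above, applied to a measured link.
# Speeds in Mbps, latency in ms; names and structure are illustrative only.
CRITERIA = {
    "web_browsing":       {"down": 2,   "up": 0.1, "p95_latency_ms": 500},
    "audio_calls":        {"down": 0.1, "up": 0.1, "p95_latency_ms": 400},
    "4k_streaming":       {"down": 25,  "up": 0,   "p95_latency_ms": None},
    "video_conferencing": {"down": 10,  "up": 5,   "p95_latency_ms": 400},
    "low_latency_gaming": {"down": 10,  "up": 3,   "p95_latency_ms": 40},
}

def grade(down_mbps, up_mbps, p95_latency_ms):
    """Return {service: pass/fail} for the criteria above."""
    result = {}
    for service, c in CRITERIA.items():
        ok = down_mbps > c["down"] and up_mbps > c["up"]
        if c["p95_latency_ms"] is not None:
            ok = ok and p95_latency_ms < c["p95_latency_ms"]
        result[service] = ok
    return result

# The link from the blog post: 10.6/3.46 Mbps, ~70 ms to the nearest server.
print(grade(10.6, 3.46, 70))
```

Grading the blog's measured link this way reproduces the reported failures: 4K streaming (download capacity), video conferencing (upload capacity), and low latency gaming (the 40 ms cutoff), with web browsing and audio calls passing.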


> I wish that the city-dwellers of BEAD so in love with fiber would
> insert 70ms of rural delay into all their testing.

[SM] In fiber a 70 ms RTT is good for 70 * 100 = 7000 km, that is a lot of 
latency; sure, there are delays other than propagation delay, but I wish 
we could wire up more rural areas with better topologies that avoid 7000 km 
detours... here, however, the issue might well be Cloudflare sparsity in MX; 
they only mention Mexico City and Queretaro... Mexico is quite large, but even 
then 70 ms indicates clear potential.
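The back-of-the-envelope behind that 7000 km figure, as a sketch: light travels at roughly 2/3 of c in fiber, i.e. about 200 km per millisecond one way, or 100 km per millisecond of RTT.

```python
# Propagation-delay rule of thumb: signals travel at ~2/3 the vacuum speed of
# light in fiber, i.e. about 200 km per millisecond one way.
SPEED_IN_FIBER_KM_PER_MS = 300_000 / 1000 * (2 / 3)  # ~200 km/ms

def max_path_km(rtt_ms):
    """Upper bound on one-way fiber path length for a given RTT,
    assuming the RTT were pure propagation delay."""
    one_way_ms = rtt_ms / 2
    return one_way_ms * SPEED_IN_FIBER_KM_PER_MS

print(round(max_path_km(70)))  # 7000, matching the 70 * 100 shortcut above
```

Since real RTTs also include queuing, serialization, and processing delay, the actual path is shorter; 7000 km is the upper bound implied by 70 ms.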

BUT I also think that we should be able to build an internet infrastructure 
that can cope decently with such delays!


> If someone would go
> to all these enormous conferences about BEAD, and do that, the need
> for cdns and uIXPs would become dramatically apparent in what they are
> building out.
> 
> https://blog.cloudflare.com/tag/latency/
> 
>> 
>> I have no idea how many of this model routers were sold or are still
>> deployed (?), but the modest up front cost of this sort of development
>> dwarves that of deployment. Ongoing maintenance is a problem, but at
>> least they are in a position now to rapidly respond to CVEs and other
>> problems when they happen, having "seized control of the methods of
>> computation" again.
>> 
>> OpenWrt is known to run on 1700 different models, already, (with easy
>> ports to obscure ones like this box) - going back over a decade in
>> some cases.
>> 
>> Another favorite story of mine was the ISP in New Zealand that
>> 

Re: [NNagain] nn announced

2023-10-19 Thread Sebastian Moeller via Nnagain
Hi Dave,

for the actual text see:
https://docs.fcc.gov/public/attachments/DOC-397309A1.pdf

Regards
Sebastian


> On Oct 19, 2023, at 19:58, Dave Taht via Nnagain 
>  wrote:
> 
> https://broadbandbreakfast.com/2023/10/fcc-moves-to-reinstate-net-neutrality-keeps-rules-open-for-comment/
> 
> I would like to find the actual remarks, before everyone brings out
> their own old knives, covered with ancient blood.
> 
> -- 
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> ___
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

___
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain


Re: [NNagain] Small ISP Carve Out

2023-10-18 Thread Sebastian Moeller via Nnagain
Hi Mr. Photons

> On Oct 18, 2023, at 09:08, le berger des photons via Nnagain 
>  wrote:
> 
> but an ISP and its clients should be able to agree together on anything which 
> they feel works for them.  The only rule that I see that should apply is that 
> there should be proper disclosure.  This rule already exists.  It is contract 
> law.  All of this regulation would make it impossible for an ISP and its 
> customers to do something that they all WANT to do. 

[SM] At the heart of the NN dispute is that some ISPs clearly did 
something only they wanted to do and which their customers disagreed with. This 
does not apply to all ISPs and all customers, but there clearly was abuse and 
hence reactive regulation... if the market players do not behave well enough by 
themselves, they risk the referee stepping in, so business as usual.


> 
> That would be like them deciding to dump their waste oil all over the ground 
> IF they had their own planet.  Though that would be perhaps stupid,  all of 
> the affected people agree to it,  they should be able to do it.  It's THEIR 
> planet!

[SM] This approach might work for agreements between equal partners, 
but history/experience shows that if one side has considerably more leverage it 
is likely to abuse that leverage. There is a reason why most human societies 
implement some "fairness" rules and try to enforce them. (Often "fairness" is 
restricted to small subsets of the population, but still the principle itself 
seems universal, or at least terran.)

Regards
Sebastian



> 
> On Tue, Oct 17, 2023 at 4:45 PM Livingood, Jason via Nnagain 
>  wrote:
> “Small Broadband Providers Urge FCC to Leave Them Out of Some Net Neutrality 
> Rules” See 
> https://broadbandbreakfast.com/2023/10/small-broadband-providers-urge-fcc-to-leave-them-out-of-some-net-neutrality-rules/.
>  My personal opinion is any rules should apply to all providers. After all, 
> my locally-owned small car mechanic does not get to opt out of EPA rules for 
> used motor oil disposal since they are small and have 4 employees and small 
> organic farms don’t get to opt out of food safety rules or labeling.
> 
>  
> 
> JL
> 
>  
> 
>  
> 



Re: [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction

2023-10-17 Thread Sebastian Moeller via Nnagain
Hi Richard,


> On Oct 16, 2023, at 20:04, Dick Roy  wrote:
> 
> Good points all, Sebastien.  How to "trade-off" a fixed capacity amongst many 
> users is ultimately a game theoretic problem when users are allowed to make 
> choices, which is certainly the case here.  Secondly, any network that can 
> and does generate "more traffic" (aka overhead such as ACKs NACKs and 
> retries) reduces the capacity of the network, and ultimately can lead to the 
> "user" capacity going to zero!  Such is life in the fast lane (aka the 
> internet).
> 
> Lastly, on the issue of low-latency real-time experience, there are many 
> applications that need/want such capabilities that actually have a net 
> benefit to the individuals involved AND to society as a whole.  IMO, 
> interactive gaming is NOT one of those.

[SM] Yes, gaming is one obvious example of a class of uses that work 
best with low latency and low jitter, though not necessarily an example of a 
use-case worthy enough to justify the work required to increase the 
responsiveness of the internet. Other examples are video conferences, VoIP, by 
extension of both musical collaboration over the internet, and, surprising to 
some, even plain old web browsing (a browser often needs to first fetch a page 
before it can follow links and load resources, and every fetch takes at best a 
single RTT). None of these are inherently beneficial or detrimental to 
individuals or society, but most can be used to improve the status quo... I 
would argue that the last 4 years have made the relevance of interactive 
use-cases quite clear to a lot of folks...
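A back-of-the-envelope sketch (my own addition, not from the thread) of why 
RTT dominates web page loads; the round counts below are illustrative 
assumptions, not measurements:

```python
# Rough estimate of time spent purely waiting on round trips while loading
# a page, assuming the browser must fetch the HTML before it can discover
# and fetch further resources (so some fetches are unavoidably serial).
def rtt_budget_ms(rtt_ms, serial_fetch_rounds=3, handshake_rtts=2):
    """serial_fetch_rounds: HTML -> CSS/JS -> late-discovered assets.
    handshake_rtts: connection setup (roughly TCP + TLS)."""
    return (handshake_rtts + serial_fetch_rounds) * rtt_ms

for rtt in (10, 50, 200):
    print(rtt, rtt_budget_ms(rtt))  # 200 ms RTT alone costs a full second
```

Even with infinite capacity, the 200 ms path spends a second just on waiting, 
which is why latency rather than throughput limits perceived snappiness.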


>  OK, so now you know I don't engage in these time sinks with no redeeming 
> social value.:)

[SM] Duly noted ;)

> Since it is not hard to argue that just like power distribution, information 
> exchange/dissemination is "in the public interest", the question becomes "Do 
> we allow any and all forms of information exchange/dissemination over what is 
> becoming something akin to a public utility?"  FWIW, I don't know the answer 
> to this question! :)

[SM] This is an interesting question and one (only) tangentially 
related to network neutrality... it is more related to freedom of speech and 
limits thereof. Maybe a question for another mailing list? Certainly one 
meriting a topic change...


Regards
Sebastian

> 
> Cheers,
> 
> RR
> 
> -Original Message-
> From: Sebastian Moeller [mailto:moell...@gmx.de] 
> Sent: Monday, October 16, 2023 10:36 AM
> To: dick...@alum.mit.edu; Network Neutrality is back! Let´s make the 
> technical aspects heard this time!
> Subject: Re: [NNagain] transit and peering costs projections
> 
> Hi Richard,
> 
> 
>> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain 
>>  wrote:
>> 
>> Just an observation:  ANY type of congestion control that changes 
>> application behavior in response to congestion, or predicted congestion 
>> (ENC), begs the question "How does throttling of application information 
>> exchange rate (aka behavior) affect the user experience and will the user 
>> tolerate it?" 
> 
>   [SM] The trade-off here is, if the application does not respond (or 
> rather if no application would respond) we would end up with congestion 
> collapse where no application would gain much of anything as the network 
> busies itself trying to re-transmit dropped packets without making much 
> headway... Simplistic application of game theory might imply that individual 
> applications could try to game this, and generally that seems to be true, but 
> we have remedies for that available...
> 
> 
>> 
>> Given any (complex and packet-switched) network topology of interconnected 
>> nodes and links, each with possibly a different capacity and 
>> characteristics, such as the internet today, IMO the two fundamental 
>> questions are:
>> 
>> 1) How can a given network be operated/configured so as to maximize 
>> aggregate throughput (i.e. achieve its theoretical capacity), and
>> 2) What things in the network need to change to increase the throughput (aka 
>> parameters in the network with the largest Lagrange multipliers associated 
>> with them)?
> 
>   [SM] The thing is we generally know how to maximize (average) 
> throughput, just add (over-)generous amounts of buffering, the problem is 
> that this screws up the other important quality axis, latency... We ideally 
> want low latency and even more low latency variance (aka jitter) AND high 
> throughput... Turns out though that above a certain throughput threshold* 
> many users do not seem to care all that much for more throughput as long as 
> interactive use cases are sufficiently responsive... but high responsiveness 
> requires low latency and low jitter... This is actually a good thing, as that 
> means we do not necessarily aim for 100% utilization (almost requiring deep 
> buffers and hence resulting in compromised latency) but can get away with say 
> 80-90% where shallow buffers will do (or 

Re: [NNagain] transit and peering costs projections

2023-10-16 Thread Sebastian Moeller via Nnagain
Hi Richard,


> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain 
>  wrote:
> 
> Just an observation:  ANY type of congestion control that changes application 
> behavior in response to congestion, or predicted congestion (ENC), begs the 
> question "How does throttling of application information exchange rate (aka 
> behavior) affect the user experience and will the user tolerate it?" 

[SM] The trade-off here is, if the application does not respond (or 
rather if no application would respond) we would end up with congestion 
collapse where no application would gain much of anything as the network busies 
itself trying to re-transmit dropped packets without making much headway... 
Simplistic application of game theory might imply that individual applications 
could try to game this, and generally that seems to be true, but we have 
remedies for that available...
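A toy illustration of that response (my addition; a deliberately crude model, 
not a network simulator): flows that halve their rate on a congestion signal 
and probe up additively keep the shared link oscillating just below capacity 
instead of collapsing.

```python
C = 100.0  # shared bottleneck capacity (arbitrary units)

def aimd(rounds=200, flows=4):
    """Additive-increase / multiplicative-decrease for `flows` senders."""
    rates = [1.0] * flows
    for _ in range(rounds):
        if sum(rates) > C:                  # congestion: a loss signal
            rates = [r / 2 for r in rates]  # multiplicative decrease
        else:
            rates = [r + 1 for r in rates]  # additive increase (probe up)
    return sum(rates)

total = aimd()
print(round(total, 1))  # aggregate rate stays in a band around C
```

If the `rates = [r / 2 ...]` branch is removed, offered load grows without 
bound and every unit above C is a dropped packet that must be resent, which is 
the congestion-collapse regime the paragraph describes.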


> 
> Given any (complex and packet-switched) network topology of interconnected 
> nodes and links, each with possibly a different capacity and characteristics, 
> such as the internet today, IMO the two fundamental questions are:
> 
> 1) How can a given network be operated/configured so as to maximize aggregate 
> throughput (i.e. achieve its theoretical capacity), and
> 2) What things in the network need to change to increase the throughput (aka 
> parameters in the network with the largest Lagrange multipliers associated 
> with them)?

[SM] The thing is we generally know how to maximize (average) 
throughput: just add (over-)generous amounts of buffering. The problem is that 
this screws up the other important quality axis, latency... We ideally want low 
latency and, even more, low latency variance (aka jitter) AND high 
throughput... It turns out though that above a certain throughput threshold* 
many users do not seem to care all that much for more throughput as long as 
interactive use cases are sufficiently responsive... but high responsiveness 
requires low latency and low jitter... This is actually a good thing, as it 
means we do not necessarily aim for 100% utilization (which almost requires 
deep buffers and hence results in compromised latency) but can get away with 
say 80-90%, where shallow buffers will do (or rather where buffer filling stays 
shallow; there is IMHO still value in having deep buffers for rare events that 
need them).



*) This is not a hard physical law so the exact threshold is not set in stone, 
but unless one has many parallel users, something in the 20-50 Mbps range is 
plenty and that is only needed in the "loaded" direction, that is for pure 
consumers the upload can be thinner, for pure producers the download can be 
thinner.
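The utilization/latency trade-off above can be illustrated with textbook 
M/M/1 queueing (my addition; real links are not M/M/1, so treat the numbers as 
qualitative, not predictive):

```python
# M/M/1 mean time in system: W = 1 / (mu - lambda).
# Delay explodes as utilization -> 100%, which is why backing off to
# ~80-90% utilization keeps queues (and therefore latency) shallow.
def mean_delay_ms(utilization, service_rate_pps=10_000):
    lam = utilization * service_rate_pps  # arrival rate (packets/s)
    return 1000.0 / (service_rate_pps - lam)

for u in (0.5, 0.8, 0.9, 0.99):
    print(u, round(mean_delay_ms(u), 2))
```

Going from 90% to 99% utilization multiplies the mean delay tenfold in this 
model, while the throughput gain is only 10%, which matches the "80-90% is the 
sweet spot" argument above.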


> 
> I am not an expert in this field,

   [SM] Nor am I, I come from the wet-ware side of things so not even soft- 
or hard-ware ;)


> however it seems to me that answers to these questions would be useful, 
> assuming they are not yet available!
> 
> Cheers,
> 
> RR
> 
> 
> 
> -Original Message-
> From: Nnagain [mailto:nnagain-boun...@lists.bufferbloat.net] On Behalf Of 
> rjmcmahon via Nnagain
> Sent: Sunday, October 15, 2023 1:39 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard this 
> time!
> Cc: rjmcmahon
> Subject: Re: [NNagain] transit and peering costs projections
> 
> Hi Jack,
> 
> Thanks again for sharing. It's very interesting to me.
> 
> Today, the networks are shifting from capacity constrained to latency 
> constrained, as can be seen in the IX discussions about how the speed of 
> light over fiber is too slow even between Houston & Dallas.
> 
> The mitigations against standing queues (which cause bloat today) are:
> 
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and 
> TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane 
> ultimately informs the TCP source state machine so it can slow down or 
> pace effectively. This can be an earlier feedback signal and, if done 
> well, can inform the sources to avoid bottleneck queuing. There are 
> couple of approaches with ECN. Comcast is trialing L4S now which seems 
> interesting to me as a WiFi test & measurement engineer. The jury is 
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
> 
> The QoS priority approach per congestion is orthogonal by my judgment as 
> it's typically not supported e2e, many networks will bleach DSCP 
> markings. And it's really too late by my judgment.
> 
> Also, on clock sync, yes your generation did us both a service and 
> disservice by getting rid of the PSTN TDM clock ;) So IP networking 
> devices kinda ignored clock sync, which makes e2e one way delay (OWD) 
> measurements impossible. Thankfully, the GPS atomic clock is now 
> available mostly everywhere and many devices use TCXO oscillators so 
> it's possible to get clock sync and use oscillators that can minimize 
> drift. I pay $14 for a Rpi4 GPS chip with pulse 

Re: [NNagain] transit and peering costs projections

2023-10-15 Thread Sebastian Moeller via Nnagain
Hi Jack,

> On Oct 15, 2023, at 21:59, Jack Haverty via Nnagain 
>  wrote:
> 
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about latency.  
>  It's not just "rewarding" to have lower latencies; high latencies may make 
> VGV unusable.   Average (or "typical") latency as the FCC label proposes 
> isn't a good metric to judge usability.  A path which has high variance in 
> latency can be unusable even if the average is quite low.   Having your voice 
> or video or gameplay "break up" every minute or so when latency spikes to 500 
> msec makes the "user experience" intolerable.
> 
> A few years ago, I ran some simple "ping" tests to help a friend who was 
> trying to use a gaming app.  My data was only for one specific path so it's 
> anecdotal.  What I saw was surprising - zero data loss, every datagram was 
> delivered, but occasionally a datagram would take up to 30 seconds to arrive. 
>  I didn't have the ability to poke around inside, but I suspected it was an 
> experience of "bufferbloat", enabled by the dramatic drop in price of memory 
> over the decades.
> 
> It's been a long time since I was involved in operating any part of the 
> Internet, so I don't know much about the inner workings today. Apologies for 
> my ignorance
> 
> There was a scenario in the early days of the Internet for which we struggled 
> to find a technical solution.  Imagine some node in the bowels of the 
> network, with 3 connected "circuits" to some other nodes.  On two of those 
> inputs, traffic is arriving to be forwarded out the third circuit.  The 
> incoming flows are significantly more than the outgoing path can accept.
> 
> What happens?   How is "backpressure" generated so that the incoming flows 
> are reduced to the point that the outgoing circuit can handle the traffic?
> 
> About 45 years ago, while we were defining TCPV4, we struggled with this 
> issue, but didn't find any consensus solutions.  So "placeholder" mechanisms 
> were defined in TCPV4, to be replaced as research continued and found a good 
> solution.
> 
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was 
> defined; it was to be sent by a switching node back toward the sender of any 
> datagram that had to be discarded because there wasn't any place to put it.
> 
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields were 
> defined in IP.
> 
> TOS would allow the sender to distinguish datagrams based on their needs.  
> For example, we thought "Interactive" service might be needed for VGV 
> traffic, where timeliness of delivery was most important.  "Bulk" service 
> might be useful for activities like file transfers, backups, et al.   
> "Normal" service might now mean activities like using the Web.
> 
> The TTL field was an attempt to inform each switching node about the 
> "expiration date" for a datagram.   If a node somehow knew that a particular 
> datagram was unlikely to reach its destination in time to be useful (such as 
> a video datagram for a frame that has already been displayed), the node 
> could, and should, discard that datagram to free up resources for useful 
> traffic.  Sadly we had no mechanisms for measuring delay, either in transit 
> or in queuing, so TTL was defined in terms of "hops", which is not an 
> accurate proxy for time.   But it's all we had.
> 
> Part of the complexity was that the "flow control" mechanism of the Internet 
> had put much of the mechanism in the users' computers' TCP implementations, 
> rather than the switches which handle only IP. Without mechanisms in the 
> users' computers, all a switch could do is order more circuits, and add more 
> memory to the switches for queuing.  Perhaps that led to "bufferbloat".
> 
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a future 
> release that would introduce a "real" form of Backpressure and the ability to 
> handle different types of traffic.   Meanwhile, these rudimentary mechanisms 
> would provide some flow control. Hopefully the users' computers sending the 
> flows would respond to the SQ backpressure, and switches would prioritize 
> traffic using the TTL and TOS information.
> 
> But, being way out of touch, I don't know what actually happens today.  
> Perhaps the current operators and current government watchers can answer?:
> 
> 1/ How do current switches exert Backpressure to  reduce competing traffic 
> flows?  Do they still send SQs?

[SM] As far as I can tell SQ is considered a "failed" experiment at 
least over the open internet, as anybody can manufacture such quench messages 
and hence they pose an excellent DoS vector. In controlled environments 
however this idea keeps coming back (as it has the potential for faster 
signaling than piggy-backing a signal onto the forward packets and expecting 
the receiver to reflect the signals back to the sender). But instead over the 
internet we have the receivers detect either packet drops or explicit signals 
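The pattern described above (mark or drop in the forward path, have the 
receiver echo the signal back) can be sketched roughly as follows; this is a 
heavily simplified illustration of mine, not any specific TCP implementation:

```python
# Instead of the network sending Source Quench back toward the sender, the
# modern approach marks the forward packet (ECN CE) or drops it; the
# *receiver* then reflects the signal in its acknowledgements and the
# sender reacts, so only the endpoints need to trust each other.
def receiver_ack(packet):
    """Feedback the receiver piggybacks on its ACK."""
    if packet is None:          # dropped at the bottleneck
        return {"ack": False}   # sender infers loss (missing/dup ACKs)
    return {"ack": True, "ece": packet.get("ce", False)}  # echo CE mark

def sender_react(feedback, cwnd):
    """Halve the congestion window on loss or an echoed CE mark."""
    if not feedback["ack"] or feedback.get("ece"):
        return max(1, cwnd // 2)
    return cwnd + 1             # otherwise keep probing upward

print(sender_react(receiver_ack({"ce": True}), cwnd=10))   # -> 5
print(sender_react(receiver_ack(None), cwnd=10))           # -> 5
print(sender_react(receiver_ack({"ce": False}), cwnd=10))  # -> 11
```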

Re: [NNagain] Internet Education for Non-technorati?

2023-10-14 Thread Sebastian Moeller via Nnagain
Hi Bob,


> On Oct 13, 2023, at 19:20, rjmcmahon  wrote:
> 
> Hi Sebastian,
> 
> It was the ISP tech support over the phone. Trying to help install a home 
> network over the phone w/o a technician isn't easy.

[SM] Ah, okay. I would never even think about calling my ISP when 
considering changes to my home network (for one, I would rather MacGyver this, 
and also my ISP does not really offer that as a service), I guess different 
service offerings exist in different countries.


> In many U.S. states, smoke detectors are required to be no more that 30' 
> apart, must be AC powered, battery backed up and must communicate with one 
> another. The smoke sensor needs to be replaced every ten years max.

[SM] Interesting! Over here detectors are also mandatory (but with no 
distance or networking requirements; it is specific rooms like bedrooms that 
need to have one). Also no AC requirement over here.


> It's a good place to install remote radio heads, or even full blown APs, for 
> both internet access points and for life support sensors.

[SM] I agree, and with an AC requirement powering such APs/radio heads 
is not rocket science either, heck in a first iteration one might even use PLC 
to bring data to the APs...


> 10G NRE spends stopped over a decade ago. Early adopters aren't likely going 
> to wire 10G over copper in their homes.

[SM] Over here active 2.5 Gbps ethernet gear is just becoming cheap 
enough for enthusiasts to switch over to, and 2.5GBASE-T has the advantage of 
operating well even over most cat5 wiring (few homes I know will push anywhere 
close to the typical 100m copper ethernet limit; most will be fine with < 30m).


> 100G only goes 4 meters so copper really isn't an option for future proof 
> comm cable throughout buildings.

[SM] Indeed, but I am not 100% sure what use-case would justify going 
100Gbps in a typical home? Sure if one switches to fiber wiring and 100Gbps is 
only marginally more expensive than 1 or 10 Gbps why not? 

> Fiber to WiFi seems straight forward to me.

[SM] This might be related to your professional background though? ;) 
Just kidding, I think you are simply a few years ahead of the rest of us, as 
you know what is in the pipeline.


> People don't want to be leashed to plugs so the last meters have to be 
> wireless.

[SM] Yes and no. People did not bother about wiring office desks or 
even smart TVs, but smart phones and tablets are a different kettle of fish, as 
are laptops, which might be operated wired on the desk but wireless in the rest 
of the house. I also note that more and more laptops come without built-in 
ethernet (personally I detest that; an RJ45 jack is not so thick that a laptop 
body cannot be planned around it, leaving some more room for e.g. NVMe sockets 
or simplifying cooling a bit; ultra-thin is IMHO not really in the end-users' 
interest, but I digress).


> We need to standardized to the extent that we can on one wireless tech 
> (similar to Ethernet for wired) and a proposal is to use 802.11 since that's 
> selling in volume, driven by mobile hand sets.

[SM] Sure 802.11 is likely to stay by virtue of being relatively 
ubiquitous and by being generally already good enough for many use cases (with 
road-maps for tackling more demanding use-cases, and I very much include your 
fiwi proposal here).



> 
> Bob
>> Hi Bob,
>>> On Oct 12, 2023, at 17:55, Robert McMahon via Nnagain 
>>>  wrote:
>>> Hi David,
>>> The vendors I know don't roll their own os code either. The make their own 
>>> release still mostly based from Linux and they aren't tied to the openwrt 
>>> release process.
>>> I think GUIs on CPEs are the wrong direction. Consumer network equipment 
>>> does best when it's plug and play. Consumers don't have all the skills 
>>> needed to manage an in home packet network that includes wifi.
>>  [SM] That is both true, and (currently?) unachievable. To run a
>> network connected to the internet securely requires to make a number
>> of policy decisions trading-off the required/desired connectivity
>> versus the cost in security (either cost as effort of maintaining
>> security or cost in an increase in attack surface).
>>  The in-side the home situation, has IMHO drastically improved with
>> the availability of off-the-shelf mesh network gear from commercial
>> vendors, with easy to follow instructions and/or apps to find decent
>> AP placement.
>>  For structured wiring, I would agree that requires both an unusual
>> skill set (even though doing structured wiring itself is not hard,
>> just doing it in a way that blends into an apartment without signaling
>> DIY-ness is more involved).
>>> I recently fixed a home network for my inlaws. It's a combo of structured 
>>> wire and WiFi APs. I purchased the latest equipment from Amazon vs use the 
>>> ISP provided equipment. I can do this reasonably well because I'm familiar 
>>> with the chips inside.
>>> The online tech 

Re: [NNagain] Internet Education for Non-technorati?

2023-10-13 Thread Sebastian Moeller via Nnagain
Hi Bob,


> On Oct 13, 2023, at 06:31, rjmcmahon via Nnagain 
>  wrote:
> 
> Hi David,
> 
> I think we're looking at different parts of the elephant. I perceive huge 
> advances in WiFi (phy, dsp, radios, fems, etc.) and residential gateway chips 
> of late. Not sure the state of chips used by the openwrt folks here,

[SM] The core OpenWrt developers seem to be mostly software folks (who 
are occasionally hired by or cooperating with hardware companies), so in a 
sense OpenWrt uses those "chips" that are available in (cheapish) WiFi 
APs/routers on the market where the manufacturer is either opensource friendly 
(some NDAs seem to be acceptable at least to some of the developers) or where 
folks are eager enough to reverse engineer stuff. That leaves some large 
vendors pretty much out of the OpenWrt ecosystem... e.g. Broadcom has a 
reputation for being opensource unfriendly and hence has a lot of SoCs/chips 
that are not supported by the opensource OpenWrt mainline. I guess there might 
be vendor-private SDKs for Broadcom chips that are based on OpenWrt, but I am 
purely speculating... (I think there is mainline support for some/most? 
ethernet chips, and some, mostly older, WiFi, but modern WiFi or stuff like DSL 
seems not supported.)


> though they may be lagging a bit - not sure.
> 
> https://investors.broadcom.com/news-releases/news-release-details/broadcom-announces-availability-second-generation-wi-fi-7
> 
> Broadcom’s Wi-Fi 7 ecosystem product portfolio includes the BCM6765, 
> BCM47722, and BCM4390.
> 
> The BCM6765 is optimized for the residential Wi-Fi access point market. Key 
> features include:
> ...
> The BCM47722 is an enterprise access point platform SoC supporting Wi-Fi 7, 
> Bluetooth Low Energy, and 802.15.4 protocols. Key features include:
> ...
> The BCM4390 is a highly-integrated Wi-Fi 7 and Bluetooth 5 combo chip 
> optimized for mobile handset applications. Key features include:
> ...

[SM] Yeah, these are not supported by OpenWrt yet, and likely never 
will be, unless Broadcom changes its stance towards cooperation with opensource 
developers in that section of the market.

Regards
Sebastian


> 
> Bob
>> On Thu, 12 Oct 2023, rjmcmahon via Nnagain wrote:
>>> I looked at openwrt packages and iperf 2 is  at version 2.1.3 which is a 
>>> few years old.
>>> The number of CPE/AP systems to test against is quite large. Then throwing 
>>> in versions for backwards compatibility testing adds yet another vector.
>> for the market as a whole, yes, it's a hard problem. But for an
>> individual manfacturer, they only have to work with their equipment,
>> not all the others. The RF side isn't changing from release to release
>> (and usually the firmware for the Wifi isn't changing), so that
>> eliminates a lot of the work. They need to do more smoke testing of
>> new releases than a full regression/performance test. Some
>> incompatibility creeping in is the most likely problem, not a subtle
>> performance issue.
>> For the Scale conference, we have a Pi tied to a couple relays hooked
>> to the motherboard of the router we use and it's tied in to our github
>> repo, so every PR gets auto-flashed to the router and simple checks
>> done. Things like this should be easy to setup and will catch most
>> issues.
>> David Lang
>>> Since it's performance related, statistical techniques are required against 
>>> multiple metrics to measure statistically the same or not. Finally with 
>>> WiFi, one needs to throw in some controlled, repeatable RF variability 
>>> around the d-matrices (range) & h-matrices (frequency responses in both 
>>> phase and amplitudes per the MIMO spatial streams.)
>>> I can see why vendors (& system integrators) might be slow to adopt the 
>>> latest if there is not some sort of extensive qualification ahead of that 
>>> adoption.
>>> Bob
>>> PS. Iperf 2 now has 2.5+ million downloads (if sourceforge is to be 
>>> believed.) My wife suggested I write a book titled, "How to create software 
>>> with 2.5M downloads, a zero marginal cost to produce, and get paid zero 
>>> dollars!!" I suspect many openwrt & other programmers could add multiple 
>>> chapters to such a book.
 On Thu, Oct 12, 2023 at 9:04 AM rjmcmahon via Nnagain
  wrote:
> Sorry, my openwrt information seems to be incorrect and more vendors use
> openwrt then I realized. So, I really don't know the numbers here.
 There are not a lot of choices in the market. On the high end, like
 eero, we are seeing Debian derived systems, also some chromeOS
 devices. Lower end there is "buildroot", and forked openwrts like
 Meraki.
 So the whole home router and cpe market has some, usually obsolete,
 hacked up, and unmaintained version of openwrt at its heart, on
 everything from SFPs to the routers and a lot of iOt, despite many
 advancements and security patches in the main build.
 It would be my earnest hope, with a clear upgrade path, downstream

Re: [NNagain] Internet Education for Non-technorati?

2023-10-13 Thread Sebastian Moeller via Nnagain
Hi Bob,


> On Oct 12, 2023, at 17:55, Robert McMahon via Nnagain 
>  wrote:
> 
> Hi David, 
> 
> The vendors I know don't roll their own os code either. The make their own 
> release still mostly based from Linux and they aren't tied to the openwrt 
> release process. 
> 
> I think GUIs on CPEs are the wrong direction. Consumer network equipment does 
> best when it's plug and play. Consumers don't have all the skills needed to 
> manage an in home packet network that includes wifi.

[SM] That is both true and (currently?) unachievable. Running a network 
connected to the internet securely requires making a number of policy 
decisions trading off the required/desired connectivity against the cost in 
security (either cost as effort of maintaining security or cost as an increase 
in attack surface).
The inside-the-home situation has IMHO drastically improved with the 
availability of off-the-shelf mesh network gear from commercial vendors, with 
easy to follow instructions and/or apps to find decent AP placement.
For structured wiring, I would agree that requires both an unusual 
skill set (even though doing structured wiring itself is not hard, just doing 
it in a way that blends into an apartment without signaling DIY-ness is more 
involved).


> I recently fixed a home network for my inlaws. It's a combo of structured 
> wire and WiFi APs. I purchased the latest equipment from Amazon vs use the 
> ISP provided equipment. I can do this reasonably well because I'm familiar 
> with the chips inside.
> 
> The online tech support started with trepidation as he was concerned that the 
> home owner, i.e me, wasn't as skilled as the ISP technicians. He suggested we 
> schedule that but I said we were good to go w/o one. 

[SM] What "online tech support"? From the AP vendor or from the ISP? 
The latter might have a script recommending ISP technicians more for commercial 
considerations than technical ones...


> He asked to speak to my father in law when we were all done. He told him, 
> "You're lucky to have a son in law that know what he's doing. My techs aren't 
> as good, and I really liked working with him too."
> 
> I say this not to brag, as many on this list could do the equivalent, but to 
> show that we really need to train lots of technicians on things like RF and 
> structured wiring. Nobody should be "lucky" to get a quality in home network. 
>  We're not lucky to have a flush toilet anymore. This stuff is too important 
> to rely on luck.

[SM] Mmmh, that got me thinking: maybe we should always run network 
wiring parallel to electric cables so each power socket could easily house an 
ethernet jack as well (or one per room, to keep the cost lower and avoid overly 
much "dark" copper)? Sort of put this into building codes/best current practice 
documents... (I understand that starting now will still only solve the issue 
over many decades, but at least we would be making some inroads; and speaking 
of decades, maybe putting fiber there instead of copper might be a more 
future-oriented approach.)


> 
> Bob
> On Oct 11, 2023, at 3:58 PM, David Lang  wrote:
> On Wed, 11 Oct 2023, rjmcmahon wrote:
> 
>  I don't know the numbers but a guess is that a majority of SoCs with WiFi 
>  radios aren't based on openwrt.
> 
> From what I've seen, the majority of APs out there are based on OpenWRT or 
> one 
> of the competing open projects, very few roll their own OS from scratch
> 
>  I think many on this list use openwrt but 
>  that may not be representative of the actuals. Also, the trend is less sw in 
>  a CPU forwarding plane and more hw, one day, linux at the CPEs may not be 
>  needed at all (if we get to remote radio heads - though this is highly 
>  speculative.)
> 
> that is countered by the trend to do more (fancier GUI, media center, etc) 
> The 
> vendors all want to differentiate themselves, that's hard to do if it's baked 
> into the chips
> 
>  From my experience, sw is defined by the number & frequency of commits, and 
>  of timeliness to issues more than a version number or compile date. So the 
>  size and quality of the software staff can be informative.
> 
>  I'm more interested in mfg node process then the mfg location & date as the 
>  node process gives an idea if the design is keeping up or not. Chips 
> designed 
>  in 2012 are woefully behind and consume too much energy and generate too 
> much 
>  heat. I think Intel provides this information on all its chips as an example.
> 
> I'm far less concerned about the chips than the software. Security holes are 
> far 
> more likely in the software than the chips. The chips may limit the max 
> performance of the devices, but the focus of this is on the security, not the 
> throughput or the power efficiency (I don't mind that extra info, but what 
> makes 
> some device unsafe to use isn't the age of the chips, but the age of the 
> software)
> 
> David Lang
> 
>  Bob
>  On 

Re: [NNagain] Internet Education for Non-technorati?

2023-10-11 Thread Sebastian Moeller via Nnagain
Hi Bob,


> On Oct 11, 2023, at 20:49, rjmcmahon via Nnagain 
>  wrote:
> 
> Yes, EDCAs are multidimensional. The coupling of EDCA to a DSCP field, Access 
> Class (AC) or MAC queue is just an engineering thing. The EDCA is really just 
> for an upcoming access arbitration and doesn't have to be held constant. And 
> the values are under control of the WiFi BSS manager.
> 
> What's a WiFi BSS manager one might ask? It's an unfilled role that the 
> standards engineers assumed would occur, yet networking roles are 
> understaffed all over the planet. The default EDCAs are just some made up numbers 
> that have no simulation or other backing - though many think they're gold or 
> something - which they're not.
> 
> There are so many things at play just for WiFi performance yet alone e2e. I 
> wouldn't know where to start for a consumer label. Even marketing terms like 
> WiFi 6, 6e and 7 seem to mostly add confusion.
> 
> Then engineers design for the tests because what else can they do? And the 
> tests struggle to represent any kind of reality. Labels are likely going to 
> have a similar effect.

[SM] And this is why it matters how such labels are to be enforced... 
if the capacity numbers can be checked by end users against reference servers 
in a different AS, then any good-faith effort by engineers to make the test 
work sufficiently well will also help general internet access. By good faith 
I mean excluding stunts like ISPs disabling the per-user traffic 
shaper for the duration of a detected test, or treating test traffic with higher 
priority...

> Even the basics of capacity and latency are not understood by consumers.

[SM] There are efforts under way to make end users more conscious about 
latency issues though, like Apple's RPM metric.


> The voice engineers created mean opinion scores which I don't think consumers 
> ever cared about.

[SM] Why should they, if they can directly judge the quality of their 
VoIP calls? Sure, engineers need something easier to come by than asking end 
users about perceived quality, and hence invented a system to essentially 
predict user judgements based on a few measurable parameters. But as far as I 
understand, MOS is a simulation of end users that is better suited to the normal 
engineering process than doing psychoacoustic experiments with end users, no?
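
One concrete instance of such a prediction system is the ITU-T G.107 E-model, 
which condenses measurable impairments into a transmission rating factor R and 
maps R to an estimated MOS. A minimal sketch of just that mapping in Python 
(the R value used below is illustrative):

```python
def r_to_mos(r: float) -> float:
    """Map an E-model transmission rating factor R to an estimated MOS,
    using the R-to-MOS mapping from ITU-T G.107 (clamped at both ends)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# The G.107 default case (R around 93.2) corresponds to a MOS of about 4.4,
# i.e. "users satisfied" without anyone having been asked.
print(round(r_to_mos(93.2), 2))
```

This is exactly the "simulation of end users" point: the inputs to R (delay, 
loss, codec impairments) are all measurable in the lab, so no psychoacoustic 
experiment is needed per test run.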

> Then we talk about quality of experience (QoE) as if it were a mathematical 
> term, which it isn't.

[SM] Measuring experience, aka subjective perception, is simply a hard 
problem.

Regards
Sebastian


> 
> Bob
>> -Original Message-
>> From: Nnagain [mailto:nnagain-boun...@lists.bufferbloat.net] On Behalf
>> Of rjmcmahon via Nnagain
>> Sent: Wednesday, October 11, 2023 11:18 AM
>> To: Network Neutrality is back! Let´s make the technical aspects
>> heard this time!
>> Cc: rjmcmahon; Nick Feamster
>> Subject: Re: [NNagain] Internet Education for Non-technorati?
>> I've added many metrics around latency and one way delays (OWD) in
>> iperf
>> 2. There is no single type of latency, nor are the measurements
>> scalars.
>> (Few will understand violin plots or histograms on labels)
>> On top of that, a paced flow will have a different e2e latency
>> histogram
>> than an as fast as possible (AFAP) flow. They also drive different
>> WiFi
>> behaviors. Hence, it's not just a simple arrival rate and service time
>> anymore, even for queuing analysis. (Though Little's Law is pretty
>> cool
>> and useful for displacement ratings) Throw in BSS managed EDCAs and
>> all
>> bets are off.
>> [RR] Wouldn’t the issue of EDCAs (i.e. different queues for
>> different priority classes with different tx parameters for each)
>> just make the analysis (more) “multidimensional”?  Might it be
>> possible to model such scenarios as N different collocated
>> bridges/routers, one for each access category?  Does any of what I
>> just said make any sense in this context? :) :)
>> 
>> RR
>> Bob
>>> I think y'all are conflating two different labels here. The
>> nutrition
>>> label was one effort, now being deployed, the other is cybersecurity,
>>> now being discussed.
>>> On the nutrition front...
>>> We successfully fought against "packet loss" being included on the
>>> nutrition label, but as ghu is my witness, I have no idea if a
>> formal
>>> method for declaring "typical latency" was ever formally derived.
>> https://www.fcc.gov/document/fcc-requires-broadband-providers-display-labels-help-consumers
>>> On Wed, Oct 11, 2023 at 10:39 AM David Bray, PhD via Nnagain
>>>  wrote:
 I was at a closed-door event discussing these labels about two
>> weeks
 ago (right before the potential government shutdown/temporarily
 averted for now) - and it was non-attribution, so I can only
>> describe
 my comments:
 (1) the labels risk missing the reality that the Internet and
 cybersecurity are not steady state, which begs the question how
>> will
 they be updated

Re: [NNagain] Internet Education for Non-technorati?

2023-10-11 Thread Sebastian Moeller via Nnagain
HI Dave,


> On Oct 11, 2023, at 20:06, Dave Taht via Nnagain 
>  wrote:
> 
> I think y'all are conflating two different labels here. The nutrition
> label was one effort, now being deployed, the other is cybersecurity,
> now being discussed.
> 
> On the nutrition front...
> We successfully fought against "packet loss" being included on the
> nutrition label,

[SM] Not wanting to be contrarian, but I consider a number for random 
packet loss relevant. The Mathis equation allows one to predict the maximal 
achievable TCP rate over a link with random packet loss, so this has a clear 
bearing on a link's utility. I do agree that the TCP packet loss during an 
actual speedtest is much less interesting, but random packet loss over a quiet 
link is an important characteristic... I learned this the hard way when my ISP 
started to route my access link occasionally over a bad router with ~1% random 
packet loss (I managed to get my ISP's attention and over the course of a few 
months they managed to isolate and fix the issue, but I never got a post-mortem 
of what exactly had happened; all I know is that they dragged one of their 
hardware vendors into the diagnostics).
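
The prediction referred to here is the well-known Mathis et al. approximation, 
rate <= (MSS/RTT) * C/sqrt(p). A quick back-of-the-envelope in Python (the MSS, 
RTT, and loss values below are illustrative assumptions, not from the mail):

```python
from math import sqrt

def mathis_rate_bps(mss_bytes: float, rtt_s: float, loss: float,
                    c: float = sqrt(3 / 2)) -> float:
    """Mathis et al. approximation of the maximal steady-state TCP rate
    (bits per second) over a path with random packet-loss probability `loss`."""
    return (mss_bytes * 8 / rtt_s) * c / sqrt(loss)

# Illustrative: a 1460-byte MSS, 20 ms RTT, and 1% random loss cap a single
# TCP flow at roughly 7 Mbit/s, regardless of the link's nominal capacity.
print(f"{mathis_rate_bps(1460, 0.020, 0.01) / 1e6:.1f} Mbit/s")
```

This is why ~1% random loss on a quiet link is so damaging: the cap scales with 
1/sqrt(p), so the same link at 0.01% loss would support about ten times the rate.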


> but as ghu is my witness, I have no idea if a formal
> method for declaring "typical latency" was ever formally derived.

[SM] The easiest really is: set up reference servers outside of all 
eye-ball ISP ASes and distribute a measurement client that will measure against 
those servers... if you want to improve upon this, locate measurement servers 
in the ASes of the most important transit providers and randomly select one for 
each test (and report the endpoint location as part of the results).
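
That selection-and-reporting logic is small enough to sketch. The hostnames 
below are hypothetical placeholders, and the probe is just a TCP connect time 
rather than a full capacity test:

```python
import random
import socket
import time

# Hypothetical reference servers, each assumed to sit in a different transit AS.
REFERENCE_SERVERS = [
    ("ref1.example-transit-a.net", 443),
    ("ref2.example-transit-b.net", 443),
    ("ref3.example-ix.net", 443),
]

def pick_server(servers, rng=random):
    """Randomly select one reference server per test run, so no single
    endpoint can be special-cased by the ISP under test."""
    return rng.choice(servers)

def probe_connect_ms(host, port, timeout=3.0):
    """Crude latency probe: time a single TCP handshake to the server."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    host, port = pick_server(REFERENCE_SERVERS)
    # Report the endpoint alongside the result so runs stay comparable.
    print(f"measured against {host}:{port}")
```

Reporting the chosen endpoint with each result is the key part: it lets anyone 
aggregating the data separate per-server effects from per-ISP effects.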

Best Regards
Sebastian


> 
> https://www.fcc.gov/document/fcc-requires-broadband-providers-display-labels-help-consumers
> 
> On Wed, Oct 11, 2023 at 10:39 AM David Bray, PhD via Nnagain
>  wrote:
>> 
>> I was at a closed-door event discussing these labels about two weeks ago 
>> (right before the potential government shutdown/temporarily averted for now) 
>> - and it was non-attribution, so I can only describe my comments:
>> 
>> (1) the labels risk missing the reality that the Internet and cybersecurity 
>> are not steady state, which begs the question how will they be updated
>> (2) the labels say nothing about how - even if the company promises to keep 
>> your data private and secure - how good their security practices are 
>> internal to the company? Or what if the company is bought in 5 years?
>> (3) they use QR-codes to provide additional info, yet we know QR-codes can 
>> be sent to bad links so what if someone replaces a label with a bad link 
>> such that the label itself becomes an exploit?
>> 
>> I think the biggest risk is that these will be rolled out, some exploit will occur 
>> that the label didn't consider, consumers will be angry they weren't 
>> "protected", and now we are in even worse shape because the public's trust 
>> has gone further downhill, they are angry at the government, and the private 
>> sector feels like the time and energy they spent on the labels was for 
>> naught?
>> 
>> There's also the concern about how do startups roll-out such a label for 
>> their tech in the early iteration phase? How do they afford to do the extra 
>> work for the label vs. a big company (does this become a regulatory moat?)
>> 
>> And let's say we have these labels. Will only consumers with the money to 
>> purchase the more expensive equipment that has more privacy and security 
>> features buy that one - leaving those who cannot afford privacy and security 
>> bad alternatives?
>> 
>> On Wed, Oct 11, 2023 at 1:31 PM Jack Haverty via Nnagain 
>>  wrote:
>>> 
>>> A few days ago I made some comments about the idea of "educating" the
>>> lawyers, politicians, and other smart, but not necessarily technically
>>> adept, decision makers.  Today I saw a news story about a recent FCC
>>> action, to mandate "nutrition labels" on Internet services offered by ISPs:
>>> 
>>> https://cordcuttersnews.com/fcc-says-comcast-spectrum-att-must-start-displaying-the-true-cost-and-speed-of-their-internet-service-starting-april-2024/
>>> 
>>> This struck me as anecdotal, but a good example of the need for
>>> education.  Although it's tempting and natural to look at existing
>>> infrastructures as models for regulating a new one, IMHO the Internet
>>> does not work like the Food/Agriculture infrastructure does.
>>> 
>>> For example, the new mandates require ISPs to "label" their products
>>> with "nutritional" data including "typical" latency, upload, and
>>> download speeds.   They have until April 2024 to figure it out. I've
>>> never encountered an ISP who could answer such questions - even the ones
>>> I was involved in managing.  Marketing can of course create an answer,
>>> since "typical" is such a vague term.  Figuring out how to attach the
>>> physical label to their service product may be a problem.
>>> 
>>> Such labels may not be very 

Re: [NNagain] Internet Education for Non-technorati?

2023-10-11 Thread Sebastian Moeller via Nnagain
Hi Jack,


> On Oct 11, 2023, at 19:31, Jack Haverty via Nnagain 
>  wrote:
> 
> A few days ago I made some comments about the idea of "educating" the 
> lawyers, politicians, and other smart, but not necessarily technically adept, 
> decision makers.  Today I saw a news story about a recent FCC action, to 
> mandate "nutrition labels" on Internet services offered by ISPs:
> 
> https://cordcuttersnews.com/fcc-says-comcast-spectrum-att-must-start-displaying-the-true-cost-and-speed-of-their-internet-service-starting-april-2024/
> 
> This struck me as anecdotal, but a good example of the need for education.  
> Although it's tempting and natural to look at existing infrastructures as 
> models for regulating a new one, IMHO the Internet does not work like the 
> Food/Agriculture infrastructure does.
> 
> For example, the new mandates require ISPs to "label" their products with 
> "nutritional" data including "typical" latency, upload, and download speeds.  
>  They have until April 2024 to figure it out. I've never encountered an ISP 
> who could answer such questions - even the ones I was involved in managing.  
> Marketing can of course create an answer, since "typical" is such a vague 
> term.  Figuring out how to attach the physical label to their service product 
> may be a problem.

[SM] There are typically several ways to skin this specific cat ;) One 
is e.g. for the regulator to supply their own reference platform against which 
the contractually agreed-upon rate/latency/random loss numbers are measured. In 
the EU, BEREC summarizes its recommendations e.g. here:
https://www.berec.europa.eu/sites/default/files/files/document_register_store/2022/6/BoR_%2822%29_72_NN_regulatory_assessment_methodology_final.pdf
where it is especially recommended to measure against servers outside of the 
ISPs' networks... which makes a ton of sense for a product called internet 
access service, and not ISP-intranet access service ;)
Reading this document makes it clear that perfect is the enemy of the good 
and/or achievable in this matter.


> Such labels may not be very helpful to the end user struggling to find an ISP 
> that delivers the service needed for some interactive use (audio or video 
> conferencing, gaming, home automation, etc.)

[SM] Sure. Now if the applicable law is amended to:
a) allow the ISP to freely specify the rate numbers they promise to customers 
(in the different plans)
b) actually hold them accountable to deliver on these promised rates

the whole thing starts to make some sense... (In Germany, the only regulatory 
area where I looked closely enough, the law gives end users the right to 
immediate cancellation or to reduce their payment in proportion to the ratio 
of achieved rate versus contracted rate.) And all of the ISPs essentially follow 
the law; none went bankrupt because of this or lost all of their customers as 
far as I know... The point is such a scheme, while conceptually a bit unclean, 
can actually work pretty well in practice.


> Performance on the Internet depends on where the two endpoints are, the 
> physical path to get from one to the other, as well as the hardware, 
> software, current load, and other aspects of each endpoint, all outside the 
> ISPs' control or vision.   Since the two endpoints can be on different ISPs, 
> perhaps requiring one or more additional internediate ISPs, specifying a 
> "typical" performance from all Points A to all Points B is even more 
> challenging.

[SM] Sure, and since the product is internet access, ideally the test 
servers would be located all over the network in diverse ASes, but short of such 
an unobtainable perfect system it seems an acceptable fudge to simply create a 
reference server system that is not hosted by any eye-ball ISP and is well 
connected to all major transit suppliers and/or important local IXs.


> Switching to the transportation analogy, one might ask your local bus or rail 
> company what their typical time is to get from one city to another.   If the 
> two cities involved happen to be on their rail or bus network, perhaps you 
> can get an answer, but it will still depend on where the two endpoints are.  
> If one or both cities are not on their rail network, the travel time might 
> have to include use of other "networks" - bus, rental car, airplane, ship, 
> etc.   How long does it typically take for you to get from any city on the 
> planet to any other city on the planet?

[SM] We already hold transport companies accountable for extreme 
delays... (ever been stranded at an airport somewhere between your start and 
end point for an additional overnight stay?)


> IMHO, rules and regulations for the Internet need to reflect how the Internet 
> actually works.  That's why I suggested a focus on education for the decision 
> makers.

[SM] Sure, education does work. However, for the problem at hand it might 
make sense to look at already deployed "solutions" to 

Re: [NNagain] The non-death of DSL

2023-10-10 Thread Sebastian Moeller via Nnagain
Hi Bob.

On 10 October 2023 02:13:18 CEST, Robert McMahon  
wrote:
>Hi Sebastian,
>
>The NRE per chip starts at $100M. It's multiple semiconductors that now define 
>a network's and data center's capabilities. A small municipal overbuilder is not 
>a market maker.

[SM] Sure, a small outfit is essentially forced to use off-the-shelf 
components, though they might innovate a bit on the software side, like 
libreqos. But for one of the biggest current challenges, getting fiber out to 
all residences/businesses, is that really an issue?


>
>So yes, an overbuilder that can't fund ASIC NRE needs to be intimately aware 
>of both market dynamics and the state of engineering, of today, tomorrow and 
>the next 20-30 years as that's typically the life of the municipal bonds.

[SM] In a dark fiber model, this will not matter too much, no? For a lighted 
fiber approach, 20-30 years means 2-4 technology generations, which seems hard 
to predict... then again the biggest cost is likely the fiber access network; 
the active tech might be financeable out of the cash flow?


>
>Investors aren't govt. bond holders and investor owned companies can take more 
>risk. If low latency offerings don't increase ARPU, the investors lose. If it 
>works, they win. Big difference.

[SM] Building that fiber plant seems like a pretty safe bet to me, allowing for 
long-term financing. Interestingly, over here some insurance companies got into 
the FTTH build-out game, obviously considering it a viable long-term investment, 
though population density is higher here than in the US, likely affecting cost 
and amortisation periods...

>
>⁣Bob
>
>On Oct 9, 2023, 12:40 AM, at 12:40 AM, Sebastian Moeller  
>wrote:
>>Hi Bob,
>>
>>> On Oct 8, 2023, at 22:44, rjmcmahon  wrote:
>>>
>>> Yeah, I get it. I think we're just too early for a structural
>>separation model in comm infrastructure.
>>
>>[SM] I see one reason why we should not wait, and that is the
>>future-proofness of the eventually reached FTTH-deployment...
>>
>>
>>>
>>> I think when we get to mix & match DSP/optics and point to point
>>fiber in the OSPs, as done in data centers, it may change. But today
>>it's PON at best which implies a communal decision process vs
>>individual one.
>>
>>[SM] There are IMHO two components to the AON (Point to
>>Point/PtP)/PON (Point to MultiPoint/PtMP) debate:
>>a) PONs are cheaper to operate, as they require less power (on the ISP
>>side) and space (depending on where the passive splitters are located).
>>b) Structural PONs with splitters out in the field (to realize space
>>saving in the CO) are less flexible; one can always operate a PtP
>>plant as a PON, but converting a structural PON to AON likely requires
>>putting new fibers into the ground.
>>
>>The first part is something I am not too concerned about, the coming
>>FTTH access network is going to operate for decades, so I could not
>>care less about what active technology is going to be used in the next
>>decade (I assume that ISPs try to keep the same tech operational for ~ a
>>decade, but for PON that might be too pessimistic), the second part is
>>different though... micro-economics favor PtMP with splitters in the
>>field (lower up front cost* AND less potential for regulatory
>>intervention**) while the macro-economic perspective makes PtP more
>>attractive (offering more flexibility over the expected life time of
>>multiple decades).
>>
>>
>>
>>*) One big item: the cost of actually deploying the fiber is not all
>>that sensitive to fiber count; if you put fiber into the ground the
>>traditional way, typically the cost of the earth works dominates over
>>e.g. the cost of the individual fibers (not that this would stop
>>bean-counter types from still minimizing the number of fiber cables...).
>>
>>**) With PtP the potential exists that a regulator (likely not the FCC)
>>could force an ISP to offer dark-fibers to end customers at wholesale
>>prices, with PtMP the ISP having build out likely will stay in control
>>of the active tech in each segment (might be forced to offer bitstream
>>access***) so potential competitors will not be able to offer
>>better/faster technology on the shared fiber. That is, some of the PONs
>>are backward compatible, and in theory on the same PON tree one ISP
>>might be operating GPON while another ISP might theoretically offer
>>XGS-PON on the same segment, but I think this is a rather theoretical
>>construct unlikely to happen quantitatively...
>>
>>***) Not only control of the tech, but offering bitstream access likely
>>means a larger wholesale price as well.
>>
>>
>>> Communal actions, as seen in both LUS and Glasgow, can take decades
>>and once done, are slow to change.
>>
>>  [SM] LUS already offer symmetric 10G links... they do not seem to be
>>lagging behind, the main criticism seems to be that they are somewhat
>>more expensive than the big ISPs, which is not all that surprising
>>given that they will not be able to leverage scale effects all that
>>much simply by being small... Also 

Re: [NNagain] The non-death of DSL

2023-10-09 Thread Sebastian Moeller via Nnagain
Hi Bob,

> On Oct 8, 2023, at 22:44, rjmcmahon  wrote:
> 
> Yeah, I get it. I think we're just too early for a structural separation 
> model in comm infrastructure.

[SM] I see one reason why we should not wait, and that is the future-proofness 
of the eventually reached FTTH-deployment...


> 
> I think when we get to mix & match DSP/optics and point to point fiber in the 
> OSPs, as done in data centers, it may change. But today it's PON at best 
> which implies a communal decision process vs individual one.

[SM] There are IMHO two components to the AON (Point to Point/PtP)/PON 
(Point to MultiPoint/PtMP) debate:
a) PONs are cheaper to operate, as they require less power (on the ISP side) 
and space (depending on where the passive splitters are located).
b) Structural PONs with splitters out in the field (to realize space saving in 
the CO) are less flexible; one can always operate a PtP plant as a PON, but 
converting a structural PON to AON likely requires putting new fibers into the 
ground.

The first part is something I am not too concerned about, the coming FTTH 
access network is going to operate for decades, so I could not care less about 
what active technology is going to be used in the next decade (I assume that 
ISPs try to keep the same tech operational for ~ a decade, but for PON that 
might be too pessimistic), the second part is different though... 
micro-economics favor PtMP with splitters in the field (lower up front cost* 
AND less potential for regulatory intervention**) while the macro-economic 
perspective makes PtP more attractive (offering more flexibility over the 
expected life time of multiple decades).



*) One big item: the cost of actually deploying the fiber is not all that 
sensitive to fiber count; if you put fiber into the ground the traditional way, 
typically the cost of the earth works dominates over e.g. the cost of the 
individual fibers (not that this would stop bean-counter types from still 
minimizing the number of fiber cables...).

**) With PtP the potential exists that a regulator (likely not the FCC) could 
force an ISP to offer dark-fibers to end customers at wholesale prices, with 
PtMP the ISP having build out likely will stay in control of the active tech in 
each segment (might be forced to offer bitstream access***) so potential 
competitors will not be able to offer better/faster technology on the shared 
fiber. That is, some of the PONs are backward compatible, and in theory on the 
same PON tree one ISP might be operating GPON while another ISP might 
theoretically offer XGS-PON on the same segment, but I think this is a rather 
theoretical construct unlikely to happen quantitatively...

***) Not only control of the tech, but offering bitstream access likely means a 
larger wholesale price as well.


> Communal actions, as seen in both LUS and Glasgow, can take decades and once 
> done, are slow to change.

[SM] LUS already offer symmetric 10G links... they do not seem to be 
lagging behind, the main criticism seems to be that they are somewhat more 
expensive than the big ISPs, which is not all that surprising given that they 
will not be able to leverage scale effects all that much simply by being 
small... Also a small ISP likely can not afford a price war with a much larger 
company (that can afford to serve below cost in areas it competes with smaller 
ISPs in an attempt to drive those smaller ones out of the market, after which 
prices likely increase again).


> The decision process time vs tech timelines exacerbate this. Somebody has to 
> predict the future - great for investors & speculators, not so for regulators 
> looking backwards.

[SM] I am not convinced that investors/speculators actually do a much 
better job predicting the future; just look at how much VC is wasted on 
hare-brained schemes like NFTs/crypto currencies and the like? 

> Also, engineering & market cadence matching is critical and neither LUS nor 
> Glasgow solved that.

[SM] But do they need to solve that? Would it not be enough to simply 
keep offering something that in their service area is considered good enough by 
the customers? Whether they do or do not, I can not tell.


Regards
Sebastian


> 
> Bob
>> Hi Bob,
>>> On Oct 8, 2023, at 21:27, rjmcmahon  wrote:
>>> Hi Sebastian,
>>> Here's a good link on Glasgow, KY likely the first U.S. muni network 
>>> started around 1994. It looks like a one and done type investment. Their 
>>> offering was competitive for maybe a decade and now seems to have fallen 
>>> behind for the last few decades.
>>> https://www.glasgowepb.com/internet-packages/
>>> https://communitynets.org/content/birth-community-broadband-video
>>  [SM] Looks like they are using DOCSIS and are just about to go fiber;
>> not totally unexpected, it takes awhile to amortize the cost of say a
>> CMTS to go DOCSIS and only after that period you make some profit, so
>> many ISPs will be tempted to operate the active gear a bit longer
>> 

Re: [NNagain] The non-death of DSL

2023-10-09 Thread Sebastian Moeller via Nnagain
Hi Bob,


> On Oct 8, 2023, at 22:18, rjmcmahon  wrote:
> 
> Tragedy of the commons occurs because the demand & free price for the common 
> resources outstrips supply. Free cow grazing in Boston Commons only worked 
> for 70 cows and then collapsed.

[SM] Here is the thing, if the carrying capacity is/was 70 the local 
regulator would have needed to make sure that at no time there were more than 
70 cows and come up with a schedule... so from my vantage point that was 
insufficient regulation and/or enforcement... 

> Over fishing in multiple places today are killing off a "wild" food supply.

[SM] Same thing ;)

> The regulator tries to manage the demand while keeping prices artificially 
> low, typically for political/populism reasons, vs finding ways to increase or 
> substitute supply and create incentives for investment. In the U.S., they 
> seem to ultimately give up (regulatory capture is a form of resignation) and 
> let so-called privatization occur (barbed wire ranches throughout Texas vs 
> free roaming) which also allows ownership & market forces to come into play, 
> even if imperfectly.

[SM] We had a large helping of this over here as well during the 1990s 
neo-liberal "revolution", when European states privatized previously state 
owned property on the theory that in private hands this property would generate 
more income for all. It turns out the "all" in the promise was not the same all 
initially hoped for... in some cases these privatizations worked out OK-ish, in 
others not so much... I am old enough to remember the less than perfect sides 
of our old Bundespost monopoly telco, but I also see what is going wrong in the 
new shiny world of private telcos... (it was easier to steer a nationally owned 
telco in a macro-economically sensible direction; with privately owned 
companies, micro-economics often get in the way ;) )


> I do like the idea of a benevolent and all wise regulator that can move 
> society forward.
> I just don't see it in the U.S. We seem to struggle with a functional 
> Congress that can govern and an ethically based SCOTUS, which are not nearly 
> as nuanced as technology and the ongoing digital transformation.

[SM] Yes, given the apparent dysfunction and vitriol between the two 
sides over the last decade, getting things done for the future efficiently and 
bipartisanly looks a bit bleak... nasty, as from my perspective the US system is 
essentially "designed/evolved" to operate with two opposed parties that still 
manage to get things done together by compromising.


> Today, the FCC can only regulate decaying affiliate broadcast news and stays 
> silent about "news" distortions despite an insurrection that still threatens 
> the Republic.
>  Sorry to lose confidence in them but we need to see the world as it is.

[SM] I am with you here, the US media landscape looks quite hellish 
from over here (not that we do not have our own issues with increasing 
polarization in our society). Yet, what can the FCC do if Congress does not 
agree on what to do here... 

> 
> https://www.fcc.gov/media/radio/public-and-broadcasting
> 
> News Distortion.  The Commission often receives complaints concerning 
> broadcast journalism, such as allegations that stations have aired inaccurate 
> or one-sided news reports or comments, covered stories inadequately, or 
> overly dramatized the events that they cover.  For the reasons noted 
> previously, the Commission generally will not intervene in these cases 
> because it would be inconsistent with the First Amendment to replace the 
> journalistic judgment of licensees with our own.  However, as public 
> trustees, broadcast licensees may not intentionally distort the news.  The 
> FCC has stated that “rigging or slanting the news is a most heinous act 
> against the public interest.”  The Commission will investigate a station for 
> news distortion if it receives documented evidence of rigging or slanting, 
> such as testimony or other documentation, from individuals with direct 
> personal knowledge that a licensee or its management engaged in the 
> intentional falsification of the news.  Of particular concern would be 
> evidence of the direction to employees from station management to falsify the 
> news.  However, absent such a compelling showing, the Commission will not 
> intervene.

[SM] I guess, as noted, the First Amendment to the constitution is a 
pretty big issue here, making it hard to intervene in cases that are not clear 
beyond a reasonable doubt... IMHO the real solution is making sure people are 
well-educated enough to see through the cheap attempts at manipulating opinions, 
but that might be hoping too much, and certainly is not a short-term solution...

Sebastian


> 
> Bob
> 
>> Hi Bob,
>> thanks for the interesting discussion, I am learning a lot! I am
>> unsure whether the following is too direct
>>> On Oct 8, 2023, at 18:37, Robert McMahon  wrote:
>>> Hi Sebastian,
>>> 

Re: [NNagain] The non-death of DSL

2023-10-08 Thread Sebastian Moeller via Nnagain
Hi Bob,



> On Oct 8, 2023, at 21:27, rjmcmahon  wrote:
> 
> Hi Sebastian,
> 
> Here's a good link on Glasgow, KY likely the first U.S. muni network started 
> around 1994. It looks like a one and done type investment. Their offering was 
> competitive for maybe a decade and now seems to have fallen behind for the 
> last few decades.
> 
> https://www.glasgowepb.com/internet-packages/
> https://communitynets.org/content/birth-community-broadband-video

[SM] Looks like they are using DOCSIS and are just about to go fiber; 
not totally unexpected: it takes a while to amortize the cost of, say, a CMTS to 
go DOCSIS, and only after that period do you make some profit, so many ISPs will 
be tempted to operate the active gear a bit longer after break-even, as new 
active gear will likely not immediately generate surplus. The challenge is 
to decide when to upgrade...

My preferred model however is not necessarily having a communal ISP that sells 
internet access services (I am not against that), but have a communal built-out 
of the access network and centralize the lines (preferably fiber) in a few 
large enough local IXs, so internet access providers only need to bring their 
head-ends and upstream links to those locations to be able to offer services. 
In the beginning it makes probably sense to also offer some sort of GPON/XGSPON 
bit stream access to reduce the up-front cost for ISPs that expect to serve 
only a small portion of customers in such an IX, but that is pure 
speculation... The real idea is to keep those things that will result in a 
natural monopoly to form in communal hands (that already manage other such 
monopoly infrastructure anyway) and then try to use the fact that there is no 
local 800lb Gorilla ISP owning most lines to try to create a larger pool of 
competing ISPs to light up the fiber infrastructure... That is I am fine with a 
market solution, if we can assure the market to be big enough to actually 
deliver on its promises.


> LUS is similar if this article is to be believed. 
> https://thecurrentla.com/2023/column-lus-fiber-has-lost-its-edge/

[SM] The article shows that comparing offers is hard... the offers 
differ considerably from what alternate ISPs sell (e.g. LUS offers symmetric 
capacity for down- and upload), and the numbers compared seem to be the 
advertised prices, which IIRC in the US are considerably lower than what one 
actually ends up paying month after month due to additional fees and the 
like... (In Germany, prices for end customers are typically "all inclusive"; 
the amount of VAT/tax is singled out on the receipt, but the number we operate 
on is typically the final price. Then again, we have almost no local taxes 
that could apply.)


> The LUS NN site says there is no congestion on their fiber (GPON) so they 
> don't need AQM or other congestion mgmt mechanisms which I find suspect. 
> https://www.lusfiber.com/net-neutrality

[SM] Actually intriguing; if I lived in their area I would try them 
out, then I could report on the details here :) 
Browsing their documentation, I am not a big fan of their volume limits though; 
I consider these to be absurd measures of control (absurd in that they are 
too loosely coupled with the relevant measure of the actual cost).

> This may demonstrate that technology & new requirements are moving too 
> quickly for municipal approaches.

[SM] That might well be true. I no longer have any insight into how this 
affects commercial ISPs in the US either (I only ever tried two anyway, Sonic 
and Charter).

> 
> Bob
>> Hi Sebastian,
>> The U.S. of late isn't very good with regulatory that motivates
>> investment into essential comm infrastructure. It seems to go the
>> other way, regulatory triggers under investment, a tragedy of the
>> commons.
>> The RBOCs eventually did overbuild. They used wireless and went to
>> contract carriage, and special access rate regulation has been
>> removed. The cable cos did HFC and have always been contract carriage.
>> And they are upgrading today.
>> The tech companies providing content & services are doing fine too and
>> have enough power to work things out with the ISPs directly.
>> The underserved areas do need support. The BEAD monies may help. I
>> think these areas shouldn't be relegated to DSL.
>> Bob
>> On Oct 8, 2023, at 2:38 AM, Sebastian Moeller  wrote:
>>> Hi Bob,
>>> On 8 October 2023 00:13:07 CEST, rjmcmahon via Nnagain
>>>  wrote:
 Everybody abandoned my local loop. Twisted pair from multiple
 decades ago into antiquated, windowless COs with punch blocks,
 with no space nor latency advantage for colocated content &
 compute, seems to have killed it off.
>>> [SM] Indeed, throughput for DSL is inversely proportional to loop
>>> length, so providing 'acceptable' capacity requires sufficiently
>>> short wire runs from DSLAM to CPE, and that in turn means moving
>>> DSLAMs closer to the end users... which in a densely 

Re: [NNagain] The non-death of DSL

2023-10-08 Thread Sebastian Moeller via Nnagain
Hi Bob,

thanks for the interesting discussion, I am learning a lot! (I am unsure 
whether the following is too direct.)


> On Oct 8, 2023, at 18:37, Robert McMahon  wrote:
> 
> Hi Sebastian,
> 
> The U.S. of late isn't very good with regulatory that motivates investment 
> into essential comm infrastructure. It seems to go the other way, regulatory 
> triggers under investment, a tragedy of the commons.

[SM] My personal take on "tragedy of the commons" is that it is an 
unfortunate framing that muddies the waters. What "tragedy of the commons" 
boils down to is insufficient, or insufficiently enforced, regulation. The 
tragic part is that we theoretically already know how to avoid that...

> 
> The RBOCs eventually did overbuild. They used wireless and went to contract 
> carriage, and special access rate regulation has been removed.

[SM] Clearly sub-optimal regulation at play here, which left obvious 
lucrative alternate pathways outside of the regulated component... the solution 
clearly would have been to put wireless under regulation as well (either 
immediately, or as a pre-declared response to insufficient fixed-wire access 
plant maintenance and build-out). Then again, easy to say now...

> The cable cos did HFC and have always been contract carriage.

[SM] At least in Germany without good justification, once an access 
network is large enough to stymie growth of competitors by sheer size it needs 
to be put under regulations (assuming we actually desire competition in the 
internet access market*). Letting such players escape regulation is doubly 
problematic:
a) it results in anti-competitive market consolidation in the hands of those 
players.
b) it puts the other (incumbent) players subject to regulatory action at a 
clear disadvantage.

*) IMHO we will never get meaningful infrastructure competition in the access 
network though, too few players to land us anyway outside of monopoly/oligopoly 
regime...

> And they are upgrading today.
> 
> The tech companies providing content & services are doing fine too and have 
> enough power to work things out with the ISPs directly. 

[SM] Yes and no; few ISPs, if any, are willing to strong-arm 
Google/Facebook/Apple/..., but smaller players do fall prey to sufficiently 
large ISPs playing games to sell access to their eye-balls. See e.g. the 
carefully and competently managed under-peering Deutsche Telekom (DT) does with 
the other T1 ISPs to "encourage" all content providers to also buy direct 
access to Deutsche Telekom: technically this is billed as "transit", but it is 
priced so far above alternative transit that few content providers will use 
this nominal transit to reach anything but Telekom eye-balls. But I digress. 
DT did not invent that technique, however, but learned it from AT&T and 
Verizon*.


*) Only a few ISPs can really pull this off, as you need to be essentially 
transit-free yourself; otherwise your own transit provider will allow others to 
reach you over typically uncongested links. But as Swisscom and Deutsche 
Telekom demonstrated in the past, if you then collude with your transit 
provider you might still be able to play such games. Side note: in Germany DT 
is forced by law to allow resellers on its copper plant, so end customers 
unhappy with DT's peering policy can actually change ISP, and some do, but not 
enough to hinder DT from trying this approach.
In addition, DT together with other European ex-monopoly telecoms lobbies the 
EU commission hard to force big tech to pay for access network build-out in 
Germany... Now, I do have sympathies for appropriately taxing big tech in the 
countries where they generate revenue, but not to line the coffers of telecoms 
for a service their end customers already paid for.


> 
> The underserved areas do need support.

[SM] I fully agree! We should give all regions and access links the 
same equitable starting point to participate in the digital society.


> The BEAD monies may help. I think these areas shouldn't be relegated to DSL.

[SM] My take here is that FTTH is inevitable as the next step, sooner or 
later. For today's needs DSL would do just fine... except that, in rural areas, 
moving outdoor DSLAMs close enough to the customers to allow acceptable access 
capacity is likely almost as expensive (if not more expensive, due to the 
active DSLAM tech) as not stopping the fiber at the potential outdoor-DSLAM 
location and instead pulling it all the way to the end customers.
However, dark fibers in the ground are only half the problem; we should still 
allow for meaningful competition over these fibers in offering internet access 
services, as one thing we know about the free market is that it works better 
the more different players we have on the supply and demand side. (For internet 
access the demand side is not the problem, but the supply side is where we need 
to take steps to get past what Rosenworcel described as only 20% of US 
households have 

Re: [NNagain] The non-death of DSL

2023-10-08 Thread Sebastian Moeller via Nnagain
Hi Dave,

On 8 October 2023 02:07:50 CEST, Dave Taht via Nnagain 
 wrote:
>I had found Henning Schulzrinne's projections as to the death of POTS

[SM] One argument was that POTS switches were getting hard to come by, but I 
find that hard to believe as a general statement; "hard to come by at prices 
competitive with IP gear" might be closer to reality. And clearly, the more 
ISPs switched to VoIP, the smaller the market for POTS gear became, and with it 
the incentive to develop new generations of switches...


>very compelling when he presented at ietf 86 back in 2013. I cannot
>find the video, but there are all sorts of great charts and data here
>worth reflecting about and updating.
>
>https://docs.google.com/presentation/d/1TG0f18_ySAb4rJtC2SGeYPoY2oMdh0NS/edit?usp=sharing=107942175615993706558=true=true
>
>Unfortunately he presently has a gig with the NTIA and probably cannot
>participate here in the current contexts (although I like to think all
>that we will end up discussing will impact multiple agencies, NIST,
>and FEMA, for example)
>
>Still looking for better DSL data

[SM] I found a site claiming ~80% coverage with DSL in the US, but no 
information about actual usage or the distribution into capacity tiers or 
technologies, making that reference not even worth posting.


>
>On Sat, Oct 7, 2023 at 2:22 PM Dave Taht  wrote:
>>
>> I have a lot to unpack from this:
>>
>> https://docs.fcc.gov/public/attachments/DOC-397257A1.pdf
>>
>> the first two on my mind from 2005 are: "FCC adopted its first open
>> internet policy" and "Competitiveness"  As best as I recall, (and
>> please correct me), this led essentially to the departure of all the
>> 3rd party DSL providers from the field. I had found something
>> referencing this interpretation that I cannot find right now, but I do
>> clearly remember all the DSL services you could buy from in the early
>> 00s, and how few you can  buy from now. Obviously there are many other
>> possible root causes.
>>
>> DSL continued to get better and evolve, but it definately suffers from
>> many reports of degraded copper quality, but does an estimate exist
>> for how much working DSL is left?
>>
>> Q0) How much DSL is in the EU?
>> Q1) How much DSL is left in the USA?
>> Q2) What form is it? (VDSL, etc?)
>>
>> Did competition in DSL vanish because of or not of an FCC related order?
>>
>> --
>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>> Dave Täht CSO, LibreQos
>
>
>
>-- 
>Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>Dave Täht CSO, LibreQos
>___
>Nnagain mailing list
>Nnagain@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/nnagain

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [NNagain] The non-death of DSL

2023-10-08 Thread Sebastian Moeller via Nnagain
Hi Bob,

On 8 October 2023 00:13:07 CEST, rjmcmahon via Nnagain 
 wrote:
>Everybody abandoned my local loop. Twisted pair from multiple decades ago into 
>antiquated, windowless COs with punch blocks, with no space nor latency 
>advantage for colocated content & compute, seems to have killed it off. 

[SM] Indeed, throughput for DSL is inversely proportional to loop length, so 
providing 'acceptable' capacity requires sufficiently short wire runs from 
DSLAM to CPE, and that in turn means moving DSLAMs closer to the end users... 
which works well in a densely populated area, but becomes costly fast in a less 
densely populated one. And doing so will only make sense if you get enough 
customers on such an 'outdoor DSLAM', so it might work for the first ISP to 
build out, but becomes prohibitively unattractive for other ISPs later. 
However, terminating the loops in the field clears up lots of space in the 
COs... not that anybody over here moved much compute into these (there exist 
too many COs to make that an attractive proposition, in spite of all the hype 
about moving compute to the edge). As is, a few well-connected data centers for 
compute seem to work well enough...
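The inverse relationship between loop length and achievable DSL rate can be illustrated with a toy interpolation over rough, order-of-magnitude figures (the distance/rate pairs below are illustrative assumptions, not vendor or measured data):

```python
# Illustrative (NOT measured) downstream rate estimates vs. copper loop
# length, to show why short loops are essential for DSL capacity.
ILLUSTRATIVE_RATES_MBPS = [  # (loop length in m, rough VDSL2-class rate)
    (100, 250.0),
    (500, 100.0),
    (1000, 50.0),
    (2000, 20.0),
    (4000, 8.0),
]

def estimated_rate(loop_m: float) -> float:
    """Linear interpolation between the illustrative data points."""
    pts = ILLUSTRATIVE_RATES_MBPS
    if loop_m <= pts[0][0]:
        return pts[0][1]
    for (d0, r0), (d1, r1) in zip(pts, pts[1:]):
        if loop_m <= d1:
            frac = (loop_m - d0) / (d1 - d0)
            return r0 + frac * (r1 - r0)
    return pts[-1][1]

# Rate falls quickly with distance, which is why moving the DSLAM
# closer to the customer is the only way to raise DSL capacity.
for metres in (500, 1000, 2000):
    print(metres, estimated_rate(metres))
```

The exact numbers matter less than the shape: past a kilometre or two of copper, no amount of modem cleverness recovers the capacity lost to attenuation and crosstalk.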

> I suspect in some towns one can buy out the local loop copper with just a 
> promise of maintenance. 

[SM] A clear sign of regulatory failure to me, maintenance of the copper plant 
inherited from Bell should never have been left to the ISPs to decide about... 


> The whole CLEC open the loop to competitive access seems to have failed per 
> costs, antiquated technology, limited colocation, an outdated waveguide 
> (otherwise things like CDDI would have won over Cat 5), and market reasons. 
> The early ISPs didn't collocate, they bought T1s and E1s and connected the 
> TDM to statistical multiplexing - no major investment there either.
>
>The RBOCs, SBC (now AT&T) and VZ went to contract carriage and wireless 
>largely because of the burdens of title II per regulators not being able to 
>create an investment into the OSPs. The 2000 blow up was kinda real.

[SM] Again, I see no fault in Title II here, but in letting ISPs off the hook 
on maintaining their copper plant or replacing it with FTTH...



>
>She starts out by complaining about trying to place her WiFi in the right 
>place. That's like trying to share a flashlight. She has access to the FCC 
>technology group full of capable engineers.  They should have told her to 
>install some structured wire, place more APs, set the carrier and turn down 
>the power. 

[SM] I rather read this as an attempt to build a rapport with the audience 
over a shared experience, and less as a problem report ;)


> My wife works in the garden now using the garden AP SSID with no issues. My 
> daughter got her own carrier too per her Dad dedicating a front end module 
> for her distance learning needs. I think her story to justify title II 
> regulation is a bit made up.

[SM] Hmm, while the covid19 lockdown wasn't the strongest example, I agree. 
Still, I see no good argument for keeping essential infrastructure like 
internet access in private hands without appropriate oversight. Especially 
given the numbers for broadband choice for customers, the market is clearly 
not going to solve the issues at hand.


>
>Also, communications have been essential back before the rural free delivery 
>of mail in 1896. Nothing new here other than hyperbole to justify a 5 member 
>commission acting as the single federal regulator over 140M households and 33M 
>businesses, almost none of which have any idea about the complexities of the 
>internet.

[SM] But the access network is quite different from the internet's core, so 
not being experts on the core seems acceptable, no? And even 5 members is 
clearly superior to no oversight at all?

> I'm not buying it and don't want to hand the keys to the FCC who couldn't 
> protect journalism nor privacy. Maybe start there, looking at what they 
> didn't do versus blaming contract carriage for a distraction?

[SM] I cannot speak to the FCC as a regulatory agency, but over here IMHO the 
national regulatory agency does a decent job arbitrating between the interests 
of both sides.


>
>https://about.usps.com/who/profile/history/rural-free-delivery.htm#:~:text=On%20October%201%2C%201896%2C%20rural,were%20operating%20in%2029%20states.
>
>Bob
>> My understanding, though I am not 100% certain, is that the baby bells
>> lobbied to have the CLEC equal access provisions revoked/gutted.
>> Before this, the telephone companies were required to provide access
>> to the "last mile" of the copper lines and the switches at wholesale
>> costs. Once the equal access provisions were removed, the telephone
>> companies started charging the small phone and DSL providers close to
>> the retail price for access. The CLEC DSL providers could not stay in
>> business when they charged a customer $35 / month for Internet service
>> while the telephone company charged the DSL ISP $35 / month for
>> access.
>> 
>> 
>> 
>> 
>>   On Sat, 

Re: [NNagain] The non-death of DSL

2023-10-08 Thread Sebastian Moeller via Nnagain
Hi Dave,


> On Oct 7, 2023, at 23:22, Dave Taht via Nnagain 
>  wrote:
> 
> I have a lot to unpack from this:
> 
> https://docs.fcc.gov/public/attachments/DOC-397257A1.pdf

Thanks for the link, I think this contains solid arguments for the FCC's 
current position. I for one am convinced that internet access is a game served 
well by having referees with "teeth".


> the first two on my mind from 2005 are: "FCC adopted its first open
> internet policy" and "Competitiveness"  As best as I recall, (and
> please correct me), this led essentially to the departure of all the
> 3rd party DSL providers from the field. I had found something
> referencing this interpretation that I cannot find right now, but I do
> clearly remember all the DSL services you could buy from in the early
> 00s, and how few you can  buy from now. Obviously there are many other
> possible root causes.

Since in other markets the introduction of NN/open-internet regulations did 
not kill local loop unbundling, this is IMHO not a strict consequence of such 
regulations, but might be related to the exact process and scope of those 
regulations.


> 
> DSL continued to get better and evolve,

No shit; with sufficiently short links G.fast offers capacity in the gigabit 
range, and up to 500 m VDSL2 can deliver 100/40 Mbps...

> but it definately suffers from
> many reports of degraded copper quality,

For sure; once the cables degrade, interference increases, achievable 
capacity drops quickly, and stability takes a hit.


> but does an estimate exist
> for how much working DSL is left?
> 
> Q0) How much DSL is in the EU?

This differs wildly by country, but here are some numbers for 2021 (which will 
hence likely overestimate the number of DSL links somewhat):
https://de.statista.com/statistik/daten/studie/303187/umfrage/anteil-der-dsl-anschluesse-an-allen-breitbandanschluessen-in-laendern-der-eu/


> Q1) How much DSL is left in the USA?

As option or as actually booked contract?


> Q2) What form is it? (VDSL, etc?)

I have no authoritative answer to Q0-Q2, but I can answer a Q4 (amount of 
active access links per technology in Germany in 2022) that you did not ask; 
see 
https://www.brekoverband.de/site/assets/files/37980/breko_marktanalyse_2023-1.pdf
slide 11:
FTTH/B:  3,400,000  : mostly PtMP GPON, a bit PtP AON ethernet, and some 
VDSL2 and G.fast (for in house distribution for some FTTB links)
HFC: 8,700,000  : mix of DOCSIS 3.0 and DOCSIS 3.1, speeds up to 1000/50
VDSL:   19,500,000  : ITU G.993.5, VDSL2 with Vectoring and ITU G.998.4 
G.INP, profile 17a (up to 100/40 Mbps) or 35b (up to 250/40 Mbps)
ADSL:5,200,000  : ITU G.992.5, stuck on ATM/AAL5, gross speeds up to 
24/3.5, marketed speeds up to 16/3.5 Mbps

So for 2022 100 * (19.5+5.2)/(3.4+8.7+19.5+5.2) = 67.12% DSL (of around 37 
million access links).
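The share calculation above can be reproduced directly (the per-technology counts, in millions, are the breko_marktanalyse_2023 figures as quoted):

```python
# DSL share of active German access links in 2022, using the
# breko_marktanalyse_2023 figures quoted above (in millions of links).
links = {
    "FTTH/B": 3.4,
    "HFC": 8.7,
    "VDSL": 19.5,
    "ADSL": 5.2,
}

total = sum(links.values())
dsl_share = 100 * (links["VDSL"] + links["ADSL"]) / total
print(f"{dsl_share:.2f}% DSL of {total:.1f} million access links")
# → 67.12% DSL of 36.8 million access links
```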



Germany is lagging behind in the FTTH roll-out compared to most other EU 
countries (see 
https://www.ispreview.co.uk/index.php/2023/04/2023-full-fibre-country-ranking-sees-uk-coverage-accelerate-vs-eu39.html),
but except for the 5.2 million links still on ADSL, most users have access to 
adequate access rates to participate in the digital society (~78% of access 
links have booked rates >= 30 Mbps, see breko_marktanalyse_2023).
I would guess that Germany is only partially representative of Europe as a 
whole, as I see a clear interaction between the incumbent's tech replacement 
cycle and the state of FTTH deployment. IMHO Deutsche Telekom started its last 
modernization a bit too early to jump on the FTTH train and hence opted for 
upgrading ADSL/non-vectoring VDSL2 to vectoring VDSL2 to allow speeds of 100 
Mbps to counter the DOCSIS threat (sure, DOCSIS was always faster; the goal 
was IMHO not to fall behind too much).

I also note that the incumbent in Germany is forced by regulation into virtual 
local loop unbundling, which nowadays in practice typically means competitors 
buy bit-stream access (BSA)#; both the BSA and the incumbent's DSL prices are 
ex-ante regulated, that is, they need regulatory acceptance before coming into 
effect. The regulator aims at setting these prices such that the wholesale 
price reflects the estimated cost of building/maintaining the copper 
infrastructure, and the incumbent's prices leave room for competitors to 
undercut the incumbent while still making a surplus. (The ex-monopolist 
incumbent is still the single largest ISP; neither the fact that the resellers 
are generally cheaper, nor the fact that DOCSIS is generally both cheaper and 
faster, has managed to change that*.) I personally think that this regulation 
works pretty well; my only beef with it is that the regulatory agency seems 
unwilling to accept that the largest DOCSIS ISP (Vodafone) is also too large 
and should be subjected to a similar regulatory regime, but I digress.

All that said, Germany is on track to replacing the copper access network with 
FTTH in the next decade (it has proven to be one