RE: constant FEC errors juniper mpc10e 400g

2024-04-22 Thread Vasilenko Eduard via NANOG
Assume that some carrier has 10k FBB subscribers in a particular municipality 
(without any hope of considerably increasing this number).
2Mbps is the current average per household in the busy hour, pretty uniform 
worldwide.
You could multiply it by 8/7 if you want to add wireless traffic -> not much would change.
2*2*10GE (2*10GE on the ring in each direction) is twice what is needed to carry 10k subscribers.
The optical ring may have fewer than 20 municipalities on it - this is very common.
Hence, the upgrade of old extremely cheap 10GE DWDM systems (for 40 lambdas) 
makes sense for some carriers.
It depends on the population density and the carrier market share.
10GE on the WAN side will not be dead in the next 5 years, because the 2Mbps per household will not grow very fast in the future - this logistic curve is close to its plateau.
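A back-of-the-envelope check of the arithmetic above (only a sketch, in Python: the 2Mbps busy-hour average and the 8/7 wireless uplift are the figures from this post, the rest is plain arithmetic):

subscribers = 10_000
busy_hour_mbps = 2              # average per household in the busy hour
wireless_uplift = 8 / 7         # optional uplift if wireless is added

fbb_demand_gbps = subscribers * busy_hour_mbps / 1000       # 20 Gbps
total_demand_gbps = fbb_demand_gbps * wireless_uplift       # ~22.9 Gbps
ring_capacity_gbps = 2 * 2 * 10                             # 2*10GE in each direction = 40 Gbps

print(f"FBB-only demand: {fbb_demand_gbps:.1f} Gbps ({ring_capacity_gbps / fbb_demand_gbps:.1f}x headroom)")
print(f"With wireless  : {total_demand_gbps:.1f} Gbps ({ring_capacity_gbps / total_demand_gbps:.1f}x headroom)")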
PS: It is probably not the case for Africa where new subscribers are connected 
to the Internet at a fast rate.
Ed/
-Original Message-
From: NANOG  On Behalf Of 
Tarko Tikan
Sent: Saturday, April 20, 2024 19:19
To: nanog@nanog.org
Subject: Re: constant FEC errors juniper mpc10e 400g

hey,

> That said, I don't expect any subsea cables getting built in the next 
> 3 years and later will have 10G as a product on the SLTE itself... it 
> wouldn't be worth the spectrum.

10G wavelengths for new builds died about 10 years ago when coherent 100G 
became available, submarine or not. Putting 10G into the same system is not really 
feasible at all.

--
tarko



RE: One Can't Have It Both Ways Re: Streamline the CG-NAT Re: EzIP Re: IPv4 address block

2024-01-14 Thread Vasilenko Eduard via NANOG
+1

From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Brett O'Hara
Sent: Saturday, January 13, 2024 1:04 PM
To: Forrest Christian (List Account) 
Cc: Chen, Abraham Y. ; NANOG 
Subject: Re: One Can't Have It Both Ways Re: Streamline the CG-NAT Re: EzIP Re: 
IPv4 address block

Ok you've triggered me on your point 2.  I'll address the elephant in the room.

IPv4 is never ever going away.

Right now consumer services (mobile, wireless, landline - a wide generalization) are mostly IPv6 capable.  Most consumer applications are IPv6 capable: Google, Facebook, etc.  There is light at the very end of the tunnel that suggests that one day we won't have to deploy CGNAT444 for our consumers to get to content; we may only have to do NAT64 for them to get to the remaining IPv4 Internet.  We're still working hard on removing our reliance on genuine IPv4 ranges to satisfy our customer needs.  It's still a long way off, but it's coming.

Here's the current problem.  Enterprise doesn't need IPv6 or want IPv6.  You might be able to get away with giving CGNAT to your consumers, but your enterprise customers will not accept this.  How will they terminate their remote users?  How will they do B2B without inbound NAT?  Yes, there are solutions, but if you don't need to, why bother?  They pay good money, so why can't they have real IPv4?  All their internal networks are IPv4 RFC 1918.  They are happy with NAT.  Their application service providers are IPv4 only.  Looking at the services I access for work - SAP, ServiceNow, Office 365, SharePoint, Okta, Dayforce, Xero, and I'm sure many more - none can be accessed on IPv6 alone.  Their internal network lifecycle is 10+ years.  They have no interest in trying new things or making new technology work without a solid financial reason, and there is none for them in implementing IPv6.  And guess where all the IP addresses we're getting back from our consumers are going?  Straight to our good-margin enterprise customers and their application service providers.  Consumer CGNAT isn't solving problems; it's creating more.

The end of IPv4 isn't nigh, it's just privileged only.

PS: When you solve that problem in 50 years' time, I'll be one of those old fogeys keeping an IPv4 service alive as an example of "the old Internet" for those young whippersnappers to be amazed by.

Regards,
   Brett



On Sat, Jan 13, 2024 at 7:31 PM Forrest Christian (List Account) 
<li...@packetflux.com> wrote:
A couple of points:

1) There is less work needed to support IPv6 than your proposed solution.  I'm not talking about 230/4.  I'm talking about your EzIP overlay.

2) Assume that Google decided that they would no longer support IPv4 for any of 
their services at a specific date a couple of years in the future.  That is,  
you either needed an IPv6 address or you couldn't reach Google, youtube, Gmail 
and the rest of the public services.  I bet that in this scenario every eyeball 
provider in the country all of a sudden would be extremely motivated to deploy 
IPv6, even if the IPv4 providers end up natting their IPv4 customers to IPv6.  
I really expect something like this to be the next part of the end game for 
IPv4.

Or stated differently: at some point someone with enough market power is going 
to basically say "enough is enough" and make the decision for the rest of us 
that IPv4 is effectively done on the public internet.   The large tech 
companies all have a history of sunsetting things when it becomes a bigger 
problem to support than it's worth.  Try getting a modern browser that works on 32-bit Windows.  Same with encryption protocols, Java in the browser, Shockwave and Flash, and on and on.

I see no reason why IPv4 should be any different.

On Fri, Jan 12, 2024, 3:42 PM Abraham Y. Chen 
<ayc...@avinta.com> wrote:

Hi, Forrest:

0)    You put out more than one topic, all at once. Allow me to address each briefly.

1)   "  The existence of that CG-NAT box is a thorn in every provider's side 
and every provider that has one wants to make it go away as quickly as 
possible.   ":

The feeling and desire are undeniable facts. However, the existing configuration evolved from various considerations over a long time. Tremendous inertia has accumulated around it. There is no magic bullet to get rid of it quickly. We must study it carefully and evolve it incrementally. Otherwise, an even bigger headache or disaster will happen.

2)"  The quickest and most straightforward way to eliminate the need for 
any CG-NAT is to move to a bigger address space.  ":

The obvious answer was IPv6. However, its performance after nearly two decades of deployment has not been convincing. EzIP is an alternative, requiring hardly any development, that addresses this need immediately.

3)   "  Until the cost (or pain) to stay on IPv4 is greater than the cost to 
move,  we're going to see continued resistance to doing so.   

RE: Stealthy Overlay Network Re: 202401100645.AYC Re: IPv4 address block

2024-01-12 Thread Vasilenko Eduard via NANOG
The public side of the NAT would need a huge public IPv4 pool.
Replacing the private pool with something bigger is a very corner case.
Mobile carriers do not identify subscribers by IP; they can easily tolerate many overlapping 10/8 pools even on one Mobile Core.
A huge private pool like 240/4 is needed only for cloud providers that have many micro-services.
Nothing to dispute here. The people that need it are already well aware of it.
Eduard
From: Abraham Y. Chen [mailto:ayc...@avinta.com]
Sent: Friday, January 12, 2024 5:39 AM
To: Vasilenko Eduard 
Cc: nanog@nanog.org; KARIM MEKKAOUI ; Chen, Abraham Y. 

Subject: Stealthy Overlay Network Re: 202401100645.AYC Re: IPv4 address block
Importance: High

Hi, Vasilenko:

1)... These “multi-national conglo” has enough influence on the IETF to not 
permit it.":

As classified by Vint Cerf, 240/4-enabled EzIP is an overlay network that may be deployed stealthily (just like the events reported by RIPE Labs). So, EzIP deployment does not need permission from the IETF.

Regards,


Abe (2024-01-11 21:38 EST)




On 2024-01-11 01:17, Vasilenko Eduard wrote:
> It has been known that multi-national conglomerates have been using it 
> without announcement.
This is an assurance that 240/4 would never be permitted on the public Internet. 
These “multi-national conglos” have enough influence on the IETF to not permit it.
Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Abraham Y. Chen
Sent: Wednesday, January 10, 2024 3:35 PM
To: KARIM MEKKAOUI 
Cc: nanog@nanog.org; Chen, Abraham Y. 

Subject: 202401100645.AYC Re: IPv4 address block
Importance: High

Hi, Karim:

1)    If you have control of your own equipment (I presume that your business includes being an IAP - Internet Access Provider - since you are asking to buy IPv4 blocks), you can get a large block of reserved IPv4 addresses for free by disabling the program code in your current facility that has been disabling the use of the 240/4 netblock. Please have a look at the whitepaper below. Utilized according to the outlined disciplines, this is a practically unlimited resource. It has been known that multi-national conglomerates have been using it without announcement. So, you can do so stealthily, according to the proposed mechanism which establishes uniform practices, just as well.

https://www.avinta.com/phoenix-1/home/RevampTheInternet.pdf
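As a small illustration of the kind of “program code ... disabling the use of 240/4” mentioned above (only a sketch of how common software classifies the block today, not the mechanism from the whitepaper), Python's standard ipaddress module treats the whole netblock as reserved and non-global:

import ipaddress

# 240.0.0.0/4 is the IANA "Reserved for Future Use" block; stock libraries
# and stacks typically refuse to treat it as ordinary unicast space.
addr = ipaddress.ip_address("240.0.0.1")
block = ipaddress.ip_network("240.0.0.0/4")

print(addr in block)       # True
print(addr.is_reserved)    # True  - flagged as reserved
print(addr.is_global)      # False - not considered globally reachable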

2)    This being an unorthodox solution, if not a controversial one, please follow up with me offline - unless other NANOGers express their interest.


Regards,


Abe (2024-01-10 07:34 EST)



On 2024-01-07 22:46, KARIM MEKKAOUI wrote:
Hi Nanog Community

Any idea please on the best way to buy IPv4 blocks, and what is the price?

Thank you

KARIM










RE: 202401100645.AYC Re: IPv4 address block

2024-01-10 Thread Vasilenko Eduard via NANOG
> It has been known that multi-national conglomerates have been using it 
> without announcement.
This is an assurance that 240/4 would never be permitted on the public Internet. 
These “multi-national conglos” have enough influence on the IETF to not permit it.
Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Abraham Y. Chen
Sent: Wednesday, January 10, 2024 3:35 PM
To: KARIM MEKKAOUI 
Cc: nanog@nanog.org; Chen, Abraham Y. 
Subject: 202401100645.AYC Re: IPv4 address block
Importance: High

Hi, Karim:

1)    If you have control of your own equipment (I presume that your business includes being an IAP - Internet Access Provider - since you are asking to buy IPv4 blocks), you can get a large block of reserved IPv4 addresses for free by disabling the program code in your current facility that has been disabling the use of the 240/4 netblock. Please have a look at the whitepaper below. Utilized according to the outlined disciplines, this is a practically unlimited resource. It has been known that multi-national conglomerates have been using it without announcement. So, you can do so stealthily, according to the proposed mechanism which establishes uniform practices, just as well.

https://www.avinta.com/phoenix-1/home/RevampTheInternet.pdf

2)    This being an unorthodox solution, if not a controversial one, please follow up with me offline - unless other NANOGers express their interest.


Regards,


Abe (2024-01-10 07:34 EST)



On 2024-01-07 22:46, KARIM MEKKAOUI wrote:
Hi Nanog Community

Any idea please on the best way to buy IPv4 blocks, and what is the price?

Thank you

KARIM








RE: maximum ipv4 bgp prefix length of /24 ?

2023-09-29 Thread Vasilenko Eduard via NANOG
Well, it depends.
The question below was evidently related to business.
IPv6 does not yet have a normal way of doing multihoming with PA prefixes.
If the IETF (and some OTTs) win in blocking NAT66,
then the /48 proposition is the proposition for PA (to support multihoming).
Unfortunately, that means a global routing table of at least 10M routes, as Brian Carpenter has shown.
Reminder: the IPv6 scale on all routers is 2x smaller (and if people used DHCP and prefixes longer than /64, the scale would drop another 2x).
Hence, the /48 proposition may become 20x worse for scale than what was proposed initially in this thread.
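A rough sketch of the scale arithmetic implied above (Python; the 10M route figure and the 2x IPv6 FIB cost are from this post, while the ~1M size of today's IPv4 table is a round-number assumption added only for comparison):

ipv6_pa_routes = 10_000_000      # /48 PA multihoming estimate cited above
ipv6_fib_cost_factor = 2         # an IPv6 entry costs roughly 2x an IPv4 entry
ipv4_table_today = 1_000_000     # assumed round number for the current IPv4 table

ipv4_equivalent_load = ipv6_pa_routes * ipv6_fib_cost_factor
print(f"~{ipv4_equivalent_load / ipv4_table_today:.0f}x today's IPv4 table")   # ~20x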
Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Owen DeLong via NANOG
Sent: Friday, September 29, 2023 7:11 AM
To: VOLKAN SALİH 
Cc: nanog@nanog.org
Subject: Re: maximum ipv4 bgp prefix length of /24 ?

Wouldn’t /48s be a better solution to this need?

Owen



On Sep 28, 2023, at 14:25, VOLKAN SALİH 
<volkan.salih...@gmail.com> wrote:


hello,

I believe ISPs should also allow IPv4 prefixes with lengths between /25 and /27, instead of limiting the maximum length to /24.

I also believe that RIRs and LIRs should allocate /27s, which have 32 IPv4 addresses. Considering that the IPv4 world is now mostly NAT'ed, 32 IPv4 addresses are sufficient for most small and medium-sized organizations and also for home-office workers like YouTubers, professional gamers and webmasters!

It is because BGP research and experiment networks cannot get a /24 due to high IPv4 prices, but they have to get an IPv4 prefix to learn BGP in the IPv4 world.

What do you think about this?

What could be done here?

Is it unacceptable, considering that most big networks doing full-table routing also use multi-core routers with lots of RAM? Those would probably handle /27s, and since small networks mostly use default routing, wouldn't it be reasonable to allow /25-/27?

Thanks for reading, regards..



RE: what is acceptible jitter for voip and videoconferencing?

2023-09-22 Thread Vasilenko Eduard via NANOG
Hi Dave,
You did not say: is it interactive? Because we could use big buffers and convert jitter into latency (some STBs have sub-second buffers).
Then jitter would effectively become zero (more precisely: not a problem), and we would deal only with the latency consequences.
Hence, your question is not about jitter; it is about latency.

By all 5 (or 6?) senses, the human is a 25ms-resolution machine (limited by the animal part of our brain: the limbic system). Anything faster is “real-time”. Even echo cancellation is not needed - we hear the echo but cannot split the signals.
A dog has 2x better resolution, a cat 3x better. They probably hate cheap monitor pictures (PAL/SECAM had 50Hz, NTSC had 60Hz).
The 25ms is for everything, round trip. 8ms is spent just on visualization on the best screens (120Hz).
The typical budget left for the networking part (speed of light in the fiber) is about 5ms one way (1000km - or do you prefer miles?).
Maybe worse, depending on rendering in the GPU (3ms?), processing in the app (3ms?), the sensor capturing the initial signal (1ms?), and so on.
The worst problem is that the jitter buffer is subtracted from the same 25ms budget.☹
Hence, it is easy for the jitter buffer to consume the 10ms that we typically have for networking and end up in a situation where we are left with just 1ms, which pushes us to install MEC (distributed servers in every municipality).
Accounting for the jitter buffer, it is pretty hard to be “real-time” for humans.
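A rough sketch of the budget arithmetic above (Python; all figures are the illustrative estimates from this post, and fiber propagation is taken as roughly 200 km per millisecond):

total_budget_ms = 25
overheads_ms = {"display (120Hz)": 8, "GPU rendering": 3, "app processing": 3, "sensor": 1}
jitter_buffer_ms = 9             # assumed jitter buffer size, for illustration only

net_rtt_ms = total_budget_ms - sum(overheads_ms.values())    # 10 ms RTT left for the network
net_rtt_with_jb_ms = net_rtt_ms - jitter_buffer_ms           # 1 ms RTT once the buffer eats its share

for label, rtt in (("no jitter buffer", net_rtt_ms), (f"{jitter_buffer_ms} ms jitter buffer", net_rtt_with_jb_ms)):
    print(f"{label}: {rtt} ms RTT -> ~{rtt / 2 * 200:.0f} km of fiber one way")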
Hint: “pacing” is the solution. The application should send packets at equal intervals. It is widely adopted by the OTTs.
By the way, pacing has many other positive effects on networking.
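A minimal sketch of pacing in the sense used above (Python): instead of bursting, the sender spaces the packets of a fixed-rate stream at equal intervals. The destination address, rate and packet size are hypothetical placeholders.

import socket, time

DEST = ("198.51.100.10", 5004)             # hypothetical receiver (documentation range)
PACKET_BYTES = 1200
RATE_BPS = 4_000_000                       # a 4 Mbps media stream
INTERVAL_S = PACKET_BYTES * 8 / RATE_BPS   # 2.4 ms between packets

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytes(PACKET_BYTES)
next_send = time.monotonic()
for _ in range(100):                       # send 100 equally spaced packets
    sock.sendto(payload, DEST)
    next_send += INTERVAL_S                # schedule against absolute time so errors don't accumulate
    time.sleep(max(0.0, next_send - time.monotonic()))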

The next level is about our reaction time (the ability to click). That is 150ms for some people, 250ms on average.
Hence, gaming is quite affected by 50ms one-way latency, because 2*50ms becomes comparable to 150ms - it affects the gaming experience. In addition to seeing the delay, we lose time - the enemy shoots us first.

The next level (for non-interactive applications) is limited only by the memory that you can devote to the jitter buffer.
Cinema would be fine even with a 5s jitter buffer - except for the zapping time, but that is a different story.

Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Dave Taht
Sent: Wednesday, September 20, 2023 3:12 AM
To: NANOG 
Subject: what is acceptible jitter for voip and videoconferencing?

Dear nanog-ers:

I go back many, many years as to baseline numbers for managing voip networks, 
including things like CISCO LLQ, diffserv, fqm prioritizing vlans, and running
voip networks entirely separately... I worked on codecs, such as oslec, and 
early sip stacks, but that was over 20 years ago.

The thing is, I have been unable to find much research (as yet) as to why my 
number exists. Over here I am taking a poll as to what number is most correct 
(10ms, 30ms, 100ms, 200ms),

https://www.linkedin.com/feed/update/urn:li:ugcPost:7110029608753713152/

but I am even more interested in finding cites to support various viewpoints, 
including mine, and learning how slas are met to deliver it.

--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos


RE: Routed optical networks

2023-05-11 Thread Vasilenko Eduard via NANOG
From: Etienne-Victor Depasquale [mailto:ed...@ieee.org]
Sent: Thursday, May 11, 2023 12:48 PM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>
Cc: Dave Taht <dave.t...@gmail.com>; Phil Bedard <bedard.p...@gmail.com>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

Eduard, academics cite the VNI (and the Sandvine Global reports).

Do you know of alternative sources that show traffic growth data you're more 
comfortable with?

Cheers,

Etienne

On Thu, May 11, 2023 at 9:34 AM Vasilenko Eduard 
<vasilenko.edu...@huawei.com> wrote:
But it is speculation, not a trend yet.
I remember 10y ago every presentation started from the claim that 100B of IoT 
would drive XXX traffic. It did not happen.
Now we see presentations that AI would be talking to AI that generates  
traffic.
Maybe some technology would push traffic next S-curve, maybe not. It is still 
speculation.

The traffic growth was stimulated (despite all VNIs) by 1) new subscribers, 2) 
video quality for subscribers. Nothing else yet.
It is almost finished for both trends. We are close to the plateau of these 
S-curves.
For some years (2013-2020) I was carefully looking at numbers for many 
countries: it was always possible to split CAGR for these 2 components. The 
video part was extremely consistent between countries. The subscriber part was 
100% proportional to subscriber CAGR.
Everything else up to now was “marketing” to say it mildly.

Reminder: nothing in nature could grow indefinitely. The limit always exists. 
It is only a question of when.

PS: Of course, marketing people could draw you any traffic growth – it depends 
just on the marketing budget.

Eduard
From: Dave Taht [mailto:dave.t...@gmail.com]
Sent: Tuesday, May 9, 2023 11:41 PM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>
Cc: Phil Bedard <bedard.p...@gmail.com>; Etienne-Victor Depasquale <ed...@ieee.org>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

Up until this moment I was feeling that my take on the decline of traffic 
growth was somewhat isolated, in that I have long felt that we are nearing the 
top of the S curve of the data we humans can create and consume. About the only 
source of future traffic growth I can think of comes from getting more humans 
online, and that is a mere another doubling.

On the other hand, predictions such as 640k should be enough for everyone did 
not pan out.

On the gripping hand, there has been an explosion of LLM stuff of late, with 
enormous models being widely distributed in just the past month:

https://lwn.net/Articles/930939/

Could the AIs takeoff lead to a resumption of traffic growth? I still don´t 
think so...


On Thu, May 4, 2023 at 10:59 PM Vasilenko Eduard via NANOG 
<nanog@nanog.org> wrote:
Disclaimer: Metaverse has not changed Metro traffic yet. Then …

I am puzzled when people talk about 400GE and Tbps in the Mero context.
For historical reasons, Metro is still about 2*2*10GE (one “2” for redundancy, 
another “2” for capacity) in the majority of cases worldwide.
How many BRASes serve more than 4/1.5=27k users in the busy hour?
It means that 50GE is the best interface now for the majority of cases. 
2*50GE=100Gbps is good room for growth.
Of course, exceptions could be. I know BRAS that handles 86k subscribers (do 
not recommend anybody to push the limits – it was so painful).

We have just 2 eyes and look at video content about 22h per week (on average). 
Our eyes do not permit us to see resolution better than particular for chosen 
distance (4k for typical TV, HD for smartphones, and so on). Color depth 10bits 
is enough for the majority, 12bits is sure enough for everybody. 120 frames/sec 
is enough for everybody. It would never change – it is our genetics.
Fortunately for Carriers, the traffic has a limit. You have probably seen that 
every year traffic growth % is decreasing. The Internet is stabilizing and 
approaching the plateau.
How much growth is still awaiting us? 1.5? 1.4? It needs separate research. The 
result would be tailored for whom would pay for the research.
IMHO: It is not mandatory that 100GE would become massive in the metro. (I know 
that 100GE is already massive in the DC CLOS)

Additionally, who would pay for this traffic growth? It also limits traffic at 
some point.
I hope it would happen after we would get our 22h/4k/12bit/120hz.

Now, you could argue that Metaverse would jump and multiply traffic by an 
additional 2x or 3x. Then 400GE may be needed.
Sorry, but it is speculation yet. It is not a trend like the current 
(declining) traffic growth.

Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On Behalf Of Phil Bedard
Sent: Thursday, May 4, 2023 8:32 PM
To: Etienne-Victor Depasquale <ed...@ieee.org>

RE: Routed optical networks

2023-05-11 Thread Vasilenko Eduard via NANOG
Hi Jared,
Could I draw a conclusion from your comments: "only the carrier itself understands its traffic - see the many examples in the text"?
I would very much agree with this.
Eduard
-Original Message-
From: Jared Mauch [mailto:ja...@puck.nether.net] 
Sent: Thursday, May 11, 2023 3:16 PM
To: Etienne-Victor Depasquale 
Cc: Vasilenko Eduard ; NANOG 
Subject: Re: Routed optical networks



> On May 11, 2023, at 7:45 AM, Etienne-Victor Depasquale via NANOG 
>  wrote:
> 
> To clarify the table I linked to in the previous email:
> 
> Cisco estimates IP traffic exchanged over the access network by both 
> businesses and consumers with:
> 
> • endpoints over managed networks, and
> • endpoints over unmanaged networks (“Internet traffic”).
> 
> Both the mobile access network and the fixed access network are considered. 
> 
> Cisco considers IP traffic over managed networks to be characterized by 
> passage through a single service provider. 
> Without explicitly referring to quality of service (QoS), the 
> implication is clearly that the traffic is controlled to meet the QoS 
> demanded by the service level agreement (SLA).
> 
> In contrast, “Internet traffic” crosses provider domains; typically, 
> this traffic is delivered on the basis of providers’ best effort.
> These two kinds of traffic complement one another and collectively are 
> referred to as total global IP traffic.


I think there’s a lot of problems here.  While places like my employer will 
periodically disclose our traffic numbers, and DDoS providers, mitigation 
platforms and otherwise will disclose the peaks they see, much of this data is 
a bit opaque, and tools like AI that do in-metro or cross-metro 
datacenter-datacenter remote DMA type activities, those all count differently.

We have seen a continued trend of the privatization of traffic and localization 
of that over time.  I’ve watched all the big carriers retreat from their global 
network reaches to be more of regionalized networks.  A decade ago you would 
have seen European national incumbents peering and with market in Asia, and the 
complete global networks continue to shrink.

Meanwhile you have a mix of the content and cloud providers continue to build 
their business-purpose networks expanding into markets that the uppercase 
Internet may not need to reach.

You can look at the proposals in the EU about fees, and I have dual thoughts on 
this which are MY OWN and don’t represent my employer or otherwise, but if you 
read this post from Petra Arts - 
https://blog.cloudflare.com/eu-network-usage-fees/ - it speaks around major 
interconnection points like Frankfurt, which are important but double as 
problematic.  The number of people that need to go to the near market (eg: 
Chicago, while I’m in Detroit area) for good connectivity is an issue, 
meanwhile there’s a robust need to keep traffic within the state of Michigan 
and a halfway decent ecosystem for that via Detroit IX - (disclaimer, I’m on 
the board).  There need to be some aggregation points, so not everyone needs to 
be in Detroit, but also not everyone needs to be in Frankfurt - and content 
localization needs to continue to happen, but is also very regionalized in 
popularity.

How to do this all and not have it all route via Chicago or Frankfurt is a 
challenge, but also not everyone will be in Berlin, Munich or these other 
markets.  This is where having a robust optical network capability (or 
backbone) can come into play, that you can deliver deeper from those hub 
points, but at the same time, I’ve been in meetings where companies have their 
own challenges accepting that content in those downstream locations as their 
network was also built to get to/from the major hub cities, or IP space wasn’t 
allocated in a way that can provide consistent routing results or behaviors.  
(This is where IPv6 can be super helpful, it gives the chance to possibly 
Greenfield, aka not screw it up - at least initially).

There’s huge volumes of IP traffic exchanged, but the largest volumes are being 
moved over private interconnects or a localized IX to those eyeball networks 
with the historical global backbones playing more of the long-distance carrier 
role, which is critical as you want a path to deliver those bits, without it 
following the ITU-style sender pays model, as the majority of IP traffic is 
actually requested by the customer of the end-user network.  (All of it if you 
remove network scans, ddos, web bots/crawlers).

Most networks have no SLA once things cross an unpaid boundary (SFI, or even 
private peering) - and if they are a customer and that path is congested, it’s 
up to the customer to upgrade that path.

- Jared (many hats)




RE: Routed optical networks

2023-05-11 Thread Vasilenko Eduard via NANOG
I was carefully looking at numbers for many 
countries: it was always possible to split CAGR for these 2 components. The 
video part was extremely consistent between countries. The subscriber part was 
100% proportional to subscriber CAGR.
Everything else up to now was “marketing” to say it mildly.

Reminder: nothing in nature could grow indefinitely. The limit always exists. 
It is only a question of when.

PS: Of course, marketing people could draw you any traffic growth – it depends 
just on the marketing budget.

Eduard
From: Dave Taht [mailto:dave.t...@gmail.com]
Sent: Tuesday, May 9, 2023 11:41 PM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>
Cc: Phil Bedard <bedard.p...@gmail.com>; Etienne-Victor Depasquale <ed...@ieee.org>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

Up until this moment I was feeling that my take on the decline of traffic 
growth was somewhat isolated, in that I have long felt that we are nearing the 
top of the S curve of the data we humans can create and consume. About the only 
source of future traffic growth I can think of comes from getting more humans 
online, and that is a mere another doubling.

On the other hand, predictions such as 640k should be enough for everyone did 
not pan out.

On the gripping hand, there has been an explosion of LLM stuff of late, with 
enormous models being widely distributed in just the past month:

https://lwn.net/Articles/930939/

Could the AIs takeoff lead to a resumption of traffic growth? I still don´t 
think so...


On Thu, May 4, 2023 at 10:59 PM Vasilenko Eduard via NANOG 
<nanog@nanog.org> wrote:
Disclaimer: Metaverse has not changed Metro traffic yet. Then …

I am puzzled when people talk about 400GE and Tbps in the Mero context.
For historical reasons, Metro is still about 2*2*10GE (one “2” for redundancy, 
another “2” for capacity) in the majority of cases worldwide.
How many BRASes serve more than 4/1.5=27k users in the busy hour?
It means that 50GE is the best interface now for the majority of cases. 
2*50GE=100Gbps is good room for growth.
Of course, exceptions could be. I know BRAS that handles 86k subscribers (do 
not recommend anybody to push the limits – it was so painful).

We have just 2 eyes and look at video content about 22h per week (on average). 
Our eyes do not permit us to see resolution better than particular for chosen 
distance (4k for typical TV, HD for smartphones, and so on). Color depth 10bits 
is enough for the majority, 12bits is sure enough for everybody. 120 frames/sec 
is enough for everybody. It would never change – it is our genetics.
Fortunately for Carriers, the traffic has a limit. You have probably seen that 
every year traffic growth % is decreasing. The Internet is stabilizing and 
approaching the plateau.
How much growth is still awaiting us? 1.5? 1.4? It needs separate research. The 
result would be tailored for whom would pay for the research.
IMHO: It is not mandatory that 100GE would become massive in the metro. (I know 
that 100GE is already massive in the DC CLOS)

Additionally, who would pay for this traffic growth? It also limits traffic at 
some point.
I hope it would happen after we would get our 22h/4k/12bit/120hz.

Now, you could argue that Metaverse would jump and multiply traffic by an 
additional 2x or 3x. Then 400GE may be needed.
Sorry, but it is speculation yet. It is not a trend like the current 
(declining) traffic growth.

Ed/
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org]
 On Behalf Of Phil Bedard
Sent: Thursday, May 4, 2023 8:32 PM
To: Etienne-Victor Depasquale <ed...@ieee.org>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

It’s not necessarily metro specific although the metro networks could lend 
themselves to overall optimizations.

The adoption of ZR/ZR+ IPoWDM currently somewhat corresponds with your adoption 
of 400G since today they require a QDD port.   There are 100G QDD ports but 
that’s not all that popular yet.   Of course there is work to do something 
similar in QSFP28 if the power can be reduced to what is supported by an 
existing QSFP28 port in most devices.   In larger networks with higher speed 
requirements and moving to 400G with QDD, using the DCO optics for connecting 
routers is kind of a no-brainer vs. a traditional muxponder.   Whether that’s 
over a ROADM based optical network or not, especially at metro/regional 
distances.

There are very large deployments of IPoDWDM over passive DWDM or dark fiber for 
access and aggregation networks where the aggregate required bandwidth doesn’t 
exceed the capabilities of those optics.  It’s been done at 10G for many years. 
 With the advent of pluggable EDFA amplifiers, you can even build links up to 
120km* (perfect dark fiber)  carrying

RE: Routed optical networks

2023-05-11 Thread Vasilenko Eduard via NANOG
I investigated traffic for every carrier I dealt with as a consultant (repeated many dozens of times).
I have seen over a decade how traffic growth dropped year-over-year (from 60% to 25% in 2019; I dropped this activity in 2020 when COVID blocked travel).
Sometimes I talk to old connections, and they confirm that it is even lower now.
In rare cases it is possible to find this information on the public Internet (I remember the case when Google disclosed traffic for Pakistan at a conference, with the explanation that 30% of the growth was attributed to new subscribers and an additional 30% to heavier content per subscriber).
But mostly it was confidential information from discussions with carriers - they all know their traffic growth very well.
In general, traffic statistics are pretty confidential. I did not have the motivation to aggregate them.

Sandvine is not representative of global traffic, because DPI is installed mostly for mobile networks. But a mobile subscriber generates about 10x less traffic than a fixed one - it is not the biggest source. Moreover, mobile would look like it is growing faster because the limiting factor was the technology (5G offered more than 4G, 4G offered much more than 3G) - it would probably be less disruptive in the future.
Fixed carriers do not pay DPI premiums, and they rarely share their traffic publicly. Sandvine cannot see it.

The VNI claims so many things. Please show where exactly it shows traffic growth (I am not interested in prediction speculations). Is it possible to derive the CAGR for the last 5 years? Is it declining or growing? (Traffic itself is, for sure, still growing.)

Of course, a disruption could come in any year and add a new S-curve (Metaverse?). But disruption is by definition not predictable.

PS: Everything above and below in this thread is just my personal opinion.

Eduard
From: Etienne-Victor Depasquale [mailto:ed...@ieee.org]
Sent: Thursday, May 11, 2023 12:48 PM
To: Vasilenko Eduard 
Cc: Dave Taht ; Phil Bedard ; NANOG 

Subject: Re: Routed optical networks

Eduard, academics cite the VNI (and the Sandvine Global reports).

Do you know of alternative sources that show traffic growth data you're more 
comfortable with?

Cheers,

Etienne

On Thu, May 11, 2023 at 9:34 AM Vasilenko Eduard 
<vasilenko.edu...@huawei.com> wrote:
But it is speculation, not a trend yet.
I remember 10y ago every presentation started from the claim that 100B of IoT 
would drive XXX traffic. It did not happen.
Now we see presentations that AI would be talking to AI that generates  
traffic.
Maybe some technology would push traffic next S-curve, maybe not. It is still 
speculation.

The traffic growth was stimulated (despite all VNIs) by 1) new subscribers, 2) 
video quality for subscribers. Nothing else yet.
It is almost finished for both trends. We are close to the plateau of these 
S-curves.
For some years (2013-2020) I was carefully looking at numbers for many 
countries: it was always possible to split CAGR for these 2 components. The 
video part was extremely consistent between countries. The subscriber part was 
100% proportional to subscriber CAGR.
Everything else up to now was “marketing” to say it mildly.

Reminder: nothing in nature could grow indefinitely. The limit always exists. 
It is only a question of when.

PS: Of course, marketing people could draw you any traffic growth – it depends 
just on the marketing budget.

Eduard
From: Dave Taht [mailto:dave.t...@gmail.com]
Sent: Tuesday, May 9, 2023 11:41 PM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>
Cc: Phil Bedard <bedard.p...@gmail.com>; Etienne-Victor Depasquale <ed...@ieee.org>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

Up until this moment I was feeling that my take on the decline of traffic 
growth was somewhat isolated, in that I have long felt that we are nearing the 
top of the S curve of the data we humans can create and consume. About the only 
source of future traffic growth I can think of comes from getting more humans 
online, and that is a mere another doubling.

On the other hand, predictions such as 640k should be enough for everyone did 
not pan out.

On the gripping hand, there has been an explosion of LLM stuff of late, with 
enormous models being widely distributed in just the past month:

https://lwn.net/Articles/930939/

Could the AIs takeoff lead to a resumption of traffic growth? I still don´t 
think so...


On Thu, May 4, 2023 at 10:59 PM Vasilenko Eduard via NANOG 
<nanog@nanog.org> wrote:
Disclaimer: Metaverse has not changed Metro traffic yet. Then …

I am puzzled when people talk about 400GE and Tbps in the Mero context.
For historical reasons, Metro is still about 2*2*10GE (one “2” for redundancy, 
another “2” for capacity) in the majority of cases worldwide.
How many BRASes serve more than 4/1.5=27k users in the busy hour?
It

RE: Routed optical networks

2023-05-11 Thread Vasilenko Eduard via NANOG
But it is speculation, not a trend yet.
I remember that 10 years ago every presentation started with the claim that 100B IoT devices would drive XXX traffic. It did not happen.
Now we see presentations claiming that AI talking to AI will generate traffic.
Maybe some technology will push traffic onto the next S-curve, maybe not. It is still speculation.

The traffic growth was stimulated (despite all the VNIs) by 1) new subscribers and 2) better video quality for subscribers. Nothing else yet.
Both trends are almost finished. We are close to the plateau of these S-curves.
For some years (2013-2020) I was carefully looking at the numbers for many countries: it was always possible to split the CAGR into these 2 components. The video part was extremely consistent between countries. The subscriber part was 100% proportional to the subscriber CAGR.
Everything else up to now has been “marketing”, to say it mildly.

Reminder: nothing in nature can grow indefinitely. The limit always exists. It is only a question of when.

PS: Of course, marketing people could draw you any traffic growth - it depends only on the marketing budget.

Eduard
From: Dave Taht [mailto:dave.t...@gmail.com]
Sent: Tuesday, May 9, 2023 11:41 PM
To: Vasilenko Eduard 
Cc: Phil Bedard ; Etienne-Victor Depasquale 
; NANOG 
Subject: Re: Routed optical networks

Up until this moment I was feeling that my take on the decline of traffic 
growth was somewhat isolated, in that I have long felt that we are nearing the 
top of the S curve of the data we humans can create and consume. About the only 
source of future traffic growth I can think of comes from getting more humans 
online, and that is a mere another doubling.

On the other hand, predictions such as 640k should be enough for everyone did 
not pan out.

On the gripping hand, there has been an explosion of LLM stuff of late, with 
enormous models being widely distributed in just the past month:

https://lwn.net/Articles/930939/

Could the AIs takeoff lead to a resumption of traffic growth? I still don´t 
think so...


On Thu, May 4, 2023 at 10:59 PM Vasilenko Eduard via NANOG 
<nanog@nanog.org> wrote:
Disclaimer: Metaverse has not changed Metro traffic yet. Then …

I am puzzled when people talk about 400GE and Tbps in the Mero context.
For historical reasons, Metro is still about 2*2*10GE (one “2” for redundancy, 
another “2” for capacity) in the majority of cases worldwide.
How many BRASes serve more than 4/1.5=27k users in the busy hour?
It means that 50GE is the best interface now for the majority of cases. 
2*50GE=100Gbps is good room for growth.
Of course, exceptions could be. I know BRAS that handles 86k subscribers (do 
not recommend anybody to push the limits – it was so painful).

We have just 2 eyes and look at video content about 22h per week (on average). 
Our eyes do not permit us to see resolution better than particular for chosen 
distance (4k for typical TV, HD for smartphones, and so on). Color depth 10bits 
is enough for the majority, 12bits is sure enough for everybody. 120 frames/sec 
is enough for everybody. It would never change – it is our genetics.
Fortunately for Carriers, the traffic has a limit. You have probably seen that 
every year traffic growth % is decreasing. The Internet is stabilizing and 
approaching the plateau.
How much growth is still awaiting us? 1.5? 1.4? It needs separate research. The 
result would be tailored for whom would pay for the research.
IMHO: It is not mandatory that 100GE would become massive in the metro. (I know 
that 100GE is already massive in the DC CLOS)

Additionally, who would pay for this traffic growth? It also limits traffic at 
some point.
I hope it would happen after we would get our 22h/4k/12bit/120hz.

Now, you could argue that Metaverse would jump and multiply traffic by an 
additional 2x or 3x. Then 400GE may be needed.
Sorry, but it is speculation yet. It is not a trend like the current 
(declining) traffic growth.

Ed/
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org]
 On Behalf Of Phil Bedard
Sent: Thursday, May 4, 2023 8:32 PM
To: Etienne-Victor Depasquale <ed...@ieee.org>; NANOG <nanog@nanog.org>
Subject: Re: Routed optical networks

It’s not necessarily metro specific although the metro networks could lend 
themselves to overall optimizations.

The adoption of ZR/ZR+ IPoWDM currently somewhat corresponds with your adoption 
of 400G since today they require a QDD port.   There are 100G QDD ports but 
that’s not all that popular yet.   Of course there is work to do something 
similar in QSFP28 if the power can be reduced to what is supported by an 
existing QSFP28 port in most devices.   In larger networks with higher speed 
requirements and moving to 400G with QDD, using the DCO optics for connecting 
routers is kind of a no-brainer vs. a traditional muxponder.   Whether that’s 
over a ROADM based opti

RE: Routed optical networks

2023-05-05 Thread Vasilenko Eduard via NANOG
There are places in the world (like the Middle East) where a telephony system did not exist historically.
So there is no copper, and hence no ducts.
And new fiber is then very difficult.

But there are many more places where the telephony system did exist.
Ed/
From: Mike Hammett [mailto:na...@ics-il.net]
Sent: Friday, May 5, 2023 4:50 PM
To: Vasilenko Eduard 
Cc: Mark Tinka ; nanog@nanog.org
Subject: Re: Routed optical networks

Incumbents are great at momentum. They're not great at innovation, customer experience, etc. The only reason most incumbents are still relevant is their prior market size.

Around here, the incumbent telcos still have lead-sheathed cables in the 
ground, not removing anything. Often, things are abandoned in place, unless 
there's a good enough reason to remove it.

I'm placing my own ducts into the ground and putting my own fiber in it. I 
still put in DWDM between my facilities to minimize the consumption of 
resources. A couple of hundred bucks for a DWDM optic is cheaper than a strand 
between two locations.


-
Mike Hammett
Intelligent Computing Solutions <http://www.ics-il.com/>
Midwest Internet Exchange <http://www.midwest-ix.com/>
The Brothers WISP <http://www.thebrotherswisp.com/>

From: "Vasilenko Eduard via NANOG" mailto:nanog@nanog.org>>
To: "Mark Tinka" mailto:mark@tinka.africa>>, 
nanog@nanog.org<mailto:nanog@nanog.org>
Sent: Wednesday, May 3, 2023 4:10:19 AM
Subject: RE: Routed optical networks
> Yeah, you sound like an equipment vendor whose main customers are incumbent 
> telco's in a few rich markets :-).
You are right. My message was pretty much geared toward incumbents.
But the majority of the access/aggregation is in their possession, isn’t it?
They typically have ducts that were sized for copper that has already been extracted.
One more fiber cable would be easy.

I agree that for competitive carriers DWDM would be needed more often.
Even for competitive carriers, it makes sense to evaluate the cost of putting fiber into the incumbents’ ducts,
especially because in some countries the price is regulated.
It would solve the problem forever - no need for DWDM speed upgrades.

I am only asking that this option not be forgotten in the evaluation. Reminder: dark fiber is the best technical solution, for sure.

Ed/
From: Mark Tinka [mailto:mark@tinka.africa]
Sent: Wednesday, May 3, 2023 11:39 AM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>; nanog@nanog.org
Subject: Re: Routed optical networks


On 5/3/23 08:20, Vasilenko Eduard wrote:
I would risk saying a little more on this.
Indeed, there may be situations (in many countries) where the carrier sells a lot of TDM services.
But in general, packet services are enough these days for many carriers/regions.

There aren't enough TDM services to warrant DWDM, nowadays.

The reason for DWDM is mainly being driven by Ethernet, and IP.

At any reasonable scale, it's actually pretty hard to buy a TDM service, in 
most markets.



Additionally, I am sure that in many countries/Metro it is cheaper to lay down 
a new fiber than to provision DWDM, even if it is a pizza box.

I disagree. Existing fibre may be cheap because it was laid down a decade or 
more ago, en masse, by several operators. So the market would be experiencing a 
glut, not because it is cheap to open up the roads and plant more fibre, but 
because there is so much of it to begin with.

At worst, there is still enough duct space that the operator can blow more 
fibre. But when that duct gets full, and there are no more free ducts 
available, or another route needs to get built for whatever reason, it is a 
rather costly affair to open up the roads and trunk some fibre, in any market.

So no, DWDM is not more expensive, if you are delivering services at scale. It 
is actually cheaper. It is only more expensive if you are small scale, because 
in some markets, the fibre glut means you can buy dark fibre for cheaper than 
you can light it with DWDM. But this is a situation unique to small operators, 

RE: Routed optical networks

2023-05-05 Thread Vasilenko Eduard via NANOG
Hi Mark,
Thanks a lot for your many valuable comments, with which I almost always agree.


1.   I agree that 50GE has not gained the same popularity as 100GE. Many vendors ignored it for some time. It looks like not many ignore it now.

2.   Even in your example for 40km, 100GE is about twice as expensive as 50GE.

3.   Hence, I had to google for a cheaper proposition at 10km. For obvious reasons, I cannot reference my employer (my assumption about the cost is based on this comparison).
https://opticswave.com/collections/50g-qsfp28
https://www.compufox.com/50G_QSFP28_Transceivers_s/3036.htm
https://www.genuinemodules.com/033030600050
It looks like I have found options twice as cheap in public information.

4.   Note that bidirectional optics are available too.

5.   The public price is not what you get in a real tender. We are talking about big networks and, hence, big tenders.

Eduard
From: Mark Tinka [mailto:mark@tinka.africa]
Sent: Friday, May 5, 2023 1:17 PM
To: Vasilenko Eduard ; nanog@nanog.org
Subject: Re: Routed optical networks


On 5/5/23 10:54, Vasilenko Eduard wrote:

50GE is better just because it is half of the cost of 100GE and it is enough 
now for the great majority of cases. Money is very important these days for 
this industry. 100GE single mode is more expensive than the best router port 
itself. Routers have been deprecated 10x for the decade (almost 100x for 2 
decades). Pluggable optics is not that much deprecated.

Not sure where your pricing is coming from, but if I look at Flexoptix's 50Gbps 
QSFP28 optics pricing, I am getting:

  *   EUR724 @ 10km.
  *   EUR1,246 @ 40km.

They are also selling an SFP56 LR for EUR925.

Juxtapose that against 100Gbps pricing:

  *   EUR473 @ 10km.
  *   EUR1,300 @ 25km.
  *   EUR1,500 @ 30km.
  *   EUR2,600 @ 40km.
  *   EUR3,925 @ 80km.

Doesn't immediately seem to me that 50Gbps is cheaper than 100Gbps. There also 
don't seem to be as many deployments of 50Gbps in the metro (same could be said 
for 25Gbps and 40Gbps), but others on the list can chime in with what they are 
seeing/doing.
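A quick per-Gbps view of the list prices quoted above (Python; the EUR figures are copied from this email, and tender discounts are not considered):

prices_eur = {("50G", "10km"): 724, ("50G", "40km"): 1246,
              ("100G", "10km"): 473, ("100G", "40km"): 2600}

for (speed, reach), eur in prices_eur.items():
    gbps = int(speed.rstrip("G"))
    print(f"{speed:>4} @ {reach}: EUR {eur:>4} total, EUR {eur / gbps:6.2f} per Gbps")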



I do not think that content provider guys call their DCI “Metro”, not very 
often.

Well, whatever they call it, the concept is the same - move lots of traffic 
across town between data centres.



I agree that 100GE for DCI is the minimum, 400GE is probably already needed in 
some places.
IMHO: it is a different story. Very interested too.

Most content providers have no choice but to run DWDM, for even very short 
spans between data centres. That is because it is just cheaper and simpler to 
pack Tbps of capacity in DWDM for the price than you can in a router. And 
besides, most routers don't need to carry Tbps of traffic in a single line 
card, which would be a waste of a fibre pair over that distance.

In such cases, better to use DWDM and drop capacity on individual routers 
and/or line cards as you see fit.




PS: By the way, even if some ISP has 50% of revenue from Enterprise services 
(it is probably the biggest number, typically 30%-40%), it is still just 5% 
compare to residential traffic. Traffic to enterprises is still sold 4x-10x 
(depending on the country).

That is why residential Access networks tend to be 2nd class citizens :-).



Hence, Enterprise does not make sense to mention in the traffic discussion. It 
is a “rounding error”.
Enterprise business created a huge demand for oversubscribed ports to connect 
Enterprises. And QoS/QoE. Not traffic.

Well, not all operators that offer enterprise services also do consumer 
broadband, or vice versa. To a network doing only one or the other, whatever 
traffic they are carrying means the world to them. It's not ours to decide what 
is high or low traffic... that privilege always remains with the network 
operator.

Mark.


RE: Routed optical networks

2023-05-05 Thread Vasilenko Eduard via NANOG
You are right, my “Metro” definition is about ISPs/carriers.
Mobile or fixed - although pure mobile operators would prefer to call it MBH - it has much less traffic (mobile subscribers will always generate about 7x less than fixed ones).
It is still the place where the majority of port capacity lives, because all content and caches sit behind this link.

50GE is better just because it is half the cost of 100GE, and it is enough now for the great majority of cases. Money is very important these days for this industry. A 100GE single-mode optic is more expensive than the best router port itself. Router prices have dropped about 10x over the decade (almost 100x over two decades). Pluggable optics have not dropped as much.

I do not think that the content provider guys call their DCI “Metro” - not very often, anyway.
I agree that 100GE is the minimum for DCI; 400GE is probably already needed in some places.
IMHO it is a different story. I am very interested in it too.

PS: By the way, even if some ISP gets 50% of its revenue from enterprise services (that is probably the highest figure; typically it is 30%-40%), that is still just 5% compared to residential traffic. Traffic to enterprises is still sold at 4x-10x the price (depending on the country).
Hence, it does not make sense to mention enterprise in the traffic discussion. It is a “rounding error”.
The enterprise business created a huge demand for oversubscribed ports to connect enterprises, and for QoS/QoE. Not for traffic.

Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mark Tinka
Sent: Friday, May 5, 2023 11:13 AM
To: nanog@nanog.org
Subject: Re: Routed optical networks


On 5/5/23 07:57, Vasilenko Eduard via NANOG wrote:
Disclaimer: Metaverse has not changed Metro traffic yet. Then …

I am puzzled when people talk about 400GE and Tbps in the Mero context.
For historical reasons, Metro is still about 2*2*10GE (one “2” for redundancy, 
another “2” for capacity) in the majority of cases worldwide.
How many BRASes serve more than 4/1.5=27k users in the busy hour?
It means that 50GE is the best interface now for the majority of cases. 
2*50GE=100Gbps is good room for growth.
Of course, exceptions could be. I know BRAS that handles 86k subscribers (do 
not recommend anybody to push the limits – it was so painful).

We have just 2 eyes and look at video content about 22h per week (on average). 
Our eyes do not permit us to see resolution better than particular for chosen 
distance (4k for typical TV, HD for smartphones, and so on). Color depth 10bits 
is enough for the majority, 12bits is sure enough for everybody. 120 frames/sec 
is enough for everybody. It would never change – it is our genetics.
Fortunately for Carriers, the traffic has a limit. You have probably seen that 
every year traffic growth % is decreasing. The Internet is stabilizing and 
approaching the plateau.
How much growth is still awaiting us? 1.5? 1.4? It needs separate research. The 
result would be tailored for whom would pay for the research.
IMHO: It is not mandatory that 100GE would become massive in the metro. (I know 
that 100GE is already massive in the DC CLOS)

Additionally, who would pay for this traffic growth? It also limits traffic at 
some point.
I hope it would happen after we would get our 22h/4k/12bit/120hz.

Now, you could argue that Metaverse would jump and multiply traffic by an 
additional 2x or 3x. Then 400GE may be needed.
Sorry, but it is speculation yet. It is not a trend like the current 
(declining) traffic growth.

So, it depends on what "metro" means to you.

For an ISP selling connectivity to enterprise customers, it can be a bunch of 
Metro-E routers deployed in various commercial buildings within a city. For a 
content provider, it could be DCI. For a telco, it could interconnecting their 
Active-E/GPON/DSLAM/CMTS network.

Whatever the case, the need for 100Gbps is going to be driven by the cost of 
optics over the distance required. Some operators run 2x 10Gbps for 
resilience/redundancy, while some others run 4x 10Gbps for the same. It all 
depends on the platform you are using. At some point, that capacity runs out, 
especially when you account for fibre outages, and you need something larger on 
one side of the ring mainly to provide sufficient bandwidth during failure 
events on the other side of the ring, and not necessarily because you are 
growing by that much.

Also, if the optics are available and are reasonably priced, why muck around 
with 40Gbps when you can just go straight to 100Gbps? The equipment usually can 
support either.

I'm unaware of any popularity around 50Gbps interfaces, but I also probably 
don't pay too much attention to such nuance :-).

So, it's not that we are seeing organic growth that justifies 100Gbps over 
anything smaller. It's more that the optics are available, they are cheap, they 
can go the distance, and the routers/switches can do the speed. At least, for 
us anyway, that is what is driving the next phase of our Metro-E network... 
going str

RE: Routed optical networks

2023-05-04 Thread Vasilenko Eduard via NANOG
Disclaimer: Metaverse has not changed Metro traffic yet. Then ...

I am puzzled when people talk about 400GE and Tbps in the Metro context.
For historical reasons, Metro is still about 2*2*10GE (one "2" for redundancy, another "2" for capacity) in the majority of cases worldwide.
How many BRASes serve more than 40Gbps / 1.5Mbps ≈ 27k users in the busy hour?
It means that 50GE is the best interface now for the majority of cases. 2*50GE=100Gbps gives good room for growth.
Of course, there can be exceptions. I know a BRAS that handles 86k subscribers (I do not recommend that anybody push the limits - it was so painful).
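A back-of-the-envelope sketch of the sizing above (Python; the 1.5 Mbps busy-hour average per subscriber is the figure implied by "4/1.5 = 27k" in this post):

busy_hour_mbps_per_sub = 1.5
legacy_gbps = 2 * 2 * 10     # 2*10GE in each direction = 40 Gbps
proposed_gbps = 2 * 50       # 2*50GE = 100 Gbps

for label, gbps in (("2*2*10GE", legacy_gbps), ("2*50GE", proposed_gbps)):
    subs = gbps * 1000 / busy_hour_mbps_per_sub
    print(f"{label}: {gbps} Gbps -> ~{subs:,.0f} subscribers in the busy hour")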

We have just two eyes and watch video content for about 22 hours per week (on average).
Our eyes cannot resolve more than a particular resolution at a given viewing 
distance (4K for a typical TV, HD for smartphones, and so on). A colour depth of 10 bits 
is enough for the majority, 12 bits is certainly enough for everybody, and 120 frames/sec 
is enough for everybody. That will never change - it is our genetics.
Fortunately for carriers, traffic has a limit. You have probably seen that 
the year-on-year traffic growth percentage is decreasing. The Internet is stabilizing and 
approaching the plateau.
How much growth is still awaiting us? 1.5x? 1.4x? That needs separate research, and the 
result would be tailored to whoever pays for it.
IMHO: it is not a given that 100GE will become ubiquitous in the metro. (I know 
that 100GE is already ubiquitous in the DC CLOS.)
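
A small sketch of the per-screen ceiling implied by the 4K/12-bit/120 Hz point above (Python; resolution, bit depth, frame rate and the 22 h/week come from the paragraph, while the 25 Mbps delivered rate is purely an assumption for illustration):

  # Per-screen ceiling implied by 4K / 12-bit / 120 Hz, watched ~22 h per week.
  WIDTH, HEIGHT  = 3840, 2160
  FPS            = 120
  BITS_PER_PIXEL = 12 * 3               # 12 bits per colour channel, 3 channels (assumption)
  HOURS_PER_WEEK = 22

  raw_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
  print(round(raw_bps / 1e9, 1), "Gbps raw, uncompressed")        # ~35.8 Gbps

  # The delivered rate depends entirely on the codec; 25 Mbps is a purely
  # hypothetical figure used only to illustrate the order of magnitude.
  DELIVERED_MBPS = 25
  weekly_gb = DELIVERED_MBPS * 1e6 / 8 * HOURS_PER_WEEK * 3600 / 1e9
  print(round(weekly_gb), "GB per screen per week at the assumed delivered rate")  # ~248 GB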

Additionally, who would pay for this traffic growth? That, too, caps traffic at 
some point.
I hope it happens only after we get our 22h/4K/12-bit/120 Hz.

Now, you could argue that the Metaverse will take off and multiply traffic by an 
additional 2x or 3x; then 400GE may be needed.
Sorry, but that is still speculation. It is not a trend like the current 
(declining) traffic growth.

Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Phil Bedard
Sent: Thursday, May 4, 2023 8:32 PM
To: Etienne-Victor Depasquale ; NANOG 
Subject: Re: Routed optical networks

It's not necessarily metro specific although the metro networks could lend 
themselves to overall optimizations.

The adoption of ZR/ZR+ IPoWDM currently somewhat corresponds with your adoption 
of 400G since today they require a QDD port.   There are 100G QDD ports but 
that's not all that popular yet.   Of course there is work to do something 
similar in QSFP28 if the power can be reduced to what is supported by an 
existing QSFP28 port in most devices.   In larger networks with higher speed 
requirements and moving to 400G with QDD, using the DCO optics for connecting 
routers is kind of a no-brainer vs. a traditional muxponder.   Whether that's 
over a ROADM based optical network or not, especially at metro/regional 
distances.

There are very large deployments of IPoDWDM over passive DWDM or dark fiber for 
access and aggregation networks where the aggregate required bandwidth doesn't 
exceed the capabilities of those optics.  It's been done at 10G for many years. 
 With the advent of pluggable EDFA amplifiers, you can even build links up to 
120km* (perfect dark fiber)  carrying tens of terabits of traffic without any 
additional active optical equipment.
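
As a rough, hedged illustration of why the pluggable amplifier matters at that distance (Python; the attenuation, margin and channel-plan figures are planning assumptions, not numbers from this post):

  # Rough span-loss sketch for a 120 km dark-fibre span.
  # 0.22 dB/km and the 3 dB of connector/splice margin are planning assumptions.
  ATTENUATION_DB_PER_KM = 0.22
  MARGIN_DB             = 3.0

  span_loss_db = 120 * ATTENUATION_DB_PER_KM + MARGIN_DB
  print(round(span_loss_db, 1), "dB")        # ~29.4 dB of loss to overcome, hence the EDFA

  # "Tens of terabits" with, say, a hypothetical 64 x 400G channel plan:
  print(64 * 400 / 1000, "Tbps")             # 25.6 Tbps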

It's my personal opinion we aren't to the days yet of where we can simply build 
an all packet network with no photonic switching that carries all services, but 
eventually (random # of years) it gets there for many networks.  There are also 
always going to be high performance applications for transponders where 
pluggable optics aren't a good fit.

Carrying high speed private line/wavelength type services as well is a 
different topic than interconnecting IP devices.

Thanks,
Phil


From: NANOG 
mailto:nanog-bounces+bedard.phil=gmail@nanog.org>>
 on behalf of Etienne-Victor Depasquale via NANOG 
mailto:nanog@nanog.org>>
Date: Monday, May 1, 2023 at 2:30 PM
To: NANOG mailto:nanog@nanog.org>>
Subject: Routed optical networks
Hello folks,

Simple question: does "routed optical networks" have a clear meaning in the 
metro area context, or not?

Put differently: does it call to mind a well-defined stack of technologies in 
the control and data planes of metro-area networks?

I'm asking because I'm having some thoughts about the clarity of this term, in 
the process of carrying out a qualitative survey of the results of the 
metro-area networks survey.

Cheers,

Etienne

--
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


RE: Routed optical networks

2023-05-04 Thread Vasilenko Eduard via NANOG
> The economics are such these days that in many circumstances, bean counters 
> don't want to hear about payback in years, they want to hear it in quarters.  
> Short term financial thinking is dominant.
True. The industry is in decline, on the way to becoming just another utility.
But then any project is a challenge - not just fiber, which may be cheaper than 
DWDM for the Metro.
Eduard
From: Tom Beecher [mailto:beec...@beecher.cc]
Sent: Thursday, May 4, 2023 3:26 PM
To: Vasilenko Eduard 
Cc: Denis Fondras ; nanog@nanog.org
Subject: Re: Routed optical networks

Well, ISP is typically plan something for a year. It is more than enough for 
both.

s/more/should be/

The economics are such these days that in many circumstances, bean counters 
don't want to hear about payback in years, they want to hear it in quarters.  
Short term financial thinking is dominant.

On Thu, May 4, 2023 at 6:59 AM Vasilenko Eduard via NANOG 
mailto:nanog@nanog.org>> wrote:
Well, ISP is typically plan something for a year. It is more than enough for 
both.

Funny, that with the current lead times for electronics, Fiber could be faster.
Of course, it is a temporary glitch.

Ed/
-Original Message-
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On Behalf Of Denis Fondras
Sent: Thursday, May 4, 2023 12:41 PM
To: nanog@nanog.org
Subject: Re: Routed optical networks

On Wed, May 03, 2023 at 06:20:48AM +, Vasilenko Eduard via NANOG wrote:
>
> Additionally, I am sure that in many countries/Metro it is cheaper to lay 
> down a new fiber than to provision DWDM, even if it is a pizza box. The 
> colored interface is still very expensive.
> Of course, there are some Cities (not “towns”) where it is very expensive or 
> maybe even impossible to lay down a new fiber.
> Yes, in the majority of cases, it is cheaper to lay down fiber.
>

You may also take into account the time to deliver.
Laying fiber takes much more time than plugging a colored optic.


RE: Routed optical networks

2023-05-04 Thread Vasilenko Eduard via NANOG
I had experience at one big PTT. Fiber was easy in the majority of Metro 
places - even faster to provision than DWDM or router commissioning.
It is just one PTT, though, so a single example does not count for much.
Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mark Tinka
Sent: Thursday, May 4, 2023 2:11 PM
To: nanog@nanog.org
Subject: Re: Routed optical networks



On 5/4/23 12:58, Vasilenko Eduard via NANOG wrote:

> Well, ISP is typically plan something for a year. It is more than enough for 
> both.

The real world is much less certain, especially in these economic times.


> Funny, that with the current lead times for electronics, Fiber could be 
> faster.
> Of course, it is a temporary glitch.

Even in the same market, no two lays of fibre can be guaranteed to be completed 
in the same time.

Mark.



RE: Routed optical networks

2023-05-04 Thread Vasilenko Eduard via NANOG
Well, ISP is typically plan something for a year. It is more than enough for 
both.

Funny, that with the current lead times for electronics, Fiber could be faster.
Of course, it is a temporary glitch.

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Denis Fondras
Sent: Thursday, May 4, 2023 12:41 PM
To: nanog@nanog.org
Subject: Re: Routed optical networks

On Wed, May 03, 2023 at 06:20:48AM +, Vasilenko Eduard via NANOG wrote:
> 
> Additionally, I am sure that in many countries/Metro it is cheaper to lay 
> down a new fiber than to provision DWDM, even if it is a pizza box. The 
> colored interface is still very expensive.
> Of course, there are some Cities (not “towns”) where it is very expensive or 
> maybe even impossible to lay down a new fiber.
> Yes, in the majority of cases, it is cheaper to lay down fiber.
> 

You may also take into account the time to deliver.
Laying fiber takes much more time than plugging a colored optic.



RE: Routed optical networks

2023-05-03 Thread Vasilenko Eduard via NANOG
> Yeah, you sound like an equipment vendor whose main customers are incumbent 
> telco's in a few rich markets :-).
You are right. My message was pretty much geared toward incumbents.
But the majority of the access/aggregation is in their possession, isn't it?
They typically have ducts that were sized for copper which has already been extracted.
One more fiber cable would be easy.

Agreed that for competitive carriers DWDM would be needed more often.
Even for competitive carriers, it makes sense to evaluate the cost of putting fiber 
into the incumbent's ducts, especially because in some countries the price is regulated.
It would solve the problem forever - no need for DWDM speed upgrades.

I am just asking that this option not be forgotten in the evaluation. Reminder: dark 
fiber is the best technical solution, for sure.

Ed/
From: Mark Tinka [mailto:mark@tinka.africa]
Sent: Wednesday, May 3, 2023 11:39 AM
To: Vasilenko Eduard ; nanog@nanog.org
Subject: Re: Routed optical networks


On 5/3/23 08:20, Vasilenko Eduard wrote:
I would risk saying a little more on this.
Indeed, there may be situations (in many countries) where the Carrier sells a lot of 
TDM services.
But in general, packet services are enough these days for many carriers/regions.

There aren't enough TDM services to warrant DWDM, nowadays.

The reason for DWDM is mainly being driven by Ethernet, and IP.

At any reasonable scale, it's actually pretty hard to buy a TDM service, in 
most markets.




Additionally, I am sure that in many countries/Metro it is cheaper to lay down 
a new fiber than to provision DWDM, even if it is a pizza box.

I disagree. Existing fibre may be cheap because it was laid down a decade or 
more ago, en masse, by several operators. So the market would be experiencing a 
glut, not because it is cheap to open up the roads and plant more fibre, but 
because there is so much of it to begin with.

At worst, there is still enough duct space that the operator can blow more 
fibre. But when that duct gets full, and there are no more free ducts 
available, or another route needs to get built for whatever reason, it is a 
rather costly affair to open up the roads and trunk some fibre, in any market.

So no, DWDM is not more expensive, if you are delivering services at scale. It 
is actually cheaper. It is only more expensive if you are small scale, because 
in some markets, the fibre glut means you can buy dark fibre for cheaper than 
you can light it with DWDM. But this is a situation unique to small operators, 
not large ones.



The colored interface is still very expensive.

This only matters for the line side.

For client-facing, it's not a drama. And you typically buy more optics for the 
client side than you do the line side.



Of course, there are some Cities (not “towns”) where it is very expensive or 
maybe even impossible to lay down a new fiber.
Yes, in the majority of cases, it is cheaper to lay down fiber.

I think what you mean to say is that in the majority of cases where there is 
fibre glut, and dark fibre is a market option, buying fibre is cheaper than 
lighting it with DWDM. This is true. But I think that on a global scale, this 
is the exception, not the rule.

In general, you are not likely to be able to buy dark fibre, cheaply or 
otherwise, if you look at all markets in the world.



Hence, the importance of DWDM for the Metro is overestimated.

Again, only if you are small scale.

If you are a large scale operator with as many IP/Ethernet customers as you 
have Transport, DWDM is essential.




Use only routers. Provision enough fiber. Always have one router hop to the 
aggregation (hub-and-spoke topology), with no chaining of routers around the ring.
If fiber is not enough, then use normal DWDM with an external transponder. 
The routers would still be in a hub-and-spoke topology.

Yeah, you sound like an equipment vendor whose main customers are incumbent 
telco's in a few rich markets :-).

The life of the average operator, around the world, is far less glamorous.

Mark.


RE: Routed optical networks

2023-05-03 Thread Vasilenko Eduard via NANOG
> At that scale, DWDM in the metro will make sense
I would risk saying a little more on this.
Indeed, there may be situations (in many countries) where the Carrier sells a lot of 
TDM services.
But in general, packet services are enough these days for many carriers/regions.

Additionally, I am sure that in many countries/Metro it is cheaper to lay down 
a new fiber than to provision DWDM, even if it is a pizza box. The colored 
interface is still very expensive.
Of course, there are some Cities (not “towns”) where it is very expensive or 
maybe even impossible to lay down a new fiber.
Yes, in the majority of cases, it is cheaper to lay down fiber.

Hence, the importance of DWDM for the Metro is overestimated.

Use only routers. Provision enough fiber. Always have one router hop to the 
aggregation (hub-and-spoke topology), with no chaining of routers around the ring.
If fiber is not enough, then use normal DWDM with an external transponder. 
The routers would still be in a hub-and-spoke topology.

Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mark Tinka
Sent: Wednesday, May 3, 2023 7:09 AM
To: nanog@nanog.org
Subject: Re: Routed optical networks


On 5/2/23 07:28, Vasilenko Eduard via NANOG wrote:
The incumbent carrier typically has enough fiber strands to avoid any colored 
interfaces (which are 3x as expensive as gray) in the Metro.
A Metro ring typically has 8-10 nodes (or similar). A cable of only 16-20 strands 
would not be constructed anyway - any real cable is bigger.
It costs the same to lay down a fiber cable of 16 strands or 32.
Hence, the PTT simply does not need DWDM in the Metro at all, and the DWDM 
optimization that you are talking about below is not needed either.

This may or may not always be the case. Especially for large carriers, where 
there could be a requirement to sell some of those dark fibre pairs to large 
customers (think the content folk coming into town, e.t.c.), they may no longer 
have the priviledge of having plenty of free fibre in the metro. Or if they 
did, the rate of traffic expansion means they burn through those fibre pairs 
pretty quick.

10Gbps isn't a lot nowadays, and 100Gbps may start to push the limits depending 
on the size of the operator, the scope of the Metro-E ring and the level of 
service that needs to be maintained during a re-route (two available paths in 
the ring could balance 100Gbps of traffic, but if one half of that ring breaks, 
the remaining path may need to carry a lot more than 100Gbps, and then packets 
start to fall flat on the floor).
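
A tiny sketch of that failure case (Python; the 150 Gbps offered load is a hypothetical figure, only the 100 Gbps path size comes from this paragraph):

  # Ring-failure arithmetic: each direction is a 100 Gbps path, the offered load is hypothetical.
  path_capacity_gbps = 100      # each ring direction
  offered_gbps       = 150      # total metro load, normally balanced across both directions

  normal_per_side = offered_gbps / 2        # 75 Gbps  -> fits
  after_cut       = offered_gbps            # one side broken: everything on the survivor

  print(normal_per_side <= path_capacity_gbps)   # True
  print(after_cut <= path_capacity_gbps)         # False -> packets on the floor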

At that scale, DWDM in the metro will make sense, at least more sense than 
400G-ZR, at the moment.




If you rent a single pair of fiber then you need colored interfaces to 
multiplex 8-10 nodes into 1 pair on the ring.
Then the movement of transponders from DWDM into the router would eliminate 2 
gray interfaces on every node (4 per link): one on the router side, and another 
on the DWDM side.
Overall, it is about a 25% cost cut of the whole “router+DWDM”.

Some operators would also be selling Transport services in or along the metro, 
and customers paying for that may require that they do not cross a router 
device.



It is still 2x more expensive compared with using additional fiber strands on YOUR 
own fiber.

There are plenty of DWDM pizza boxes that cost next to nothing. At scale, the 
price of these is not a stumbling block. And certainly, the price of these 
would be far lower than a router line card.




By the way, about a "well-defined stack of technologies":
The NMS (polished by SDN these days) should be cross-layer: it should manage, at the 
same time, the ROADM/OADM in the DWDM and the colored laser in the router.
So far this is vendor lock-in (no multi-vendor support). Hence, the 25% cost savings would 
go to the vendor that has such an NMS, not to the carrier.
The technology still does not make sense, because there is no multi-vendor support between the 
NMS of one vendor and the router or DWDM of another.
Looking at NMS history, it will probably never be multi-vendor. For that 
reason, I am pessimistic about the future of colored interfaces in routers 
(and alien lambdas in DWDM), despite a potential 25% cost advantage from 
eliminating gray interfaces.

OpenROADM is a good initiative. But it seems it's to be to Transport equipment 
vendors what IPv6 and DNSSEC is to the IP world :-).

Mark.


RE: Routed optical networks

2023-05-01 Thread Vasilenko Eduard via NANOG
Hi Etienne,
It depends on who is the owner of the fiber.

The incumbent carrier typically has enough fiber strands to avoid any colored 
interfaces (which are 3x as expensive as gray) in the Metro.
A Metro ring typically has 8-10 nodes (or similar). A cable of only 16-20 strands 
would not be constructed anyway - any real cable is bigger.
It costs the same to lay down a fiber cable of 16 strands or 32.
Hence, the PTT simply does not need DWDM in the Metro at all, and the DWDM 
optimization that you are talking about below is not needed either.

If you rent a single pair of fiber then you need colored interfaces to 
multiplex 8-10 nodes into 1 pair on the ring.
Then the movement of transponders from DWDM into the router would eliminate 2 
gray interfaces on every node (4 per link): one on the router side, and another 
on the DWDM side.
Overall, it is about a 25% cost cut of the whole “router+DWDM”.
It is still 2x more expensive compared with using additional fiber strands on YOUR 
own fiber.
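
A minimal sketch of where the saving comes from (Python; the unit costs are hypothetical, only the ~3x colored/gray price ratio is taken from this thread):

  # Where the saving comes from; unit costs are hypothetical, only the
  # "colored is ~3x the price of gray" ratio is taken from this thread.
  GRAY, COLORED = 1.0, 3.0

  traditional = GRAY + GRAY + COLORED   # router gray + transponder client gray + transponder line
  ipodwdm     = COLORED                 # colored pluggable directly in the router

  print(traditional, ipodwdm)                        # 5.0 vs 3.0 per node, per direction
  print((traditional - ipodwdm) / traditional)       # 0.4, i.e. 40% of the optics cost
  # The ~25% quoted above is against the whole "router+DWDM" bill, so it also
  # depends on what share of the total system cost the optics represent.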

By the way, about a "well-defined stack of technologies":
The NMS (polished by SDN these days) should be cross-layer: it should manage, at the 
same time, the ROADM/OADM in the DWDM and the colored laser in the router.
So far this is vendor lock-in (no multi-vendor support). Hence, the 25% cost savings would 
go to the vendor that has such an NMS, not to the carrier.
The technology still does not make sense, because there is no multi-vendor support between the 
NMS of one vendor and the router or DWDM of another.
Looking at NMS history, it will probably never be multi-vendor. For that 
reason, I am pessimistic about the future of colored interfaces in routers 
(and alien lambdas in DWDM), despite a potential 25% cost advantage from 
eliminating gray interfaces.

PS: "routed optical networks" is proprietary marketing. Nobody understands what 
you mean. I did google to understand.

Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Etienne-Victor Depasquale via NANOG
Sent: Monday, May 1, 2023 9:29 PM
To: NANOG mailto:nanog@nanog.org>>
Subject: Routed optical networks

Hello folks,

Simple question: does "routed optical networks" have a clear meaning in the 
metro area context, or not?

Put differently: does it call to mind a well-defined stack of technologies in 
the control and data planes of metro-area networks?

I'm asking because I'm having some thoughts about the clarity of this term, in 
the process of carrying out a qualitative survey of the results of the 
metro-area networks survey.

Cheers,

Etienne

--
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


RE: A straightforward transition plan (was: Re: V6 still not supported)

2023-01-11 Thread Vasilenko Eduard via NANOG
The comment looks outdated: who cares about ATM now?
But all wireless (including WiFi) emulates broadcast in a very unsatisfactory 
way.
Hence, the requirement is still very relevant.

-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Masataka Ohta
Sent: Thursday, January 12, 2023 7:32 AM
To: nanog@nanog.org
Subject: Re: A straightforward transition plan (was: Re: V6 still not supported)

Randy Bush wrote:

> three of the promises of ipng which ipv6 did not deliver
>o compatibility/transition,
>o security, and
>o routing & renumbering

You miss a promise of

o ND over ATM/NBMA

which caused IPv6 lack a notion of link broadcast.

Masataka Ohta




RE: Alternative Re: ipv4/25s and above Re: 202211232221.AYC

2022-11-28 Thread Vasilenko Eduard via NANOG
Big OTTs have installed caches all over the world.
Big OTTs support IPv6.
Hosts prefer IPv6.
Hence, traffic to the big OTTs becomes IPv6.
It is not visible to IXes, so IX statistics on IPv6 are not representative.
Ed/
-Original Message-
From: Abraham Y. Chen [mailto:ayc...@avinta.com] 
Sent: Sunday, November 27, 2022 12:35 AM
To: Chris Welti 
Cc: NANOG ; b...@theworld.com; Vasilenko Eduard 

Subject: Re: Alternative Re: ipv4/25s and above Re: 202211232221.AYC

Hi, Chris:

1) "... public fabric ... private dedicated circuits ... heavily biased
...":   You brought up an aspect that I have no knowledge about. 
However, you did not clarify how IPv6 and IPv4 are treated differently by these 
considerations which was the key parameter that we are trying to sort out. 
Thanks.

Regards,

Abe (2022-11-24 15:40)


On 2022-11-24 12:23, Chris Welti wrote:
> Hi Abe,
>
> the problem is that the AMS-IX data only covers the public fabric, but 
> the peering connections between the big CDNs/clouds and the large ISPs 
> all happen on private dedicated circuits as it is so much traffic that 
> it does not make sense to run it over a public IX fabric (in addition 
> to local caches which dillute the stats even more). Thus that data you 
> are referring to is heavily biased and should not be used for this 
> generalized purpose.
>
> Regards,
> Chris
>
> On 24.11.22 18:01, Abraham Y. Chen wrote:
>> Hi, Eduard:
>>
>> 0) Thanks for sharing your research efforts.
>>
>> 1) Similar as your own experience, we also recognized the granularity 
>> issue of the data in this particular type of statistics. Any data 
>> that is based on a limited number of countries, regions, businesses, 
>> industry segments, etc. will always be rebutted with a counter 
>> example of some sort. So, we put more trust into those general 
>> service cases with continuous reports for consistency, such as 
>> AMS-IX. If you know any better sources, I would like to look into them.
>>
>> Regards,
>>
>>
>> Abe (2022-11-24 11:59 EST)
>>
>>
>> On 2022-11-24 04:43, Vasilenko Eduard wrote:
>>> Hi Abraham,
>>> Let me clarify a little bit on statistics - I did an investigation 
>>> last year.
>>>
>>> Google and APNIC report very similar numbers. APNIC permits drilling 
>>> down deep details. Then it is possible to understand that they see 
>>> only 100M Chinese. China itself reports 0.5B IPv6 users. APNIC gives 
>>> Internet population by country - it permits to construct proportion.
>>> Hence, it is possible to conclude that we need to add 8% to Google 
>>> (or APNIC) to get 48% of IPv6 preferred users worldwide. We would 
>>> likely cross 50% this year.
>>>
>>> I spent a decent time finding traffic statics. I have found one DPI 
>>> vendor who has it. Unfortunately, they sell it for money.
>>> ARCEP has got it for France and published it in their "Barometer". 
>>> Almost 70% of application requests are possible to serve from IPv6.
>>> Hence, 70%*48%=33.6%. We could claim that 1/3 of the traffic is IPv6 
>>> worldwide because France is typical.
>>> My boss told me "No-No" for this logic. His example is China where 
>>> we had reliable data for only 20% of application requests served on
>>> IPv6 (China has a very low IPv6 adoption by OTTs).
>>> My response was: But India has a much better IPv6 adoption on the 
>>> web server side. China and a few other countries are not 
>>> representative. The majority are like France.
>>> Unfortunately, we do not have per-country IPv6 adoption on the web 
>>> server side.
>>> OK. We could estimate 60% of the application readiness as a minimum. 
>>> Then 60%*48%=28.8%.
>>> Hence, we could claim that at least 1/4 of the worldwide traffic is 
>>> IPv6.
>>>
>>> IX data shows much low IPv6 adoption because the biggest OTTs have 
>>> many caches installed directly on Carriers' sites.
>>>
>>> Sorry for not the exact science. But it is all that I have. It is 
>>> better than nothing.
>>>
>>> PS: 60% of requests served by web servers does not mean "60% of 
>>> servers". For servers themselves we have statistics - it is just 
>>> 20%+. But it is for the biggest web resources.
>>>
>>> Eduard
>>> -Original Message-
>>> From: NANOG
>>> [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
>>> Behalf Of Abraham Y. Chen
>>> Sent: Thursday, November 24, 2022 11:53 AM
>>> To: Joe Maimon
>>> Cc: NANOG;b...@theworld.com
>>> Subject: Re: Alternative Re: ipv4/25s and above Re: 202211232221.AYC
>>>
>>> Dear Joe:
>>>
>>> 0) Allow me to share my understanding of the two topics that you 
>>> brought up.
>>>
>>> 1) "...https://www.google.com/intl/en/ipv6/statistics.html, it looks 
>>> like we’ve gone from ~0% to ~40% in 12 years ": Your numbers may 
>>> be deceiving.
>>>
>>>     A. The IPv6 was introduced in 1995-12, launched on 2012-06-06 
>>> and ratified on 2017-07-14. So, the IPv6 efforts have been quite a 
>>> few years more than your impression. That is, the IPv6 has been 
>>> around over quarter of a century.
>>>
>>>     B. If you 

RE: Alternative Re: ipv4/25s and above Re: 202211232221.AYC

2022-11-24 Thread Vasilenko Eduard via NANOG
Hi Abraham,
Let me clarify a little bit on statistics - I did an investigation last year.

Google and APNIC report very similar numbers. APNIC permits drilling down into the 
details, and from that it is possible to see that they count only 100M Chinese users. 
China itself reports 0.5B IPv6 users. APNIC gives the Internet population by 
country, which makes it possible to construct the proportion.
Hence, it is possible to conclude that we need to add 8% to Google (or APNIC) 
to get 48% of IPv6 preferred users worldwide. We would likely cross 50% this 
year.

I spent a decent amount of time looking for traffic statistics. I found one DPI vendor 
who has them; unfortunately, they sell them for money.
ARCEP has the data for France and published it in their "Barometer": almost 70% 
of application requests can be served over IPv6.
Hence, 70%*48%=33.6%. We could claim that 1/3 of the traffic is IPv6 worldwide 
because France is typical.
My boss told me "No-No" for this logic. His example is China where we had 
reliable data for only 20% of application requests served on IPv6 (China has a 
very low IPv6 adoption by OTTs).
My response was: But India has a much better IPv6 adoption on the web server 
side. China and a few other countries are not representative. The majority are 
like France.
Unfortunately, we do not have per-country IPv6 adoption on the web server side.
OK. We could estimate 60% of the application readiness as a minimum. Then 
60%*48%=28.8%.
Hence, we could claim that at least 1/4 of the worldwide traffic is IPv6.
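
The arithmetic of that estimate, as a short Python sketch (all inputs are the figures quoted in this email, not independently verified numbers):

  # Estimate of worldwide IPv6 traffic share from the figures above.
  measured_share   = 0.40      # roughly what Google/APNIC showed at the time
  china_adjustment = 0.08      # users that Google/APNIC do not see
  ipv6_users       = measured_share + china_adjustment            # ~0.48

  for app_readiness in (0.70, 0.60):                              # France / conservative floor
      print(app_readiness, "->", round(app_readiness * ipv6_users, 3))
  # 0.7 -> 0.336 (about 1/3), 0.6 -> 0.288 (at least 1/4)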

IX data shows much lower IPv6 adoption because the biggest OTTs have many caches 
installed directly on carriers' sites.

Sorry that this is not exact science, but it is all I have. It is better than 
nothing.

PS: 60% of requests being servable over IPv6 does not mean "60% of servers". For 
the servers themselves we do have statistics - it is just 20%+, and that is for the 
biggest web resources.

Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Abraham Y. Chen
Sent: Thursday, November 24, 2022 11:53 AM
To: Joe Maimon 
Cc: NANOG ; b...@theworld.com
Subject: Re: Alternative Re: ipv4/25s and above Re: 202211232221.AYC

Dear Joe:

0) Allow me to share my understanding of the two topics that you brought up.

1) "... https://www.google.com/intl/en/ipv6/statistics.html, it looks like 
we’ve gone from ~0% to ~40% in 12 years ":  Your numbers may be deceiving.

   A. The IPv6 was introduced in 1995-12, launched on 2012-06-06 and ratified 
on 2017-07-14. So, the IPv6 efforts have been quite a few years more than your 
impression. That is, the IPv6 has been around over quarter of a century.

   B. If you read closely, the statement  "The graph shows the percentage of 
users that access Google over IPv6." above the graph actually means "equipment 
readiness". That is, how many Google users have IPv6 capable devices. This is 
similar as the APNIC statistics whose title makes this clearer. However, having 
the capability does not mean the owners are actually using it. Also, this is 
not general data, but within the Google environment. Since Google is one of the 
stronger promoters of the IPv6, this graph would be at best the cap of such 
data.

   C. The more meaningful data would be the global IPv6 traffic statistics. 
Interestingly, they do not exist upon our extensive search. 
(If you know of any, I would appreciate to receive a lead to such.) The closest 
that we could find is % of IPv6 in AMS-IX traffic statistics (see URL below). 
It is currently at about 5-6% and has been tapering off to a growth of less 
than 0.1% per month recently, after a ramp-up period in the past. (Similar 
saturation behavior can also be found in the above Google graph.)

https://stats.ams-ix.net/sflow/ether_type.html

   D.  One interesting parameter behind the last one is that as an 
Inter-eXchange operator, AMS-IX should see very similar percentage traffic mix 
between IPv6 and IPv4. The low numbers from AMS-IX does not support this 
viewpoint for matching with your observation. In addition, traffic through IX 
is the overflow among backbone routers. A couple years ago, there was a report 
that peering arrangements among backbone routers for IPv6 were much less 
matured then IPv4, which meant that AMS-IX should be getting more IPv6 traffic 
than the mix in the Internet core. Interpreted in reverse, % of IPv6 in overall 
Internet traffic should be less than what AMS-IX handles.

   E. This is a quite convoluted topic that we only scratched the surface. They 
should not occupy the attention of colleagues on this list. However, I am 
willing to provide more information to you off-line, if you care for further 
discussion.

2)  "... https://lore.kernel.org/lkml/20080108011057.ga21...@cisco.com/
...":  My basic training was in communication equipment hardware design. 
I knew little about software beyond what I needed for my primary assignment. 
Your example, however, reminds me of a programing course 

RE: Jon Postel Re: 202210301538.AYC

2022-11-04 Thread Vasilenko Eduard via NANOG
I do not understand why you believe that only AD matters,
if the real management is done mostly by Chairs.
Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Fred Baker
Sent: Friday, November 4, 2022 7:34 PM
To: Donald Eastlake 
Cc: North American Network Operators' Group 
Subject: Re: Jon Postel Re: 202210301538.AYC



Sent using a machine that autocorrects in interesting ways...

> On Nov 2, 2022, at 5:50 PM, Donald Eastlake  wrote:
> 
> In the early years of the
> NomCom, I believe there were a small number of cases of a 3 year term 
> but only for an AD who had already successfully served for 2 years.

There were two such cases - Jeff Schiller and myself. The situation was that in 
1997 (IIRC) we had  four areas with a single AD, and the IESG told the nomcom 
that the imbalance was strange. At its option, the nomcom could extend the term 
of a sitting AD that wasn’t up for renewal/replacement by one year to even 
things out. They did. In 2001, I resigned, and I think Jeff resigned in 1999.


RE: Jon Postel Re: 202210301538.AYC

2022-10-31 Thread Vasilenko Eduard via NANOG
It is believed by many that two terms should be the maximum for any one chair 
position (if it is a democracy).
That is evidently not the case for the IETF - people stay in power for decades. It is 
simply a fact that cannot be disputed.
Yes, NomCom is the mechanism for ADs and above. I do not want to get into how 
exactly it is performed.
By the way, WG chairs have been left outside any election mechanism.

If any politician managed to hold power for more than two terms, he 
would immediately be called "totalitarian",
even if he pointed to a mechanism that allowed it.
Eduard
-Original Message-
From: Donald Eastlake [mailto:d3e...@gmail.com] 
Sent: Monday, October 31, 2022 4:28 PM
To: Vasilenko Eduard ; North American Network 
Operators' Group 
Subject: Re: Jon Postel Re: 202210301538.AYC

On Mon, Oct 31, 2022 at 2:37 AM Vasilenko Eduard via NANOG  
wrote:
>
> 1.   What is going on on the Internet is not democracy even formally, 
> because there is no formal voting.
> 3GPP, ETSI, 802.11 have voting. IETF decisions are made by bosses who did 
> manage to gain power (primarily by establishing a proper network of 
> relationships).
> It could be even called “totalitarian” because IETF bosses could stay in one 
> position for decades.

I do not see how it can be called totalitarian given the IETF Nomcom 
appointment and recall mechanisms. Admittedly it is not full on Sortition 
(https://en.wikipedia.org/wiki/Sortition) but it is just one level of 
indirection from Sortition. (See
https://www.forbes.com/sites/forbestechcouncil/2020/08/20/indirection-the-unsung-hero-of-software-engineering/?sh=2cc673587f47)

Thanks,
Donald

>  ...
>
> Eduard


RE: Jon Postel Re: 202210301538.AYC

2022-10-31 Thread Vasilenko Eduard via NANOG
1. What is going on in the Internet is not democracy even formally, 
because there is no formal voting.
3GPP, ETSI and 802.11 have voting. IETF decisions are made by bosses who 
managed to gain power (primarily by establishing the right network of 
relationships).
It could even be called "totalitarian", because IETF bosses can stay in one 
position for decades.



2. Democracy does not work anywhere, because unqualified people can be 
driven to make wrong decisions.
A voting qualification check is mandatory; not everybody should have the right to 
vote on a particular question.
I do not want to say what the qualification check was in the early US or in 
ancient Greece (where democracy worked), because many would shout at me. 
It is not relevant to a technical group anyway.
ETSI filters voting rights by money - the company must pay for membership.
802.11 filters voting rights by the member's physical presence at the last 4 
meetings.
It is not ideal, but it is better than no filtering at all.

Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Abraham Y. Chen
Sent: Monday, October 31, 2022 1:42 AM
To: Noah 
Cc: North American Network Operators' Group 
Subject: Re: Jon Postel Re: 202210301538.AYC

Dear Noah:

0)  "Iterations often times leads back to the beginning.": Thanks for 
distilling this thread to a concise principle. Perhaps your name was given with 
the foresight of this discussion? 

1)  As a newcomer to the arena, I have always been perplexed by the apparent 
collective NIH (Not Invented Here) syndrome of the Internet community. While 
promoting openness, everything seems to go with "my way or noway". Of course, 
each Internet practice or convention was determined by some sort of consensus 
by majority opinion. However, once it gets going, it appears to be cast in 
concrete. There is a huge inertia against considering alternatives or 
improvements. Some of them even appear to be volunteered "policing" without 
full understanding of the background. Just like how practically all democratic 
governments are facing these days, a well-intended crowd can be led by an 
influencer to derail a social normality. It does not seem to me that strictly 
adhering to "one person one vote" rule can guide us toward a productive future.

2)  To follow what you are saying, I wonder how could we think "out of the box" 
or go "back to the future", before it is too late for our world wide 
communications infrastructure to serve as a reliable daily tool without being a 
distraction constantly? That is, four decades should be long enough for our 
Internet experiments to be reviewed, so that we can try navigating out of the 
current chaos, or start with an alternative.

Regards,


Abe (2022-10-30 18:41 EDT)




On 2022-10-30 12:47, Noah wrote:

On Mon, 17 Oct 2022, 00:18 Randy Bush, mailto:ra...@psg.com>> 
wrote:
my favorite is

It's perfectly appropriate to be upset.

Ack

I thought of it in a slightly
different way--like a space that we were exploring and, in the early days,
we figured out this consistent path through the space: IP, TCP, and so on.

the impact of IP, TCP in improving human life across the globe in the last 
decades can not be overstated.

Human ingenuity, through names like Google, has enabled the age of information, 
and access to information through addresses and digital trade routes has 
continued to ensure peace for humanity on the positive side of the 
communications spectrum.

What's been happening over the last few years is that the IETF is filling
the rest of the space with every alternative approach, not necessarily any
better.  Every possible alternative is now being written down.  And it's not
useful.  -- Jon Postel

I suppose original human ideas and thoughts tends to stand the taste of time.

Iterations often times leads back to the beginning.

Noah







RE: Any experiences using SIIT-DC in an IXP setting ?

2022-10-10 Thread Vasilenko Eduard via NANOG
As I understand the initial question, the client has no IPv4.
The initial "4" in 464XLAT stands for an IPv4 client.

DNS64 makes the client believe that a server (on the Internet) is reachable 
over IPv6.
Then NAT64 translates that IPv6 traffic to IPv4.
But that is not stateless by any means (which is what was requested below).
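
For illustration, a minimal sketch of the DNS64 synthesis step (Python; it uses the RFC 6052 well-known prefix 64:ff9b::/96 and deliberately does not model the stateful NAT64 data plane):

  import ipaddress

  # DNS64 control-plane sketch: embed the server's IPv4 address into a /96
  # translation prefix (here the RFC 6052 well-known prefix 64:ff9b::/96).
  # The per-flow state lives in the NAT64 box, which is not shown here.
  NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

  def synthesize_aaaa(ipv4_literal):
      v4 = int(ipaddress.IPv4Address(ipv4_literal))
      return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4)

  print(synthesize_aaaa("192.0.2.10"))   # 64:ff9b::c000:20a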

Ed/
From: Ca By [mailto:cb.li...@gmail.com]
Sent: Monday, October 10, 2022 7:27 PM
To: Vasilenko Eduard 
Cc: Carlos Martinez-Cagnazzo ; NANOG 
Subject: Re: Any experiences using SIIT-DC in an IXP setting ?



On Mon, Oct 10, 2022 at 9:17 AM Vasilenko Eduard via NANOG 
mailto:nanog@nanog.org>> wrote:
The technology for an IPv6-only client to connect to an IPv4 web server on the Internet 
is just not specified in the IETF.
Ed/
Ed, you seem to be not so familiar with the this ietf body of work

RFC6877

“ 464XLAT is a simple and scalable

   technique to quickly deploy limited IPv4 access service to IPv6-only

   edge networks without encapsulation.

”

To the OP, you can google jpix has done this.



From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On Behalf Of Carlos Martinez-Cagnazzo
Sent: Monday, October 10, 2022 6:57 PM
To: NANOG mailto:nanog@nanog.org>>
Subject: Any experiences using SIIT-DC in an IXP setting ?

Hi all,

I'm looking at a use case for stateless 6-4 mappings in the context of an IXP.

The problem we are looking to solve is allowing IXP members who have no IPv4 of 
their own and in most cases they have a /26 or /27 issued by their transit 
provider and rely on CGN to provide service to their customers. They do have 
their own AS numbers and IPv6 prefixes though.

Any comments are appreciated. PM is fine too.

Thanks!

/Carlos



RE: Any experiences using SIIT-DC in an IXP setting ?

2022-10-10 Thread Vasilenko Eduard via NANOG
The technology for an IPv6-only client to connect to an IPv4 web server on the Internet 
is just not specified in the IETF.
Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Carlos Martinez-Cagnazzo
Sent: Monday, October 10, 2022 6:57 PM
To: NANOG 
Subject: Any experiences using SIIT-DC in an IXP setting ?

Hi all,

I'm looking at a use case for stateless 6-4 mappings in the context of an IXP.

The problem we are looking to solve is allowing IXP members who have no IPv4 of 
their own and in most cases they have a /26 or /27 issued by their transit 
provider and rely on CGN to provide service to their customers. They do have 
their own AS numbers and IPv6 prefixes though.
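
For illustration, the kind of stateless, deterministic 6-4 mapping being asked about (in the spirit of SIIT-DC, RFC 7755, with explicit address mappings per RFC 7757) looks roughly like this Python sketch, which uses documentation prefixes only:

  import ipaddress

  # Stateless 1:1 mapping sketch; the prefixes are documentation ranges chosen
  # purely for illustration, not a real configuration.
  V4_BLOCK  = ipaddress.IPv4Network("192.0.2.64/26")      # the /26 from the transit provider
  V6_PREFIX = ipaddress.IPv6Network("2001:db8:46::/120")  # member-owned IPv6 space

  def v4_to_v6(addr):
      """Deterministic mapping: the host part of the /26 indexes into the /120."""
      offset = int(ipaddress.IPv4Address(addr)) - int(V4_BLOCK.network_address)
      return ipaddress.IPv6Address(int(V6_PREFIX.network_address) + offset)

  print(v4_to_v6("192.0.2.70"))   # 2001:db8:46::6 -- no per-flow state anywhere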

Any comments are appreciated. PM is fine too.

Thanks!

/Carlos



RE: Mitigating the effects of SLAAC renumbering events (draft-ietf-6man-slaac-renum)

2022-08-31 Thread Vasilenko Eduard via NANOG
Such router behavior is completely legal per the ND RFC.
It does not matter that real router implementations do not do this.
We should assume that they might, because the standard permits it.

And an RA in the chain may be lost.
It is better to attach information about completeness to the information itself.
Eduard
-Original Message-
From: Fernando Gont [mailto:fg...@si6networks.com] 
Sent: Wednesday, August 31, 2022 4:12 PM
To: Vasilenko Eduard ; nanog@nanog.org
Subject: Re: Mitigating the effects of SLAAC renumbering events 
(draft-ietf-6man-slaac-renum)

Hi,

On 31/8/22 09:43, Vasilenko Eduard wrote:
> Hi all,
> 
> The router could split information between RAs (and send it at 
> different intervals). It may be difficult to guess what is stale and 
> what is just "not in this RA".

You ask the router, and the router responds.

If you want to consider the case where the router intentionally splits the 
options into multiple packets (which does not exist in practice), AND the link 
is super lossy, you just increase the number of retransmissions.

There's no guessing.

Thanks,
--
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint: F242 FF0E A804 AF81 EB10 2F07 7CA1 321D 663B B494


RE: Mitigating the effects of SLAAC renumbering events (draft-ietf-6man-slaac-renum)

2022-08-31 Thread Vasilenko Eduard via NANOG
Hi all,

The router could split information across several RAs (and send them at different 
intervals).
It may be difficult to guess what is stale and what is simply "not in this RA".

Fernando is proposing (not yet documented in draft-ietf-6man-slaac-renum-04) 
re-asking the router with an RS and using timers (timer values are not proposed 
yet) to guess that the router has probably supplied the full set of information, 
so that we can start concluding what is stale.

There is an alternative proposal to signal with an ND flag that "this RA carries the 
complete set of information":
https://datatracker.ietf.org/doc/html/draft-vv-6man-nd-prefix-robustness-02
... then you can immediately draw a reliable conclusion about what is stale.

IMHO: clear signaling that "the information in this RA is complete" is better than 
guessing with timers.
It is the more robust solution.
We need to sync state between the host and a router that has just rebooted.

If you have an opinion on this matter,
Please send a message to i...@ietf.org

Thanks.

Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Fernando Gont
Sent: Wednesday, August 31, 2022 1:35 PM
To: nanog@nanog.org
Subject: Mitigating the effects of SLAAC renumbering events 
(draft-ietf-6man-slaac-renum)

Folks,

We have been discussing the potential problems associated with SLAAC 
renumbering events for a while now -- one of the most common cases being ISPs 
rotating home prefixes, and your devices ending up with stale/invalid addresses.

We have done quite a bit of work already:

   * Problem statement: https://datatracker.ietf.org/doc/html/rfc8978
   * CPE recommendations: https://datatracker.ietf.org/doc/html/rfc9096

But there's still some work to do to address this issue: The last remaining it 
is to improve SLAAC such that hosts can more gracefully deal with this 
renumbering events.

In that light, IETF's 6man has been working on this document: 
https://www.ietf.org/archive/id/draft-ietf-6man-slaac-renum-04.txt

And we have proposed a simple algorithm for SLAAC (an extension, if you
wish) that can easily help, as follows:

 If you (host) receive an RA that contains options, but not all
 of the previously-received options/information, simply send a
 unicast RS to the local-router, to verify/refresh that such missing
 information is still valid. If the information is stale, get rid of
 it.
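
A rough host-side sketch of that logic, for readers who prefer code (Python; the data structures and the callback are invented for illustration, and the draft text remains authoritative):

  # Rough host-side sketch of the algorithm described above.
  known_prefixes = set()          # prefixes previously learnt from this router

  def on_router_advertisement(router, advertised, send_unicast_rs):
      missing = known_prefixes - advertised
      if not missing:
          known_prefixes.update(advertised)
          return
      # Some previously-learnt information is absent from this RA: ask the router
      # directly (unicast RS) whether it is still valid before acting on it.
      refreshed = send_unicast_rs(router)   # set of options the router still advertises
      for prefix in missing - refreshed:
          known_prefixes.discard(prefix)    # stale: stop using it
      known_prefixes.update(advertised | refreshed)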

I presented this algorithm at the last IETF meeting 
(https://youtu.be/eKEizC8xhhM?t=1308).

(You may find the slides here: 
https://datatracker.ietf.org/meeting/114/materials/slides-114-6man-improving-the-robustness-of-stateless-address-autoconfiguration-slaac-to-flash-renumbering-events-00)

Finally, I've sent draft text for the specification of the algorithm
here: 
https://mailarchive.ietf.org/arch/msg/ipv6/KD_Vpqg0NmkVXOQntVTOMlWHWwA/

We would be super thankful if you could take a look at the draft text (i.e.,
https://mailarchive.ietf.org/arch/msg/ipv6/KD_Vpqg0NmkVXOQntVTOMlWHWwA/)
and provide feedback/comments.

If you can post/comment on the 6man wg mailing list 
(https://www.ietf.org/mailman/listinfo/ipv6), that´d be fabulous.
But we'll appreciate your feedback off-line, on this list, etc. (that'd still 
be great ;-) )

Thanks in advance!

Regards,
--
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint: F242 FF0E A804 AF81 EB10 2F07 7CA1 321D 663B B494


RE: IoT - The end of the internet

2022-08-11 Thread Vasilenko Eduard via NANOG
Exponential growth under a limited resource
always ends in collapse,
and some resource is always limited in nature.
Agent Smith's joke in "The Matrix" (about modeling humans as a virus) is only 
partially a joke.
Whenever somebody talks about an "exponent", be alarmed - it will end in a very 
bad way.

The biggest one in the history of mankind was around 1200 B.C.: the tin needed for 
bronze ran out, and bronze was the foundation of that civilization.
This is the famous "Bronze Age collapse", which cut the population 100x, and 
civilization lost the capability of writing for a few hundred years.
It recovered by mastering iron instead of bronze; iron is many thousands of times 
more abundant on Earth (found in every swamp).

Tens of smaller collapses are traceable in human history.
Well, the collapse of the Roman empire was probably not so small, but it was smaller 
than the "Bronze Age collapse".
The oldest is probably that of the first humans in Australia: they ate all the big 
animals and destroyed the forests, then depopulated and lost basic tools 
(like arrows).
A very similar story happened on Easter Island, except that there everyone on the 
island died.

We are at the inflection point of the current exponent.
Energy production from natural resources has already been declining for a couple of 
years (only a small decline yet) - carbon-hydrogen-based natural resources are limited.
If a replacement for the current energy source is not found,
then the anticipated civilization collapse would become the biggest in history: 
a 1000x depopulation.
The Nile river can feed 1M people using only muscle power, not 120M. And so 
on everywhere in the world.
The transition period of a collapse would overshoot the possible optimum under the new 
conditions (cutting even more people).

"Dark ages" are possible and have happened many times in history. Don't be too 
optimistic.
People could start eating each other instead of having "lunch on the Moon". It is 
possible.
Fortunately, it is not inevitable.

PS: Canned energy from China (solar panels, wind turbines) is produced from 
coal. It is not a solution for when the coal runs out.
Moreover, the energy return on such types of "green energy" is worse than direct 
electricity generation from coal.
It is popular only because the dust stays in China while others get the "green".
A closed nuclear fuel cycle is the only available solution (it gives the next 
exponent, which could last 5k years if thorium is involved).
Ordinary nuclear fission could prolong humanity's agony for only 60 years 
(Uranium-235 is limited).
Nuclear fusion still looks like fiction: the best story for wasting money, on which 
three generations of scientists have already made their careers.

Ed/
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Etienne-Victor Depasquale via NANOG
Sent: Wednesday, August 10, 2022 9:19 PM
To: Chris Wright 
Cc: NANOG 
Subject: Re: IoT - The end of the internet

 because our lizard brains have a hard time comprehending exponential growth
Don't forget how we pontificate on how well we understand infinity.

Cheers,

Etienne

On Wed, Aug 10, 2022 at 6:09 PM Chris Wright 
mailto:chris.wri...@commnetbroadband.com>> 
wrote:
That’s just humans in general, and it certainly isn’t limited to our outlook on 
the future of the internet. Big advancements will always take us by surprise 
because our lizard brains have a hard time comprehending exponential growth. 
Someone please stop me here before I get on my Battery-EV soapbox. :D

Chris

From: NANOG 
mailto:commnetbroadband@nanog.org>>
 On Behalf Of Tom Beecher
Sent: Wednesday, August 10, 2022 9:25 AM
To: Christopher Wolff mailto:ch...@vergeinternet.com>>
Cc: NANOG mailto:nanog@nanog.org>>
Subject: Re: IoT - The end of the internet

It always amazes me how an industry that has , since its inception, been 
constantly solving new problems to make things work, always finds a way to 
assume the next problem will be unsolvable.

On Tue, Aug 9, 2022 at 10:23 PM Christopher Wolff 
mailto:ch...@vergeinternet.com>> wrote:
Hi folks,

Has anyone proposed that the adoption of billions of IoT devices will 
ultimately ‘break’ the Internet?

It’s not a rhetorical question I promise, just looking for a journal or other 
scholarly article that implies that the Internet is doomed.


--
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


RE: 400G forwarding - how does it work?

2022-07-26 Thread Vasilenko Eduard via NANOG
Pipeline stages are like separate computers (each with its own ALU) sharing the 
same memory.
In the ASIC case, the computers are of different types (with different capabilities).

From: Etienne-Victor Depasquale [mailto:ed...@ieee.org]
Sent: Tuesday, July 26, 2022 2:05 PM
To: Saku Ytti 
Cc: Vasilenko Eduard ; NANOG 
Subject: Re: 400G forwarding - how does it work?

How do you define a pipeline?

For what it's worth, and
with just a cursory look through this email, and
without wishing to offend anyone's knowledge:

a pipeline in processing is the division of the instruction cycle into a number 
of stages.
General purpose RISC processors are often organized into five such stages.
Under optimal conditions,
which can be fairly, albeit loosely,
interpreted as "one instruction does not affect its peers which are already in 
one of the stages",
then a pipeline can increase the number of instructions retired per second,
often quoted as MIPS (millions of instructions per second)
by a factor equal to the number of stages in the pipeline.
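
The ideal-case arithmetic behind that factor, as a tiny Python sketch (the 1 GHz clock is just an example figure; hazards and stalls are ignored):

  # Ideal-case pipeline arithmetic (no hazards or stalls).
  CLOCK_MHZ = 1000     # pipelined clock, set by the slowest stage (example figure)
  STAGES    = 5        # classic five-stage RISC pipeline

  unpipelined_mips = CLOCK_MHZ / STAGES   # one instruction per STAGES stage-delays
  pipelined_mips   = CLOCK_MHZ            # one instruction retired per cycle, once full

  print(unpipelined_mips, pipelined_mips, pipelined_mips / unpipelined_mips)  # 200.0 1000 5.0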


Cheers,

Etienne


On Tue, Jul 26, 2022 at 10:56 AM Saku Ytti mailto:s...@ytti.fi>> 
wrote:

On Tue, 26 Jul 2022 at 10:52, Vasilenko Eduard 
mailto:vasilenko.edu...@huawei.com>> wrote:

Juniper is pipeline-based too (like any ASIC). They just invented one special 
stage in 1996 for lookup (a sequential nibble-by-nibble search of a big tree in 
external memory) - it was public information until about the year 2000. It is a 
different principle from a TCAM search - performance is traded for 
flexibility/simplicity/cost.

How do you define a pipeline? My understanding is that fabric and wan 
connections are in chip called MQ, 'head' of packet being some 320B or so (bit 
less on more modern Trio, didn't measure specifically) is then sent to LU 
complex for lookup.
LU then sprays packets to one of many PPE, but once packet hits PPE, it is 
processed until done, it doesn't jump to another PPE.
Reordering will occur, which is later restored for flows, but outside flows 
reorder may remain.

I don't know what the cores are, but I'm comfortable to bet money they are not 
ARM. I know Cisco used to ezchip in ASR9k but is now jumping to their own NPU 
called lightspeed, and lightspeed like CRS-1 and ASR1k use tensilica cores, 
which are decidedly not ARM.

Nokia, as mentioned, kind of has a pipeline, because a single packet hits every 
core in line, and each core does separate thing.



Network Processors emulate stages on general-purpose ARM cores. It is a 
pipeline too (different cores for different functions, many cores for every 
function), just it is a virtual pipeline.



Ed/

-Original Message-
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org]
 On Behalf Of Saku Ytti
Sent: Monday, July 25, 2022 10:03 PM
To: James Bensley 
mailto:jwbensley%2bna...@gmail.com>>
Cc: NANOG mailto:nanog@nanog.org>>
Subject: Re: 400G forwarding - how does it work?



On Mon, 25 Jul 2022 at 21:51, James Bensley 
mailto:jwbensley+na...@gmail.com>> wrote:



> I have no frame of reference here, but in comparison to Gen 6 Trio of

> NP5, that seems very high to me (to the point where I assume I am

> wrong).



No you are right, FP has much much more PPEs than Trio.



For fair calculation, you compare how many lines FP has to PPEs in Trio. 
Because in Trio single PPE handles entire packet, and all PPEs run identical 
ucode, performing same work.



In FP each PPE in line has its own function, like first PPE in line could be 
parsing the packet and extracting keys from it, second could be doing 
ingressACL, 3rd ingressQoS, 4th ingress lookup and so forth.



Why choose this NP design instead of Trio design, I don't know. I don't 
understand the upsides.



Downside is easy to understand, picture yourself as ucode developer, and you 
get task to 'add this magic feature in the ucode'.

Implementing it in Trio seems trivial, add the code in ucode, rock on.

On FP, you might have to go 'aww shit, I need to do this before PPE5 but after 
PPE3 in the pipeline, but the instruction cost it adds isn't in the budget that 
I have in the PPE4, crap, now I need to shuffle around and figure out which PPE 
in line runs what function to keep the PPS we promise to customer.



Let's look it from another vantage point, let's cook-up IPv6 header with 
crapton of EH, in Trio, PPE keeps churning it out, taking long time, but 
eventually it gets there or raises exception and gives up.

Every other PPE in the box is fully available to perform work.

Same thing in FP? You have HOLB, the PPEs in the line after thisPPE are not 
doing anything and can't do anything, until the PPE before in line is done.



Today Cisco and Juniper do 'proper' CoPP, that is, they do ingressACL before 
and after lookup, before is normally needed for ingressACL but after lookup 
ingressACL is needed for CoPP (we only know after lookup if it is control-plane 
packet). Nokia doesn't do this at 

RE: 400G forwarding - how does it work?

2022-07-26 Thread Vasilenko Eduard via NANOG
Nope, ASIC vendors are not ARM-based for the PFE. Every "stage" is very 
specialized hardware with limited programmability (less limited for P4 and some 
latest-generation ASICs).
ARM cores are for Network Processors (NPs). ARM cores (with the proper microcode) 
can emulate any "stage" of an ASIC. That is the typical explanation for why NPs 
are more flexible than ASICs.

The stages are connected to a common internal memory where enriched packet 
headers are stored. The pipeline is just the order in which the stages process these 
internal enriched headers.
The size of this internal header is the critical restriction of the ASIC, never 
disclosed or discussed (but people know it anyway for the most popular ASICs - 
it is possible to google "key buffer").
Hint: the smallest one in the industry is 128 bytes, the biggest 384 bytes. It is 
not possible to process longer headers in one PFE pass.
A non-compressed SRv6 header could be 208 bytes (+TCP/UDP +VLAN +L2 
+ASIC-internal overhead). Hence the need for compressed SIDs.
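
One way to arrive at the 208-byte figure, as a quick Python sketch (the 10-SID count is an assumption chosen to match that number; the header sizes are the standard IPv6/SRH field sizes):

  # 208 bytes = IPv6 fixed header + SRH fixed part + 10 uncompressed SIDs.
  IPV6_FIXED, SRH_FIXED, SID = 40, 8, 16   # bytes
  N_SIDS = 10

  srv6 = IPV6_FIXED + SRH_FIXED + N_SIDS * SID
  print(srv6)                              # 208

  ETH, VLAN, TCP = 14, 4, 20               # illustrative extras the parser must also hold
  print(srv6 + ETH + VLAN + TCP)           # 246 -> past a 128-byte key buffer,
                                           #        eating well into a 384-byte one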

It was a big marketing announcement from one famous ASIC vendor just a few 
years ago that some ASIC stages are capable of dynamically sharing common big 
external memory (used for MAC/IP/Filters).
It may be internal memory too for small scalability, but typically it is 
external memory. This memory is always discussed in detail – it is needed for 
the operation team.

It is only about headers. The packet itself (payload) is stored in the separate 
memory (buffer) that is not visible for pipeline stages.

There were times when it was difficult to squeeze everything into one ASIC. 
Then one chip would prepare the internal (enriched) header and perhaps do some processing 
(some simple stages), then send this header to the next chip for the other "stages" 
(especially the complicated lookup with external memory attached). That design is a 
historical artifact now.
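
To make that picture concrete, a toy Python sketch (stage names and fields are invented for illustration; this is not any vendor's actual microcode):

  # Toy model of "the pipeline is just the order of stages over an enriched header".
  # The payload stays in a separate buffer the stages never see.
  def parse(hdr):   hdr["dst"]    = hdr["raw"][30:34]; return hdr   # extract lookup key
  def acl(hdr):     hdr["permit"] = True;              return hdr   # ingress filter
  def lookup(hdr):  hdr["port"]   = 7;                 return hdr   # FIB result
  def qos(hdr):     hdr["queue"]  = 0;                 return hdr   # queue selection

  PIPELINE = (parse, acl, lookup, qos)     # fixed order of fixed-function stages

  def process(first_bytes_of_packet):
      hdr = {"raw": first_bytes_of_packet}          # the "key buffer" contents
      for stage in PIPELINE:
          hdr = stage(hdr)
      return hdr

  result = process(bytes(64))
  print({k: v for k, v in result.items() if k != "raw"})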

Ed/
From: Saku Ytti [mailto:s...@ytti.fi]
Sent: Tuesday, July 26, 2022 11:53 AM
To: Vasilenko Eduard 
Cc: James Bensley ; NANOG 
Subject: Re: 400G forwarding - how does it work?


On Tue, 26 Jul 2022 at 10:52, Vasilenko Eduard 
mailto:vasilenko.edu...@huawei.com>> wrote:

Juniper is pipeline-based too (like any ASIC). They just invented one special 
stage in 1996 for lookup (a sequential nibble-by-nibble search of a big tree in 
external memory) - it was public information until about the year 2000. It is a 
different principle from a TCAM search - performance is traded for 
flexibility/simplicity/cost.

How do you define a pipeline? My understanding is that fabric and wan 
connections are in chip called MQ, 'head' of packet being some 320B or so (bit 
less on more modern Trio, didn't measure specifically) is then sent to LU 
complex for lookup.
LU then sprays packets to one of many PPE, but once packet hits PPE, it is 
processed until done, it doesn't jump to another PPE.
Reordering will occur, which is later restored for flows, but outside flows 
reorder may remain.

I don't know what the cores are, but I'm comfortable betting money they are not 
ARM. I know Cisco used ezchip in ASR9k but is now jumping to their own NPU 
called Lightspeed, and Lightspeed, like CRS-1 and ASR1k, uses Tensilica cores, 
which are decidedly not ARM.

Nokia, as mentioned, kind of has a pipeline, because a single packet hits every 
core in line, and each core does separate thing.



Network Processors emulate stages on general-purpose ARM cores. It is a 
pipeline too (different cores for different functions, many cores for every 
function), just it is a virtual pipeline.



Ed/

-Original Message-
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org]
 On Behalf Of Saku Ytti
Sent: Monday, July 25, 2022 10:03 PM
To: James Bensley 
mailto:jwbensley%2bna...@gmail.com>>
Cc: NANOG mailto:nanog@nanog.org>>
Subject: Re: 400G forwarding - how does it work?



On Mon, 25 Jul 2022 at 21:51, James Bensley 
mailto:jwbensley+na...@gmail.com>> wrote:



> I have no frame of reference here, but in comparison to Gen 6 Trio of

> NP5, that seems very high to me (to the point where I assume I am

> wrong).



No you are right, FP has much much more PPEs than Trio.



For fair calculation, you compare how many lines FP has to PPEs in Trio. 
Because in Trio single PPE handles entire packet, and all PPEs run identical 
ucode, performing same work.



In FP each PPE in line has its own function, like first PPE in line could be 
parsing the packet and extracting keys from it, second could be doing 
ingressACL, 3rd ingressQoS, 4th ingress lookup and so forth.



Why choose this NP design instead of Trio design, I don't know. I don't 
understand the upsides.



Downside is easy to understand, picture yourself as ucode developer, and you 
get task to 'add this magic feature in the ucode'.

Implementing it in Trio seems trivial, add the code in ucode, rock on.

On FP, you might have to go 'aww shit, I need to do this before PPE5 but after 
PPE3 in the pipeline, but the 

RE: 400G forwarding - how does it work?

2022-07-26 Thread Vasilenko Eduard via NANOG
All high-performance networking devices on the market have pipeline 
architecture.

The pipeline consists of "stages".



ASICs have stages fixed to particular functions:

[inline figure: ASIC pipeline of fixed-function stages]

Well, some stages are driven by code nowadays (a little flexibility).



Juniper is pipeline-based too (like any ASIC). They just invented one special 
stage in 1996 for lookup (a sequential search by nibble in a big external memory 
tree) – it was public information up to the year 2000. It is a different principle 
from TCAM search – performance is traded for flexibility/simplicity/cost.



Network Processors emulate stages on general-purpose ARM cores. It is a 
pipeline too (different cores for different functions, many cores for every 
function), just it is a virtual pipeline.



Ed/

-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Saku Ytti
Sent: Monday, July 25, 2022 10:03 PM
To: James Bensley 
Cc: NANOG 
Subject: Re: 400G forwarding - how does it work?



On Mon, 25 Jul 2022 at 21:51, James Bensley 
mailto:jwbensley+na...@gmail.com>> wrote:



> I have no frame of reference here, but in comparison to Gen 6 Trio of

> NP5, that seems very high to me (to the point where I assume I am

> wrong).



No you are right, FP has much much more PPEs than Trio.



For fair calculation, you compare how many lines FP has to PPEs in Trio. 
Because in Trio single PPE handles entire packet, and all PPEs run identical 
ucode, performing same work.



In FP each PPE in line has its own function, like first PPE in line could be 
parsing the packet and extracting keys from it, second could be doing 
ingressACL, 3rd ingressQoS, 4th ingress lookup and so forth.



Why choose this NP design instead of Trio design, I don't know. I don't 
understand the upsides.



Downside is easy to understand, picture yourself as ucode developer, and you 
get task to 'add this magic feature in the ucode'.

Implementing it in Trio seems trivial, add the code in ucode, rock on.

On FP, you might have to go 'aww shit, I need to do this before PPE5 but after 
PPE3 in the pipeline, but the instruction cost it adds isn't in the budget that 
I have in the PPE4, crap, now I need to shuffle around and figure out which PPE 
in line runs what function to keep the PPS we promise to customer.



Let's look it from another vantage point, let's cook-up IPv6 header with 
crapton of EH, in Trio, PPE keeps churning it out, taking long time, but 
eventually it gets there or raises exception and gives up.

Every other PPE in the box is fully available to perform work.

Same thing in FP? You have HOLB: the PPEs in the line after this PPE are not 
doing anything and can't do anything until the PPE before them in line is done.
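A toy model of that difference (pure illustration in Python, not any vendor's microcode; the costs and PPE/stage counts are made up) — per-packet completion times when one packet in a burst is pathological:

    # Toy model: one pathological packet (cost 50) in a burst of cheap packets.
    costs = [1, 1, 50, 1, 1, 1, 1, 1]

    def run_to_completion(costs, ppes=4):
        # Trio-like: packets are sprayed to independent PPEs; a slow packet
        # only ties up its own PPE.
        free_at = [0.0] * ppes
        done = []
        for c in costs:
            i = free_at.index(min(free_at))
            free_at[i] += c
            done.append(free_at[i])
        return done

    def fixed_pipeline(costs, stages=4):
        # FP-like: every packet traverses every stage in order, so the slow
        # packet stalls each stage for everything queued behind it (HOLB).
        stage_free = [0.0] * stages
        done = []
        for c in costs:
            t = 0.0
            for s in range(stages):
                t = max(t, stage_free[s]) + c / stages
                stage_free[s] = t
            done.append(t)
        return done

    print("run-to-completion:", run_to_completion(costs))
    print("fixed pipeline   :", fixed_pipeline(costs))

In the run-to-completion case only the pathological packet is late; in the pipeline case every packet behind it is held up, which is the head-of-line blocking described above.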



Today Cisco and Juniper do 'proper' CoPP; that is, they do ingressACL before 
and after lookup. The 'before' pass is normally needed for ingressACL, but the 
after-lookup ingressACL is needed for CoPP (we only know after lookup whether it 
is a control-plane packet). Nokia doesn't do this at all, and I bet they can't do 
it, because if they'd add it in the core where it needs to be in line, total PPS 
would go down, as there is no budget for the additional ACL. Instead all 
control-plane packets from the ingress FP are sent to the control-plane FP, and 
inshallah we don't congest the connection there or it.





>

> Cheers,

> James.







--

  ++ytti


RE: Upstream bandwidth usage

2022-06-10 Thread Vasilenko Eduard via NANOG
The ONT always has optics for PON; they are inside (built-in) – this way is cheaper. 
OK, in this case it is not an SFP, because it is not "pluggable".
1G and 10G optics have a big cost difference for the ONT.

From: Dave Bell [mailto:m...@geordish.org]
Sent: Friday, June 10, 2022 11:09 AM
To: Vasilenko Eduard 
Cc: Mel Beckman ; Raymond Burkholder ; 
nanog@nanog.org
Subject: Re: Upstream bandwidth usage

We are rolling out XGS-PON everywhere which is 10G symmetric. Just because the 
PON runs at 10G, doesn't mean you need to provision all of your customers at 
10G.

We have a range of residential packages from 150Mbps up to 1Gbps symmetric. The 
ONT is the same in all situations. There is no SFP cost, due to it being a 
copper port. If we were to offer residential packages beyond 1G, a CPE swap 
would be required, but there is little demand for that... yet...

The future is bright for PON with NG-PON2, and 50G PON on their way.

Regards,
Dave

On Fri, 10 Jun 2022 at 08:54, Vasilenko Eduard via NANOG 
mailto:nanog@nanog.org>> wrote:
I did believe that it is about the cost of SFP on the CPE/ONT side: 5$ against 
7$ makes a big difference if you multiply by 100.

By the way, there are many deployments of 10G symmetric PON. It was promoted 
for "Enterprise clients".
CPE cost hurts in this case.
But some CPE could be 10GE and another 1GE upstream (10G downstream) on the 
same tree.

Ed/
-Original Message-
From: NANOG 
[mailto:nanog-bounces+vasilenko.eduard<mailto:nanog-bounces%2Bvasilenko.eduard>=huawei@nanog.org<mailto:huawei@nanog.org>]
 On Behalf Of Mel Beckman
Sent: Friday, June 10, 2022 4:11 AM
To: Raymond Burkholder mailto:r...@oneunified.net>>
Cc: nanog@nanog.org<mailto:nanog@nanog.org>
Subject: Re: Upstream bandwidth usage

If I’m not mistaken, it also depends on the optics in the splitter, given that 
GPON is bidirectional single-strand fiber.

-mel via cell

> On Jun 9, 2022, at 5:01 PM, Raymond Burkholder 
> mailto:r...@oneunified.net>> wrote:
>
> 
>
>> On 2022-06-09 17:35, Michael Thomas wrote:
>>
>>> On 6/9/22 4:31 PM, Mel Beckman wrote:
>>> Adam,
>>>
>>> Your point on asymmetrical technologies is excellent. But you may not be 
>>> aware that residential optical fiber is also asymmetrical. For example, 
>>> GPON, the latest ITU specified PON standard, and the most widely deployed, 
>>> calls for a 2.4 Gbps downstream and a 1.25 Gbps upstream optical line rate.
>>
>> Why would they mandate such a thing? That seems like purely an operator 
>> decision.
>
> There are also vendor issues involved.  I am glad that Mel mentioned 'optical 
> line' rate.  Which becomes a theoretical thing.  If the line cards aren't set 
> up with buffering properly, then line rate won't be seen.  And I think the 
> line cards can also be easily over-subscribed.  Oh, and due to the two or 
> three step fan-out of 8/16/32, upstream becomes even more limited.
>
> So, if you have FTTH with 1::1 house::port, then you are cooking with fire.  
> Else, it is the luck of the draw in terms of how conservative the ISP is 
> provisioning a GPON infrastructure.  Which, I suppose, depends if it is 1G or 
> 10G GPON.
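For a feel of the numbers behind the split-ratio point (GPON line rates as quoted above; the split ratios and overhead factor are assumptions for illustration):

    # Worst-case per-ONT share of a GPON tree at different split ratios.
    downstream_gbps = 2.4      # GPON downstream line rate
    upstream_gbps = 1.25       # GPON upstream line rate
    overhead = 0.10            # assumed framing/DBA overhead

    for split in (8, 16, 32, 64):
        down = downstream_gbps * (1 - overhead) / split * 1000   # Mbps per ONT
        up = upstream_gbps * (1 - overhead) / split * 1000
        print(f"1:{split:<2}  ~{down:5.0f} Mbps down, ~{up:4.0f} Mbps up per ONT when all are busy")

The whole point of PON is statistical sharing, so these are worst-case figures, but they show why the upstream side is the first thing to feel an aggressive split ratio.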


RE: Upstream bandwidth usage

2022-06-10 Thread Vasilenko Eduard via NANOG
I did believe that it is about the cost of SFP on the CPE/ONT side: 5$ against 
7$ makes a big difference if you multiply by 100.

By the way, there are many deployments of 10G symmetric PON. It was promoted 
for "Enterprise clients".
CPE cost hurts in this case.
But some CPE could be 10GE and another 1GE upstream (10G downstream) on the 
same tree.

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mel Beckman
Sent: Friday, June 10, 2022 4:11 AM
To: Raymond Burkholder 
Cc: nanog@nanog.org
Subject: Re: Upstream bandwidth usage

If I’m not mistaken, it also depends on the optics in the splitter, given that 
GPON is bidirectional single-strand fiber.

-mel via cell

> On Jun 9, 2022, at 5:01 PM, Raymond Burkholder  wrote:
> 
> 
> 
>> On 2022-06-09 17:35, Michael Thomas wrote:
>> 
>>> On 6/9/22 4:31 PM, Mel Beckman wrote:
>>> Adam,
>>> 
>>> Your point on asymmetrical technologies is excellent. But you may not be 
>>> aware that residential optical fiber is also asymmetrical. For example, 
>>> GPON, the latest ITU specified PON standard, and the most widely deployed, 
>>> calls for a 2.4 Gbps downstream and a 1.25 Gbps upstream optical line rate.
>> 
>> Why would they mandate such a thing? That seems like purely an operator 
>> decision.
> 
> There are also vendor issues involved.  I am glad that Mel mentioned 'optical 
> line' rate.  Which becomes a theoretical thing.  If the line cards aren't set 
> up with buffering properly, then line rate won't be seen.  And I think the 
> line cards can also be easily over-subscribed.  Oh, and due to the two or 
> three step fan-out of 8/16/32, upstream becomes even more limited.
> 
> So, if you have FTTH with 1::1 house::port, then you are cooking with fire.  
> Else, it is the luck of the draw in terms of how conservative the ISP is 
> provisioning a GPON infrastructure.  Which, I suppose, depends if it is 1G or 
> 10G GPON.


RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
Well, if something is stateless then it is not CGNAT; it is just a router that 
may be called a gateway.
It is very similar to what we have on the border between any two domains that 
have different data planes:

-  DC (VxLAN) and Backbone (MPLS)

-  Backbone and Metro (both MPLS)
For sure, it is better to avoid gateways because it is typically (not always) 
an additional hop that costs money.
But the router is 3x less expensive than CGNAT. Hence, I would like to point 
out that the problem is 3x smaller.
Ed/
From: Dave Bell [mailto:m...@geordish.org]
Sent: Monday, April 4, 2022 9:21 PM
To: Nicholas Warren 
Cc: Vasilenko Eduard ; Abraham Y. Chen 
; Pascal Thubert (pthubert) ; Justin 
Streiner ; NANOG 
Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC


This seems pretty unworkable.

We would now all need to maintain large CG-NAT boxes in the network to 
decapsulate the traffic from a source to the subscriber. While it does seem like 
it would be fairly stateless, it is not improving things.

Assuming the end host is sending traffic with the magic header already affixed, 
we now need to update literally every IP stack in existence if it wants to take 
part.

I need to update all my customer facing routers to have some fancy feature to 
look deep into the packet to check they are not circumventing BCP38.

This all seems like a lot of work just to not deploy IPv6.

Regards,
Dave

On Mon, 4 Apr 2022 at 15:37, Nicholas Warren 
mailto:nwar...@barryelectric.com>> wrote:
The vocabulary is distracting...

In practice this extends IPv4 addresses by 32 bits, making them 64 bits in 
total. They are referring to the top 32 bits (240.0.0.0/6<http://240.0.0.0/6>) 
as a “shaft.” The bottom 32 bits make up the "realm."

Here is the way my teeny tiny brain understands it:
1. We get our shafts from ARIN. I get 240.0.0.1; you get 240.0.0.2.
2. We announce our shiny new shafts in BGP. Yes, we announce the /32 that is 
our shaft.
3. We setup our networks to use the bottom 32 bits however we see fit in our 
network. (for the example, I assign my client 1.2.3.4 and you assign your 
client 4.3.2.1)
4. Somehow, we get DNS to hand out 64 bit addresses, probably through a AAAA 
and just ignoring the last 64 bits.
5. My client, assigned the address 1.2.3.4 in my realm, queries your client's 
address "shaft:240.0.0.2; realm 4.3.2.1" from DNS.
6. My client then sends your client a packet (IPv4 source: 240.0.0.1; IPv4 
destination: 240.0.0.2; Next Header: 4 (IPv4); IPv4 source: 1.2.3.4; IPv4 
destination: 4.3.2.1)
7. 240.0.0.0/6<http://240.0.0.0/6> is routable on plain old normal internet 
routers, so nothing needs to be changed. (lol)
8a. Your router receives the packet, and your router does special things with 
its shaft. (IPv4 source: 240.0.0.1; IPv4 destination: _4.3.2.1_; Next Header: 4 
(IPv4); IPv4 source: 1.2.3.4; IPv4 destination: _240.0.0.2_)
8b. Alternatively, every router in your network could determine next hop by 
investigating the second header when the destination is your shaft.
9. Your client receives the packet and can either handle this case in a special 
way or translate it to a v6 address for higher level applications.
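A minimal sketch of the header pair from step 6, assuming plain IPv4-in-IPv4 (protocol 4) encapsulation and leaving the checksums at zero for brevity; the field values are the hypothetical ones from the walkthrough, not a normative format:

    import socket, struct

    def ipv4_header(src, dst, proto, payload_len, ttl=64):
        # Minimal 20-byte IPv4 header, no options, checksum left at 0 for brevity.
        ver_ihl = (4 << 4) | 5
        return struct.pack("!BBHHHBBH4s4s",
                           ver_ihl, 0, 20 + payload_len, 0, 0, ttl, proto, 0,
                           socket.inet_aton(src), socket.inet_aton(dst))

    # Inner header: realm addresses, carrying TCP (protocol 6).
    inner = ipv4_header("1.2.3.4", "4.3.2.1", proto=6, payload_len=0)
    # Outer header: shaft addresses, protocol 4 = IPv4-in-IPv4.
    outer = ipv4_header("240.0.0.1", "240.0.0.2", proto=4, payload_len=len(inner))

    packet = outer + inner
    print(len(packet), "bytes of IP headers before the transport payload")   # 40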

No, as a matter of fact, I don't know what I'm talking about. Hopefully one of the 
authors can correct my walkthrough of how it works 

Shaft and realm are fun words. I see why they picked them.

- Nich

From: NANOG 
mailto:barryelectric....@nanog.org>>
 On Behalf Of Vasilenko Eduard via NANOG
Sent: Monday, April 4, 2022 3:28 AM
To: Abraham Y. Chen mailto:ayc...@avinta.com>>; Pascal 
Thubert (pthubert) mailto:pthub...@cisco.com>>; Justin 
Streiner mailto:strein...@gmail.com>>
Cc: NANOG mailto:nanog@nanog.org>>
Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

2)    When you extend each floor to use the whole IPv4 address pool, however, 
you are essentially talking about covering the entire surface of the earth. Then, 
there are no isolated buildings with isolated floors to deploy your model 
anymore. There is only one spherical layer of physical earth surface for you to 
use as a realm, which is the current IPv4 deployment. How could you still have 
multiple full IPv4 address sets deployed, yet not see their identical twins, 
triplets, etc.? Are you proposing multiple spherical layers of "realms", one on 
top of the other?

It is the same as what I was trying to explain to Pascal: how to map the 
2-level hierarchy of the draft (“Shaft”:”Realm”) to the real world?
I am sure that it is possible to do this if we assume that the real world has 2 
levels of hierarchy where the high level is “BGP AS”.
“BGP AS” is the name that everybody understands, so there is no need for a new name, “Shaft”.

Ed/
From: Abraham Y. Chen [mailto:ayc...@avinta.com<mailto:ayc...@avinta.com>]
Sent: Saturday, April 2, 2022 12:45 AM
To: Pascal Thubert 

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
e: 1.2.3.4; IPv4 destination: _240.0.0.2_)
>> 
>>> 8b. Alternatively, every router in your network could determine next 
>>> hop by investigating the second header when the destination is your shaft.
>> 
>> 8b is not suggested, because in your example I could be the Internet.
>> 
>> 
>>> 9. Your client receives the packet and can either handle this case 
>>> in a special way or translate it to a v6 address for higher level 
>>> applications.
>> 
>> The socket be updated to could understand the AA and play ball. Or 
>> statelesslessly NAT to IPv6, yes. This uses a well known IID that the IPv6 
>> stack would autoconf it automatically when handed out a prefix in the F000/6 
>> range. Note that it's a also /64 per host, which many have been asking for a 
>> while.
>> 
>> 
>>> No, as a matter of fact, I don't know I'm talking about. Hopefully 
>>> one of the authors can correct my walkthrough of how it works 
>> 
>> You were mostly there. Just that routing inside the shaft is probably a 
>> single IGP with no prefix attached, just links and router IDs.
>> 
>>> 
>>> Shaft and realm are fun words. I see why they picked them.
>>> 
>> 
>> Cool 
>> 
>> Keep safe;
>> 
>> Pascal
>> 
>> 
>>> - Nich
>>> 
>>> From: NANOG  On 
>>> Behalf Of Vasilenko Eduard via NANOG
>>> Sent: Monday, April 4, 2022 3:28 AM
>>> To: Abraham Y. Chen ; Pascal Thubert (pthubert) 
>>> ; Justin Streiner 
>>> Cc: NANOG 
>>> Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re:
>>> 202203261833.AYC
>>> 
>>> 2)When you extend each floor to use the whole IPv4 address pool, 
>>> however, you are essential talking about covering the entire surface 
>>> of the earth. Then, there is no isolated buildings with isolated 
>>> floors to deploy your model anymore. There is only one spherical 
>>> layer of physical earth surface for you to use as a realm, which is 
>>> the current IPv4 deployment. How could you still have multiple full 
>>> IPv4 address sets deployed, yet not seeing their identical twins, 
>>> triplets, etc.? Are you proposing multiple spherical layers of "realms", 
>>> one on top of the other?
>>> 
>>> It is the same as what I was trying to explain to Pascal. How to map 
>>> the 2-level hierarchy of the draft (“Shaft”:”Realm”) to the real world?
>>> I am sure that it is possible to do this if assume that the real 
>>> world has
>>> 2 levels of hierarchy where the high level is “BGP AS”.
>>> “BGP AS” is the name that everybody understands, No need for a new 
>>> name “Shaft”.
>>> 
>>> Ed/
>>> From: Abraham Y. Chen [mailto:ayc...@avinta.com]
>>> Sent: Saturday, April 2, 2022 12:45 AM
>>> To: Pascal Thubert (pthubert) <mailto:pthub...@cisco.com>; Vasilenko 
>>> Eduard <mailto:vasilenko.edu...@huawei.com>; Justin Streiner 
>>> <mailto:strein...@gmail.com>
>>> Cc: NANOG <mailto:nanog@nanog.org>
>>> Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re:
>>> 202203261833.AYC
>>> 
>>> Hi, Pascal:
>>> 
>>> 1)" ...  for the next version. ...":I am not sure that I 
>>> can wait for so long, because I am asking for the basics. The reason 
>>> that I asked for an IP packet header example of your proposal is to 
>>> visualize what do you mean by the model of "realms and shafts in a 
>>> multi-level building". The presentation in the draft  sounds okay, 
>>> because the floors are physically isolated from one another. And, 
>>> even the building is isolated from other buildings. This is pretty 
>>> much how PBX numbering plan worked.
>>> 
>>> 2)When you extend each floor to use the whole IPv4 address pool, 
>>> however, you are essential talking about covering the entire surface 
>>> of the earth. Then, there is no isolated buildings with isolated 
>>> floors to deploy your model anymore. There is only one spherical 
>>> layer of physical earth surface for you to use as a realm, which is 
>>> the current IPv4 deployment. How could you still have multiple full 
>>> IPv4 address sets deployed, yet not seeing their identical twins, 
>>> triplets, etc.? Are you proposing multiple spherical layers of "realms", 
>>> one on top of the other?
>>> 
>>> 2)When I c

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
The 240.0.0.1 address is assigned not to the router; it is assigned to the Realm.
It is up to the realm owner (ISP or Enterprise) which particular router (or 
routers) would do the translation between realms.

-Original Message-
From: Pascal Thubert (pthubert) [mailto:pthub...@cisco.com] 
Sent: Monday, April 4, 2022 7:20 PM
To: Nicholas Warren ; Vasilenko Eduard 
; Abraham Y. Chen ; Justin 
Streiner 
Cc: NANOG 
Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hello Nicholas

Sorry for the distraction with the names; I did not forge realm, found it in 
the art. OTOH I created shaft because of elevator shaft, could have used 
staircase.

 
> In practice this extends IPv4 addresses by 32 bits, making them 64 
> bits in total. They are referring to the top 32 bits (240.0.0.0/6) as a 
> “shaft.”
> The bottom 32 bits make up the "realm."



 
> Here is the way my teeny tiny brain understands it:
> 1. We get our shafts from ARIN. I get 240.0.0.1; you get 240.0.0.2.

One address per realm, yes. Then we create an IXP where my 240.0.0.1 discovers 
your 240.0.0.2.
Depending on the size of the shaft, we can have an IGP, probably not BGP 
though, because the 240.0.0.1 address could literally be the router ID and 
there would be nothing else advertised inside the shaft.


> 2. We announce our shiny new shafts in BGP. Yes, we announce the /32 
> that is our shaft.

Inside your realm you inject 240.0.0.0/6. Your realm router(s) attract all 
traffic to the shaft. Traffic that remains inside the realm is routed normally, 
no IP in IP. Traffic towards another realm has the outer 240.0.0.2 destination.



> 3. We setup our networks to use the bottom 32 bits however we see fit 
> in our network. (for the example, I assign my client 1.2.3.4 and you 
> assign your client 4.3.2.1) 4. Somehow, we get DNS to hand out 64 bit 
> addresses, probably through a  and just ignoring the last 64 bits.

Or a new AA, yes

4?


> 5. My client, assigned the address 1.2.3.4 in my realm, queries your 
> client's address "shaft:240.0.0.2; realm 4.3.2.1" from DNS.

Yes



> 6. My client then sends your client a packet (IPv4 source: 240.0.0.1; 
> IPv4
> destination: 240.0.0.2; Next Header: 4 (IPv4); IPv4 source: 1.2.3.4; 
> IPv4
> destination: 4.3.2.1) 7. 240.0.0.0/6 is routable on plain old normal 
> internet routers, so nothing needs to be changed. (lol)

Hopefully the routers are less subject to 240 hiccups than the hosts. I'm not 
aware of code in our boxes that does anything special about it but then the 
code base is large.
Now, 240 is just because F000/6 is free in IPv6, so you can literally place the 
2 IPv4 addresses in one IPv6 /64. Otherwise there will be some nasty little 
natting there too.

7?

> 8a. Your router receives the packet, and your router does special things with 
> its shaft.
> (IPv4 source: 240.0.0.1; IPv4 destination: _4.3.2.1_; Next Header: 4 
> (IPv4); IPv4 source: 1.2.3.4; IPv4 destination: _240.0.0.2_)

> 8b. Alternatively, every router in your network could determine next 
> hop by investigating the second header when the destination is your shaft.

8b is not suggested, because in your example I could be the Internet.


> 9. Your client receives the packet and can either handle this case in 
> a special way or translate it to a v6 address for higher level applications.

The socket could be updated to understand the AA and play ball. Or statelessly 
NAT to IPv6, yes. This uses a well-known IID that the IPv6 stack would autoconf 
automatically when handed out a prefix in the F000/6 range. Note that it's also 
a /64 per host, which many have been asking for a while.

 
> No, as a matter of fact, I don't know I'm talking about. Hopefully one 
> of the authors can correct my walkthrough of how it works 

You were mostly there. Just that routing inside the shaft is probably a single 
IGP with no prefix attached, just links and router IDs.

> 
> Shaft and realm are fun words. I see why they picked them.
> 

Cool 

Keep safe;

Pascal


> - Nich
> 
> From: NANOG  On 
> Behalf Of Vasilenko Eduard via NANOG
> Sent: Monday, April 4, 2022 3:28 AM
> To: Abraham Y. Chen ; Pascal Thubert (pthubert) 
> ; Justin Streiner 
> Cc: NANOG 
> Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re:
> 202203261833.AYC
> 
> 2)    When you extend each floor to use the whole IPv4 address pool, 
> however, you are essential talking about covering the entire surface 
> of the earth. Then, there is no isolated buildings with isolated 
> floors to deploy your model anymore. There is only one spherical layer 
> of physical earth surface for you to use as a realm, which is the 
> current IPv4 deployment. How could you still have multiple full IPv4 
> address sets deployed, yet not seeing their identical twins, trip

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
Hi Pascal,
The world moved to 32-bit AS# not long ago. For sure, AS# would not cross 28 bits.
I do not understand why you need something different from the AS# to point to the 
Realm.
The one who would need a new realm could go to the RIR and ask for an AS. The Realm 
would be calculated automatically as 240.0.0.0+AS#.
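For illustration, the arithmetic is trivial (the example ASNs are arbitrary):

    import ipaddress

    def realm_for_asn(asn):
        # 240.0.0.0 + AS number, as proposed above.
        # Note: a /4 leaves 28 bits of room for the ASN, a /6 only 26.
        return ipaddress.IPv4Address(int(ipaddress.IPv4Address("240.0.0.0")) + asn)

    print(realm_for_asn(13335))   # 240.0.52.23
    print(realm_for_asn(65551))   # 240.1.0.15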

I fail to see why you continue talking about IBM property.
Why do you need it?
Why do you believe IBM would grant it to the community?
Eduard
-Original Message-
From: Pascal Thubert (pthubert) [mailto:pthub...@cisco.com] 
Sent: Monday, April 4, 2022 7:27 PM
To: Vasilenko Eduard ; Nicholas Warren 
; Abraham Y. Chen ; Justin 
Streiner 
Cc: NANOG 
Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hello Eduard

As (badly) written, all ASes and IP addresses that exist today in the internet 
could be either reused or moved in any parallel realm. 

Now that the ASN space is 32 bits, there would not be room for non-ASN based 
shaft routers. I fail to see the value in conflating.

IBM could move 9.0.0.0 to another realm, and then grow outside of 9.0.0.0 to 
whatever they need inside. The YADA format would not be much worse than the 
socks they used at the time I was there.

That's the way I prefer it, but happy to see the little bird fly from the nest 
and become what it likes.

Keep safe;

Pascal

> -Original Message-
> From: Vasilenko Eduard 
> Sent: lundi 4 avril 2022 16:52
> To: Nicholas Warren ; Abraham Y. Chen 
> ; Pascal Thubert (pthubert) ; 
> Justin Streiner 
> Cc: NANOG 
> Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re:
> 202203261833.AYC
> 
> Hi Nicholas,
> In fact, your explanation is much better than the draft explanation.
> Could I propose a small modification?
> Every AS announces 240.0.0.0 + AS# that they already have then there 
> is no need for "shafts from ARIN" - AS# is already distributed and unique.
> Eduard
> -Original Message-
> From: Nicholas Warren [mailto:nwar...@barryelectric.com]
> Sent: Monday, April 4, 2022 5:33 PM
> To: Vasilenko Eduard ; Abraham Y. Chen 
> ; Pascal Thubert (pthubert) ; 
> Justin Streiner 
> Cc: NANOG 
> Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re:
> 202203261833.AYC
> 
> The vocabulary is distracting...
> 
> In practice this extends IPv4 addresses by 32 bits, making them 64 
> bits in total. They are referring to the top 32 bits (240.0.0.0/6) as a 
> “shaft.”
> The bottom 32 bits make up the "realm."
> 
> Here is the way my teeny tiny brain understands it:
> 1. We get our shafts from ARIN. I get 240.0.0.1; you get 240.0.0.2.
> 2. We announce our shiny new shafts in BGP. Yes, we announce the /32 
> that is our shaft.
> 3. We setup our networks to use the bottom 32 bits however we see fit 
> in our network. (for the example, I assign my client 1.2.3.4 and you 
> assign your client 4.3.2.1) 4. Somehow, we get DNS to hand out 64 bit 
> addresses, probably through a  and just ignoring the last 64 bits.
> 5. My client, assigned the address 1.2.3.4 in my realm, queries your 
> client's address "shaft:240.0.0.2; realm 4.3.2.1" from DNS.
> 6. My client then sends your client a packet (IPv4 source: 240.0.0.1; 
> IPv4
> destination: 240.0.0.2; Next Header: 4 (IPv4); IPv4 source: 1.2.3.4; 
> IPv4
> destination: 4.3.2.1) 7. 240.0.0.0/6 is routable on plain old normal 
> internet routers, so nothing needs to be changed. (lol) 8a. Your 
> router receives the packet, and your router does special things with its 
> shaft.
> (IPv4 source: 240.0.0.1; IPv4 destination: _4.3.2.1_; Next Header: 4 
> (IPv4); IPv4 source: 1.2.3.4; IPv4 destination: _240.0.0.2_) 8b.
> Alternatively, every router in your network could determine next hop 
> by investigating the second header when the destination is your shaft.
> 9. Your client receives the packet and can either handle this case in 
> a special way or translate it to a v6 address for higher level applications.
> 
> No, as a matter of fact, I don't know I'm talking about. Hopefully one 
> of the authors can correct my walkthrough of how it works 
> 
> Shaft and realm are fun words. I see why they picked them.
> 
> - Nich
> 
> From: NANOG  On 
> Behalf Of Vasilenko Eduard via NANOG
> Sent: Monday, April 4, 2022 3:28 AM
> To: Abraham Y. Chen ; Pascal Thubert (pthubert) 
> ; Justin Streiner 
> Cc: NANOG 
> Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re:
> 202203261833.AYC
> 
> 2)    When you extend each floor to use the whole IPv4 address pool, 
> however, you are essential talking about covering the entire surface 
> of the earth. Then, there is no isolated buildings with isolated 
> floors to deploy your model anymore. T

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
Hi Nicholas,
In fact, your explanation is much better than the draft explanation.
Could I propose a small modification?
Every AS announces 240.0.0.0 + the AS# that they already have; then there is no 
need for "shafts from ARIN" - the AS# is already distributed and unique.
Eduard
-Original Message-
From: Nicholas Warren [mailto:nwar...@barryelectric.com] 
Sent: Monday, April 4, 2022 5:33 PM
To: Vasilenko Eduard ; Abraham Y. Chen 
; Pascal Thubert (pthubert) ; Justin 
Streiner 
Cc: NANOG 
Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

The vocabulary is distracting...

In practice this extends IPv4 addresses by 32 bits, making them 64 bits in 
total. They are referring to the top 32 bits (240.0.0.0/6) as a “shaft.” The 
bottom 32 bits make up the "realm."

Here is the way my teeny tiny brain understands it:
1. We get our shafts from ARIN. I get 240.0.0.1; you get 240.0.0.2.
2. We announce our shiny new shafts in BGP. Yes, we announce the /32 that is 
our shaft.
3. We setup our networks to use the bottom 32 bits however we see fit in our 
network. (for the example, I assign my client 1.2.3.4 and you assign your 
client 4.3.2.1)
4. Somehow, we get DNS to hand out 64 bit addresses, probably through a AAAA 
and just ignoring the last 64 bits.
5. My client, assigned the address 1.2.3.4 in my realm, queries your client's 
address "shaft:240.0.0.2; realm 4.3.2.1" from DNS.
6. My client then sends your client a packet (IPv4 source: 240.0.0.1; IPv4 
destination: 240.0.0.2; Next Header: 4 (IPv4); IPv4 source: 1.2.3.4; IPv4 
destination: 4.3.2.1)
7. 240.0.0.0/6 is routable on plain old normal internet routers, so nothing 
needs to be changed. (lol)
8a. Your router receives the packet, and your router does special things with 
its shaft. (IPv4 source: 240.0.0.1; IPv4 destination: _4.3.2.1_; Next Header: 4 
(IPv4); IPv4 source: 1.2.3.4; IPv4 destination: _240.0.0.2_)
8b. Alternatively, every router in your network could determine next hop by 
investigating the second header when the destination is your shaft.
9. Your client receives the packet and can either handle this case in a special 
way or translate it to a v6 address for higher level applications.

No, as a matter of fact, I don't know what I'm talking about. Hopefully one of the 
authors can correct my walkthrough of how it works 

Shaft and realm are fun words. I see why they picked them.

- Nich

From: NANOG  On Behalf Of 
Vasilenko Eduard via NANOG
Sent: Monday, April 4, 2022 3:28 AM
To: Abraham Y. Chen ; Pascal Thubert (pthubert) 
; Justin Streiner 
Cc: NANOG 
Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

2)    When you extend each floor to use the whole IPv4 address pool, however, 
you are essential talking about covering the entire surface of the earth. Then, 
there is no isolated buildings with isolated floors to deploy your model 
anymore. There is only one spherical layer of physical earth surface for you to 
use as a realm, which is the current IPv4 deployment. How could you still have 
multiple full IPv4 address sets deployed, yet not seeing their identical twins, 
triplets, etc.? Are you proposing multiple spherical layers of "realms", one on 
top of the other?

It is the same as what I was trying to explain to Pascal. How to map the 
2-level hierarchy of the draft (“Shaft”:”Realm”) to the real world?
I am sure that it is possible to do this if assume that the real world has 2 
levels of hierarchy where the high level is “BGP AS”.
“BGP AS” is the name that everybody understands, No need for a new name “Shaft”.

Ed/
From: Abraham Y. Chen [mailto:ayc...@avinta.com]
Sent: Saturday, April 2, 2022 12:45 AM
To: Pascal Thubert (pthubert) <mailto:pthub...@cisco.com>; Vasilenko Eduard 
<mailto:vasilenko.edu...@huawei.com>; Justin Streiner 
<mailto:strein...@gmail.com>
Cc: NANOG <mailto:nanog@nanog.org>
Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hi, Pascal:

1)    " ...  for the next version. ...    ":    I am not sure that I can wait 
for so long, because I am asking for the basics. The reason that I asked for an 
IP packet header example of your proposal is to visualize what do you mean by 
the model of "realms and shafts in a multi-level building". The presentation in 
the draft  sounds okay, because the floors are physically isolated from one 
another. And, even the building is isolated from other buildings. This is 
pretty much how PBX numbering plan worked. 

2)    When you extend each floor to use the whole IPv4 address pool, however, 
you are essential talking about covering the entire surface of the earth. Then, 
there is no isolated buildings with isolated floors to deploy your model 
anymore. There is only one spherical layer of physical earth surface for you to 
use as a realm, which is the current IPv4 deployment. How could you 

RE: Enhance CG-NAT Re: V6 still not supported

2022-04-04 Thread Vasilenko Eduard via NANOG
Hi Abraham,
I propose you improve EzIP by adding advice in the draft on how to randomize the 
choice of small-site prefixes inside 240/4 (like ULA does).
That would give a chance for the mergers that a business may need, by minimizing 
the probability of address duplication inside the 240/4 block (which everybody 
would use).

You have not discussed in the document the CGNAT case that is typically called 
NAT444 (double NAT translation).
I assume it is possible, but it would be a big question how to coordinate one 
240/4 distribution between all subscribers, because the address space between 
Carrier and Subscriber is private too.

I do not see a big difference between EzIP and the NAPT that we have right now. 
Explanation:
Initially, the majority of servers on the internet would not be capable of reading 
Ez options (the private 240/4 address extension).
Hence, the web server would see just UDP:Public_IP.
The gateway (the one exposing the 240/4 options) would additionally need to 
translate UDP ports to avoid collisions (as usual for NAPT).
The gateway could not stop doing NAPT until the last server on the internet became 
capable of reading the address extension (240/4) in options, because the gateway 
would not know which servers are capable of parsing EzIP options.
That means NEVER, at least not in this century. Hence, the additional value from 
EzIP is small, because the primary job would still be done by NAPT.
You could try to patch this problem: if a new server signaled to the gateway that 
it is capable of understanding EzIP options, then overlapping UDP ports from the 
same public IP address would not be a problem, because the server could 
additionally use the private address space for traffic multiplexing.
IMHO: it would be a very dirty workaround if servers needed to teach their 
capabilities to every NAPT device.
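A minimal sketch of the port translation the gateway would still have to do (illustration only: a naive allocator with no timeouts and no per-protocol separation):

    # Toy NAPT table: many inside (addr, port) pairs share one public IPv4 address.
    class Napt:
        def __init__(self, public_ip, first_port=1024):
            self.public_ip = public_ip
            self.next_port = first_port
            self.out = {}    # (inside_ip, inside_port) -> public_port
            self.back = {}   # public_port -> (inside_ip, inside_port)

        def translate(self, inside_ip, inside_port):
            key = (inside_ip, inside_port)
            if key not in self.out:               # allocate a fresh public port
                self.out[key] = self.next_port
                self.back[self.next_port] = key
                self.next_port += 1
            return self.public_ip, self.out[key]

    napt = Napt("203.0.113.7")
    print(napt.translate("240.0.1.10", 5000))   # ('203.0.113.7', 1024)
    print(napt.translate("240.0.2.10", 5000))   # same inside port, different public port
    print(napt.back[1025])                      # reverse mapping for return traffic

The point is that this mapping cannot be retired until essentially every remote server can parse the 240/4 extension, so in practice it never goes away.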

Sorry, I have not read all 55 pages, but the principal architecture questions 
cannot be answered from the first 9 pages.
Your first pages are oriented toward low-level engineers (“for dummies”).

Eduard
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Abraham Y. Chen
Sent: Sunday, April 3, 2022 6:14 AM
To: Matthew Petach ; Masataka Ohta 

Cc: nanog@nanog.org
Subject: Enhance CG-NAT Re: V6 still not supported

Hi, Matt:

1)The challenge that you described can be resolved as one part of the 
benefits from the EzIP proposal that I introduced to this mailing list about 
one month ago. That discussion has gyrated into this thread more concerned 
about IPv6 related topics, instead. If you missed that introduction, please 
have a look at the following IETF draft to get a feel of what could be done:


https://datatracker.ietf.org/doc/html/draft-chen-ati-adaptive-ipv4-address-space

2)   With respect to the specific case you brought up, consider the EzIP 
address pool (240/4 netblock with about 256M addresses) as the replacement to 
that of CG-NAT (100.64/10 netblock with about 4M addresses). This much bigger 
(2^6 times) pool enables every customer premises to get a static IP address 
from the 240/4 pool to operate in simple router mode, instead of requesting for 
a static port number and still operates in NAT mode. Within each customer 
premises, the conventional three private netblocks may be used to handle the 
hosts (IoTs).

3)There is a whitepaper that presents an overview of other possibilities 
based on EzIP approach:

https://www.avinta.com/phoenix-1/home/RevampTheInternet.pdf

Hope the above makes sense to you.

Regards,


Abe (2022-04-02 23:10)






On 2022-04-02 16:25, Matthew Petach wrote:


On Fri, Apr 1, 2022 at 6:37 AM Masataka Ohta 
mailto:mo...@necom830.hpcl.titech.ac.jp>> 
wrote:

If you make the stateful NATs static, that is, each
private address has a statically configured range of
public port numbers, it is extremely easy because no
logging is necessary for police grade audit trail
opacity.
Masataka Ohta

Hi Masataka,
One quick question.  If every host is granted a range of public port
numbers on the static stateful NAT device, what happens when
two customers need access to the same port number?

Because there's no way in a DNS NS entry to specify a
port number, if I need to run a DNS server behind this
static NAT, I *have* to be given port 53 in my range;
there's no other way to make DNS work.  This means
that if I have two customers that each need to run a
DNS server, I have to put them on separate static
NAT boxes--because they can't both get access to
port 53.

This limits the effectiveness of a stateful static NAT
box to the number of customers that need hard-wired
port numbers to be mapped through; which, depending
on your customer base, could end up being all of them,
at which point you're back to square one, with every
customer needing at least 1 IPv4 address dedicated
to them on the NAT device.

Either that, or you simply tell your customers "so sorry
you didn't get on the Internet soon enough; you're all
second 

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-04-04 Thread Vasilenko Eduard via NANOG
2)    When you extend each floor to use the whole IPv4 address pool, however, 
you are essentially talking about covering the entire surface of the earth. Then, 
there are no isolated buildings with isolated floors to deploy your model 
anymore. There is only one spherical layer of physical earth surface for you to 
use as a realm, which is the current IPv4 deployment. How could you still have 
multiple full IPv4 address sets deployed, yet not see their identical twins, 
triplets, etc.? Are you proposing multiple spherical layers of "realms", one on 
top of the other?

It is the same as what I was trying to explain to Pascal: how to map the 
2-level hierarchy of the draft (“Shaft”:”Realm”) to the real world?
I am sure that it is possible to do this if we assume that the real world has 2 
levels of hierarchy where the high level is “BGP AS”.
“BGP AS” is the name that everybody understands, so there is no need for a new name, “Shaft”.

Ed/
From: Abraham Y. Chen [mailto:ayc...@avinta.com]
Sent: Saturday, April 2, 2022 12:45 AM
To: Pascal Thubert (pthubert) ; Vasilenko Eduard 
; Justin Streiner 
Cc: NANOG 
Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hi, Pascal:

1)" ...  for the next version. ...":I am not sure that I can wait 
for so long, because I am asking for the basics. The reason that I asked for an 
IP packet header example of your proposal is to visualize what do you mean by 
the model of "realms and shafts in a multi-level building". The presentation in 
the draft  sounds okay, because the floors are physically isolated from one 
another. And, even the building is isolated from other buildings. This is 
pretty much how PBX numbering plan worked.

2)When you extend each floor to use the whole IPv4 address pool, however, 
you are essential talking about covering the entire surface of the earth. Then, 
there is no isolated buildings with isolated floors to deploy your model 
anymore. There is only one spherical layer of physical earth surface for you to 
use as a realm, which is the current IPv4 deployment. How could you still have 
multiple full IPv4 address sets deployed, yet not seeing their identical twins, 
triplets, etc.? Are you proposing multiple spherical layers of "realms", one on 
top of the other?

2)When I cited the DotConnectAfrica graphic logo as a visual model for the 
EzIP deployment over current IPv4, I was pretty specific that each RAN was 
tethered from the current Internet core via one IPv4 address. We were very 
careful about isolating the netblocks in terms of which one does what. In other 
words, even though the collection of RANs form a parallel cyberspace to the 
Internet, you may look at each RAN as an isolated balloon for others. So that 
each RAN can use up the entire 240/4 netblock.

Please clarify your configuration.

Thanks,


Abe (2022-04-01 17:44)




On 2022-04-01 10:55, Abraham Y. Chen wrote:
On 2022-04-01 10:00, Pascal Thubert (pthubert) wrote:
Makes sense, Abe, for the next version.

Note that the intention is NOT any to ANY. A native IPv6 IoT device can only 
talk to another IPv6 device, where that other device may use a YATT address or 
any other IPv6 address.
But it cannot talk to a YADA node. That’s what I mean by baby steps for those 
who want to.

Keep safe;

Pascal

From: Abraham Y. Chen 
Sent: vendredi 1 avril 2022 15:49
To: Vasilenko Eduard 
; Pascal 
Thubert (pthubert) ; Justin 
Streiner 
Cc: NANOG 
Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hi, Pascal:

What I would appreciate is an IP packet header design/definition layout, 
word-by-word, ideally in bit-map style, of an explicit presentation of all IP 
addresses involved from one IoT in one realm to that in the second realm. This 
will provide a clearer picture of how the real world implementation may look 
like.

Thanks,


Abe (2022-04-01 09:48)


On 2022-04-01 08:49, Vasilenko Eduard wrote:
As I understand it: “IPv4 Realms” connected through the “Shaft” should be capable 
of carrying a plain IPv4 header (or else why all of this).
Then the Gateway in the Shaft should change headers (from IPv4 to IPv6).
Who should implement this gateway and why? They would have to be formally 
appointed to such an exercise, right?
Map this 2-level hierarchy to the real world – you may fail at this.
Ed/
From: Pascal Thubert (pthubert) [mailto:pthub...@cisco.com]
Sent: Friday, April 1, 2022 3:41 PM
To: Vasilenko Eduard 
; Justin 
Streiner ; Abraham Y. Chen 

Subject: RE: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC

Hello Eduard:

Did you just demonstrate that POPs cannot exist? Or that there cannot be a 
Default Free Zone?
I agree with your real world issue that some things will have to be 

RE: Let's Focus on Moving Forward Re: V6 still not supported re: 202203261833.AYC

2022-03-31 Thread Vasilenko Eduard via NANOG
IMHO: the IETF is only partially guilty. Who could have predicted in 1992-1994 
that:

- Wireless would become so popular (WiFi is from 1997) and wireless would 
emulate multicast so badly (hi SLAAC)
- Hardware forwarding (PFE) would be invented (1997) that would have a big 
additional cost to implement Enhanced Headers
- Encryption would never have a small enough cost to make it mandatory
- Router would be available in every smallest thing that makes distributed 
address acquisition redundant (hi SLAAC)

We should be fair - it was not possible to guess.

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Joe Maimon
Sent: Thursday, March 31, 2022 3:01 AM
To: Tom Beecher 
Cc: NANOG 
Subject: Re: Let's Focus on Moving Forward Re: V6 still not supported re: 
202203261833.AYC



Tom Beecher wrote:
>
> If the IETF has really been unable to achieve consensus on properly
> supporting the currently still dominant internet protocol, that is
> seriously problematic and a huge process failure.
>
>
> That is not an accurate statement.
>
> The IETF has achieved consensus on this topic. It's explained here by 
> Brian Carpenter.
>
> https://mailarchive.ietf.org/arch/msg/int-area/qWaHXBKT8BOx208SbwWILDX
> yAUA/

As I have explained with my newly introduced consensus standards, there is no 
such consensus.

To reiterate my consensus standards, consensus is only to be considered as 
amongst stakeholders and IPv6 specific related stakes are not relevant to IPv4. 
If you consider the reverse to be true as well, I think my version of consensus 
would achieve a much wider and diverse consensus than the the stated IETF's 
consensus.

Once a consensus has been proven invalid its beyond obnoxious to cling to it as 
though it maintains its own reality field.

>
> He expressly states with many +1s that if something IPv4 related needs 
> to get worked on , it will be worked on,

IPv4 still needs address exhaustion solutions.

> but the consensus solution to V4 address exhaustion was IPng that 
> became IPv6, so that is considered a solved problem.

IPv6 is not a solution. It's a replacement that does not have the same problem. 
Which could be a solution to the problem, but only if the replacement happens 
on schedule. However, so long as the replacement hasn't happened, we still are 
dealing with the problem.

The IETF made a stupendously bad bet that IPv6 would happen in time. 
That is the kind of bet that you had better be right about. They were a 
decade+ wrong. That they have the audacity and temerity to continue
doubling down on that would be funny if it wasn't so outrageous, wrong and 
harmful.

Let us re-examine the premise. When did it become acceptable to quash work on 
one protocol because of the existence of another one that is preferred by the 
quashers?

Or in other words, the way you are framing things makes it seem as if the IETF 
has with intent and malice chosen to extend or at the very least ignore 
exhaustion issues for actual internet users so as to rig the system for their 
preferred outcome.

>
> Some folks don't LIKE the solution, as is their right to do.

I agree. I like most of IPv6 just fine. Not SLAAC, not multicast L2 resolution, 
not addressing policy, not the chaos of choice of inadequate interoperability 
approaches, not the denial of features desired by users, not the PMTUD, not the 
fragmentation, and many other warts. I don't even like the notation schemes. 
They require multiple vision passes.

I do like the extra bits. Just not the way they are being frittered.

The real crux of the matter is that it did not work. Address exhaustion has not 
been alleviated. For many years now and who knows how much longer.

> But the problem of V4 address exhaustion is NOT the same thing as "I 
> don't like the solution that they chose."

The problem of V4 address exhaustion is NOT the same thing as turn on
IPv6 and wait for the rest of the world to do the same.

When considered in that manner the IETF's bet looks even worse.

What I don't like is that they were wrong. What I dislike even more is that they 
refuse to admit it and learn from their mistakes.

Joe

> On Wed, Mar 30, 2022 at 12:18 PM Joe Maimon  > wrote:
>
>
>
> Owen DeLong via NANOG wrote:
>
> >
> > Well… It’s a consensus process. If your idea isn’t getting
> consensus,
> > then perhaps it’s simply that the group you are seeking
> consensus from
> > doesn’t like your idea.
>

Consensus processes are vulnerable to tyranny of a well positioned minority.

Joe


RE: CGNAT scaling cost (was Re: V6 still not supported)

2022-03-31 Thread Vasilenko Eduard via NANOG
No.
I have already forgotten that SDH even existed (and yes, I remember X.25 - I have 
operated an X.25 network).
I was talking in the next message about 100GE.
In fact, the situation would be similar for 10GE too.

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Masataka Ohta
Sent: Thursday, March 31, 2022 3:56 AM
To: nanog@nanog.org
Subject: Re: CGNAT scaling cost (was Re: V6 still not supported)

Vasilenko Eduard via NANOG wrote:

> CGNAT cost was very close to 3x compared to routers of the same 
> performance.

That should be because you are comparing the cost of carrier-grade (that is, 
telco-grade) NAT with consumer-grade routers.

Remember the cost of carrier grade datalink of SONET/SDH.

Masataka Ohta


RE: RE: CGNAT scaling cost (was V6 still not supported)

2022-03-30 Thread Vasilenko Eduard via NANOG
Hi Jared,
I did mean big systems where performance needed is n*100Gbps or bigger.
For router or CGNAT: the chassis cost is less than 1 card. Hence, all cost is 
in ports (for the big router up to 95% if counting QSFP too). Chassis, power 
supplies, switching fabrics - could be discarded for a big system cost 
estimation.
You could think of it as comparing the average cost of a 100GE port on the 
router and on the CGNAT, at wire speed for a reasonable average packet size 
(750B?) and a typical 6:1 upstream/downstream profile.

Scaling router and especially CGNAT (that is very often big because 
centralized) means: adding cards to empty slots.
Where all cost is in the ports only.
It is a little more complex for CGNAT because input/output ports are separate 
from processing cards. But let's assume that the proper mix is inserted.

Of course, if you would use router card ports by 50% (or install not all 
processing cards for CGNAT) then the cost may vary.
But let's assume almost full utilization for comparable results. It would be 
the case for the big populated system anyway.

Hence, yes, it is almost linear for big systems.
But if you start from just 1 card (not possible for a big system?) then the 
per-port cost starts from roughly 2x (+ common components).
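In other words (the numbers are purely illustrative):

    # Per-port cost of a chassis system as line cards are added (illustrative numbers).
    chassis = 80_000          # common parts: chassis, fabrics, power, routing engines
    card = 100_000            # one line card
    ports_per_card = 24       # 100GE ports per card

    for cards in (1, 2, 4, 8, 12):
        per_port = (chassis + cards * card) / (cards * ports_per_card)
        print(f"{cards:2} card(s): ~${per_port:,.0f} per 100GE port")

The common components get amortised away as the slots fill, which is why the scaling looks almost linear once the system is well populated.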

Eduard
-Original Message-
From: Jared Brown [mailto:nanog-...@mail.com] 
Sent: Wednesday, March 30, 2022 8:17 PM
To: Vasilenko Eduard 
Cc: nanog@nanog.org
Subject: Re: RE: CGNAT scaling cost (was V6 still not supported)

Hi Eduard,

Do I interpret your findings correctly, if this means that CGNAT costs scale 
more or less linearly with traffic growth over time?

And as a corollary, that the cost of scaling CGNAT in itself isn't likely a 
primary driver for IPv6 adoption?


- Jared


Vasilenko Eduard wrote:
>
> CGNAT cost was very close to 3x compared to routers of the same performance.
> Hence, 1 hop through CGNAT = 3 hops through routers.
> 3 router hops maybe the 50% of overall hops in the particular Carrier (or 
> even less).
>
> DWDM is 3x more expensive per hop. Fiber is much more expensive (greatly 
> varies per situation and distance).
> Hence, +50% for IP does not mean +50% for the whole infrastructure, not at 
> all.
>
> I was on all primary vendors for 2.5 decades. 3x cost of NAT was consistent 
> for all vendors and at all times.
> Because it is a "Network processor" (really flexible one with a big memory) 
> against "specialized ASIC". COTS (x86) is much worse for the big scale - does 
> not make sense to compare.
> It has started to decrease recently when SFPs have become the bigger part of 
> the router (up to 50% for single-mode).
> Hence, I expect the decrease of the difference between router and CGNAT cost 
> to 2x long-term.
> Optical vendors are more capable to protect their margins.
>
> It is a different situation in Mobile Carriers, where Packet Core and Gi-LAN 
> were never accelerated in hardware.
> Everything else is so expensive (x86) per Gbps, that CGNAT is not visible in 
> the cost.
>
> Eduard
> -Original Message-
> From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
> Behalf Of Jared Brown
> Sent: Wednesday, March 30, 2022 6:33 PM
> To: nanog@nanog.org
> Subject: CGNAT scaling cost (was Re: V6 still not supported)
>
> An oft-cited driver of IPv6 adoption is the cost of scaling CGNAT or 
> equivalent infrastructure for IPv4.
>
> Those of you facing costs for scaling CGNAT, are your per unit costs rising 
> or declining faster or slower than your IPv4 traffic growth?
>
> I ask because I realize I am not fit to evaluate the issue on a general 
> level, as, most probably due to our insignificant scale, our CGNAT marginal 
> costs are zero. This is mainly because our CGNAT solution is oversized to our 
> needs. Even though scaling up our currently oversized system further would 
> lower per unit costs, I understand this may not be the case outside our 
> bubble.
>
>
> - Jared
>


RE: CGNAT scaling cost (was Re: V6 still not supported)

2022-03-30 Thread Vasilenko Eduard via NANOG
CGNAT cost was very close to 3x compared to routers of the same performance.
Hence, 1 hop through CGNAT = 3 hops through routers.
3 router hops may be 50% of the overall hops in a particular Carrier (or even 
less).

DWDM is 3x more expensive per hop. Fiber is much more expensive (greatly varies 
per situation and distance).
Hence, +50% for IP does not mean +50% for the whole infrastructure, not at all.
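As a worked example of that claim (the hop count and the IP share of total cost are assumptions for illustration):

    # If one CGNAT hop costs about as much as 3 router hops, what does it add?
    router_hops = 6          # assumed IP hops across the carrier
    cgnat_equiv = 3          # one CGNAT ~= 3 router hops in cost
    ip_share = 0.4           # assumed share of IP gear in total infrastructure cost
                             # (the rest: DWDM, fiber, civil works, ...)

    ip_increase = cgnat_equiv / router_hops        # +50% on the IP layer
    total_increase = ip_increase * ip_share        # +20% on the whole infrastructure
    print(f"IP layer: +{ip_increase:.0%}, whole infrastructure: +{total_increase:.0%}")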

I have been with all the primary vendors for 2.5 decades. The 3x cost of NAT was 
consistent across all vendors and at all times, because it is a "network 
processor" (a really flexible one with big memory) against a "specialized ASIC". 
COTS (x86) is much worse at big scale - it does not make sense to compare.
The gap has started to decrease recently as SFPs have become the bigger part of 
the router cost (up to 50% for single-mode).
Hence, I expect the difference between router and CGNAT cost to decrease to 2x 
long-term.
Optical vendors are more capable of protecting their margins.

It is a different situation in Mobile Carriers, where Packet Core and Gi-LAN 
were never accelerated in hardware.
Everything else is so expensive (x86) per Gbps, that CGNAT is not visible in 
the cost.

Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Jared Brown
Sent: Wednesday, March 30, 2022 6:33 PM
To: nanog@nanog.org
Subject: CGNAT scaling cost (was Re: V6 still not supported)

An oft-cited driver of IPv6 adoption is the cost of scaling CGNAT or equivalent 
infrastructure for IPv4.

Those of you facing costs for scaling CGNAT, are your per unit costs rising or 
declining faster or slower than your IPv4 traffic growth?

I ask because I realize I am not fit to evaluate the issue on a general level, 
as, most probably due to our insignificant scale, our CGNAT marginal costs are 
zero. This is mainly because our CGNAT solution is oversized to our needs. Even 
though scaling up our currently oversized system further would lower per unit 
costs, I understand this may not be the case outside our bubble.


- Jared


RE: MAP-T (was: Re: V6 still not supported)

2022-03-25 Thread Vasilenko Eduard via NANOG
The best MAP discussion (really rich in detail) is from Richard Patterson.
Sky has implemented greenfield FBB in Italy.
He has given many presentations in different places. This one should be watched 
from 00:37 to 1:09: 
https://www.ripe.net/participate/meetings/open-house/ripe-ncc-open-house-ipv6-only-networks

The absence of logs is a really big advantage.
But where does one get a big enough IPv4 address space for MAP?

464XLAT would win anyway because it is the only IPv4aaS translation available 
on a smartphone.

Eduard
-Original Message-
From: Rajiv Asati (rajiva) [mailto:raj...@cisco.com] 
Sent: Saturday, March 26, 2022 12:44 AM
To: Vasilenko Eduard ; Jared Brown 
; nanog@nanog.org
Subject: Re: MAP-T (was: Re: V6 still not supported)

FWIW, MAP has been deployed by a few operators (on at least 3 continents that I
am aware of).

Charter Communications is one of the public references (see the NANOG preso
https://www.youtube.com/watch?v=ZmfYHCpfr_w).

MAP (CPE function) has been supported in OpenWrt software (as well as in many
CPE vendor implementations) for a few years now; MAP (BR function) has been
supported by a few vendors, including Cisco (in IOS-XE and XR).

Cheers,
Rajiv 

https://openwrt.org/packages/pkgdata_owrt18_6/map-t
https://openwrt.org/docs/guide-user/network/map

 

-Original Message-
From: NANOG  on behalf of Vasilenko 
Eduard via NANOG 
Reply-To: Vasilenko Eduard 
Date: Friday, March 25, 2022 at 11:17 AM
To: Jared Brown , "nanog@nanog.org" 
Subject: RE: MAP-T (was: Re: V6 still not supported)

Hi Jared,
Theoretically, MAP is better. But:

1. Nobody has implemented it natively for routers.
The code runs on the same CGNAT engine and gives the same cost/performance, so
the promised advantage of a potentially stateless protocol never materializes.

2. MAP needs a much bigger address space (which not everybody has), because:
a) powered-off subscribers consume their port blocks anyway;
b) it is not possible to add another 64 ports "on the fly" to a particular
subscriber who abuses some Apple application (and climbs to ~1k ports in use),
which can push the requirement far above any reasonable per-subscriber port
limit.
The design therefore has to reserve a large enough block of UDP/TCP ports for
every subscriber, even the most silent/conservative one.

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Jared Brown
Sent: Friday, March 25, 2022 4:49 PM
To: nanog@nanog.org
Subject: MAP-T (was: Re: V6 still not supported)

Most IPv6 transition mechanisms involve some form of (CG)NAT. After 
watching a NANOG presentation on MAP-T, I have a question regarding this.

Why isn't MAP-T more prevalent, given that it is (almost) stateless on the 
provider side?

Is it CPE support, the headache of moving state to the CPE, vendor support, 
or something else?


NANOG 2017
Mapping of Address and Port using Translation MAP T: Deployment at Charter 
Communications https://www.youtube.com/watch?v=ZmfYHCpfr_w


- Jared



RE: MAP-T (was: Re: V6 still not supported)

2022-03-25 Thread Vasilenko Eduard via NANOG
Hi Jared,
Theoretically, MAP is better. But:

1. Nobody has implemented it natively for routers.
The code runs on the same CGNAT engine and gives the same cost/performance, so
the promised advantage of a potentially stateless protocol never materializes.

2. MAP needs a much bigger address space (which not everybody has), because:
a) powered-off subscribers consume their port blocks anyway;
b) it is not possible to add another 64 ports "on the fly" to a particular
subscriber who abuses some Apple application (and climbs to ~1k ports in use),
which can push the requirement far above any reasonable per-subscriber port
limit.
The design therefore has to reserve a large enough block of UDP/TCP ports for
every subscriber, even the most silent/conservative one.
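To make point 2 concrete, here is a rough sizing sketch; every number in it is
an illustrative assumption, not an operator's real figure:

    import math

    # With static per-subscriber port blocks, every provisioned subscriber
    # consumes a block whether or not it is online, and the block must be big
    # enough for the heaviest port consumers.
    subscribers          = 200_000          # provisioned, not merely online
    ports_per_subscriber = 2_048            # generous block to survive port-hungry apps
    usable_ports_per_ip  = 65_536 - 1_024   # skip the well-known ports

    subs_per_public_ip = usable_ports_per_ip // ports_per_subscriber    # 31
    public_ips_needed  = math.ceil(subscribers / subs_per_public_ip)    # 6452
    shortest_prefix    = 32 - math.ceil(math.log2(public_ips_needed))   # /19

    print(f"{subs_per_public_ip} subscribers share each public IPv4 address")
    print(f"{public_ips_needed} public addresses needed - roughly a /{shortest_prefix}")

Sizing on provisioned subscribers rather than on concurrent sessions is what
drives the "much bigger address space" requirement compared to a stateful
CGNAT.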

Ed/
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Jared Brown
Sent: Friday, March 25, 2022 4:49 PM
To: nanog@nanog.org
Subject: MAP-T (was: Re: V6 still not supported)

Most IPv6 transition mechanisms involve some form of (CG)NAT. After watching a 
NANOG presentation on MAP-T, I have a question regarding this.

Why isn't MAP-T more prevalent, given that it is (almost) stateless on the 
provider side?

Is it CPE support, the headache of moving state to the CPE, vendor support, or 
something else?


NANOG 2017
Mapping of Address and Port using Translation MAP T: Deployment at Charter 
Communications https://www.youtube.com/watch?v=ZmfYHCpfr_w


- Jared


RE: V6 still not supported

2022-03-24 Thread Vasilenko Eduard via NANOG
Hi all,
From 10,000 meters, IPv6 differs from IPv4 only in:
- extension headers
- SLAAC instead of DHCP
Everything else is minor.

Enterprises could easily ignore EHs.
Carriers could test and support EHs within closed domains.
I do not see a problem with EHs.

Hence, the primary entity blocking IPv6 adoption is Google: they do not
support DHCPv6 on the most popular OS.
Whatever else the community develops could be blocked by some monopoly just as
easily.
There is no point in turning IPv6 into some reduced "IPv6-".
The usual market pressure on such a company does not apply here, because:
1) Google is too big and powerful;
2) Enterprises do not understand why they need IPv6, so they do not want to
spend cycles applying that pressure.
Deadlock.

Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mark Delany
Sent: Thursday, March 24, 2022 11:35 AM
To: nanog@nanog.org
Subject: Re: V6 still not supported

On 23Mar22, Owen DeLong via NANOG allegedly wrote:

> I would not say that IPv6 has been and continues to be a failure

Even if one might ask that question, what are the realistic alternatives?

1. Drop ipv6 and replace it with ipv4++ or ipv6-lite or whatever other protocol 
that
   magically creates a better and quicker transition?

2. Drop ipv6 and extend above the network layer for the foreseeable future? By extend I
extend I
   mean things which only introduce ipv4-compatible changes: NATs, TURN, CDN at 
the edge,
   application overlays and other higher layer solutions.

3. Live with ipv6 and continue to engineer simpler, better, easier and 
no-brainer
   deployments?

I'll admit it risks being a "sunk cost fallacy" argument to perpetuate ipv6, but
are the alternatives so clear that we're really ready to drop ipv6?


> so much as IPv6 has not yet achieved its goal.

As someone previously mentioned, "legacy" support can have an extremely long 
tail which might superficially make "achieving a goal" look like a failure.

Forget SS7 and SIP, what about 100LL vs unleaded petrol or 1/2" bolts vs 13mm
bolts? Both must be 50 years in the making with many more years to come. The 
glacial grind of displacing legacy tech is hardly unique to network protocols.

In the grand scheme of things, the goal of replacing ipv4 with ipv6 has really 
only had a relatively short life-time compared to many other tech transitions. 
Perhaps it's time to adopt the patience of the physical world?


Mark.


RE: V6 still not supported

2022-03-21 Thread Vasilenko Eduard via NANOG
Hi all,
Hierarchical addressing in which a small zone uses a smaller address size while
a bigger zone uses a bigger address size does not make much sense.
Indeed, it is possible to expand the source address from 32 bits to something
bigger when a packet leaves the small zone (and to shrink it again when a packet
travels in the reverse direction).
But it is not possible to do the same for the destination address: it has to be
long enough (more than 32 bits) at the source host already, in order to point to
another host far away.
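A small sketch of that asymmetry; the zone prefix and the addresses are
invented purely for illustration:

    # A border device can expand a short source address on egress (prepend the
    # zone prefix on the way out, strip it on the way back in)...
    ZONE_PREFIX = bytes([43, 23, 0, 0])          # an invented zone identifier

    def expand_source(src4: bytes) -> bytes:
        return ZONE_PREFIX + src4                # 4 bytes -> 8 bytes, done at the border

    # ...but the destination must be carried at full length by the host itself,
    # end to end. A legacy host whose packet format only has a 32-bit
    # destination field simply cannot express it:
    def legacy_dst_field(dst: bytes) -> bytes:
        if len(dst) != 4:
            raise ValueError("a 32-bit destination field cannot hold this address")
        return dst

    print(expand_source(bytes([192, 168, 0, 1])).hex())   # fine: the border did the work
    try:
        legacy_dst_field(bytes(8))                        # the host itself must change
    except ValueError as err:
        print(err)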

Hence, the assumption below - that there could be smooth interoperability
between smaller and bigger address spaces - is optimistic.
It is just as disruptive as the introduction of IPv6.
Eduard
Eduard
-Original Message-
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei@nanog.org] On 
Behalf Of Mark Delany
Sent: Sunday, March 20, 2022 7:25 AM
To: nanog@nanog.org
Subject: Re: V6 still not supported

On 19Mar22, Matt Hoppes allegedly wrote:

> So, while it's true that a 192.168.0.1 computer couldn't connect to a
> 43.23.0.0.12.168.0.1 computer, without a software patch - that patch 
> would be very simple and quick to deploy

Let's call this ipv4++

Question: How does 192.168.0.1 learn about 43.23.0.0.12.168.0.1? Is that a DNS 
lookup?

How does the DNS support ipv4++ addresses? Is that some extension to the A RR? 
It had better be an extension that doesn't break the packet validation rules embedded in
most DNS libraries and middleware. You give 'em an A RData longer than 32 bits 
and they're going to drop it with prejudice. Perhaps you should invent a new 
ipv4++ address RR to avoid some of these issues?

In either case, how does every program on my ipv4 computer deal with these new 
addresses that come back from a DNS lookup? Do you intend to modify every DNS 
library to hide these extensions from older programs? How do you do that 
exactly? What about my home-grown DNS library? Who patches that?

Here's a code fragment from my ipv4-only web browser:

   uint32 ip
   ip = dnslookup("www.rivervalleyinternet.net", TypeA)
   socket = connect(ip)

What does 'ip' contain if www.rivervalleyinternet.net is ipv4++ compliant and 
advertises 43.23.0.0.199.34.228.100? Do these magical concentrators sniff out 
DNS queries and do some form of translation? How does that work with DoH and 
DoT?
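To illustrate the "drop it with prejudice" point: the A record is defined with
exactly four octets of RDATA, and a typical parser checks that length before
accepting the record. A minimal sketch of such a check (not any particular
library's code):

    # An A record (RFC 1035, TYPE 1) carries exactly 4 octets of RDATA; most
    # DNS libraries and middleboxes enforce this, so a "bigger A record" is
    # simply discarded.
    def parse_a_rdata(rdlength: int, rdata: bytes) -> str:
        if rdlength != 4 or len(rdata) != 4:
            raise ValueError("malformed A record: RDLENGTH must be 4")
        return ".".join(str(b) for b in rdata)

    print(parse_a_rdata(4, bytes([199, 34, 228, 100])))   # "199.34.228.100"
    try:
        parse_a_rdata(8, bytes(8))                        # an "extended" A record
    except ValueError as err:
        print(err)                                        # rejected, as deployed code does today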

Or are you suggesting that www.rivervalleyinternet.net continues to advertise 
and listen on both 43.23.0.0.199.34.228.100 *and* good ol' 199.34.228.100 until 
virtually every client and network on the planet transitions to ipv4++? In 
short, the transition plan is to have www.rivervalleyinternet.net run 
dual-stacked for many years to come. Yes?

Speaking of DNS lookups. If my ipv4++ DNS server is on the same LAN as my 
laptop, how do I talk to it? You can't ARP for ipv4++ addresses, so you'll have 
to invent a new ARP protocol type or a new LAN protocol. Is that in your patch 
too? Make sure the patch applies to network devices running proxy ARP as well, 
ok?
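The ARP point in miniature: the ARP header carries an explicit protocol-address
length, and deployed IPv4-over-Ethernet stacks expect PLEN=4, so a longer
protocol address means a new packet format that existing implementations will
simply ignore. A sketch of the classic request, with made-up addresses:

    import struct

    # Classic ARP request for IPv4 over Ethernet (RFC 826):
    # HTYPE=1, PTYPE=0x0800, HLEN=6, PLEN=4, OPER=1.
    def arp_request(sha: bytes, spa: bytes, tpa: bytes) -> bytes:
        if len(spa) != 4 or len(tpa) != 4:
            # Anything longer needs a new PTYPE/PLEN no deployed stack parses today.
            raise ValueError("existing ARP expects 4-byte protocol addresses")
        return (struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
                + sha + spa + b"\x00" * 6 + tpa)

    pkt = arp_request(b"\xaa" * 6, bytes([192, 168, 0, 1]), bytes([192, 168, 0, 254]))
    print(len(pkt))   # 28-byte ARP payload, exactly as on the wire today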

If I connect to an ipv4++ network, how do I acquire my ipv4++ address?  If it's 
DHCP, doesn't that require an extension to the DHCP protocol to support the 
larger ipv4++ addresses?  So DHCP protocol extensions and changes to all DHCP 
servers and clients are part of the patch too, right? Or perhaps you plan to 
invent a new DHCP packet which better accommodates all of the ipv4++ addresses
that can get returned? Still plenty of code changes.
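The DHCP point in miniature: the BOOTP/DHCP fixed header allocates exactly four
octets each for ciaddr, yiaddr, siaddr and giaddr, so offering a longer lease is
a wire-format change, not just a new option. A sketch of packing those fixed
fields (RFC 2131 layout; the lease values are invented):

    import struct

    # The offered address ("yiaddr") is a fixed 4-octet field, as are ciaddr,
    # siaddr and giaddr, in every DHCP message.
    def dhcp_offer_fixed_header(xid: int, yiaddr: bytes, chaddr: bytes) -> bytes:
        if len(yiaddr) != 4:
            raise ValueError("yiaddr is a fixed 4-octet field; anything longer is a new protocol")
        return struct.pack(
            "!BBBBIHH4s4s4s4s16s64s128s",
            2, 1, 6, 0,        # op=BOOTREPLY, htype=Ethernet, hlen=6, hops=0
            xid, 0, 0,         # transaction id, secs, flags
            b"\x00" * 4,       # ciaddr
            yiaddr,            # the offered lease - only room for 32 bits
            b"\x00" * 4,       # siaddr
            b"\x00" * 4,       # giaddr
            chaddr,            # padded to 16 bytes by struct
            b"",               # sname (zero-filled)
            b"",               # file (zero-filled)
        )

    hdr = dhcp_offer_fixed_header(0x1234, bytes([192, 168, 0, 50]), b"\xaa" * 6)
    print(len(hdr))   # 236 bytes of fixed header, before the options field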

And how do I even do DHCP? Does ipv4++ support broadcast in the same way as 
ipv4? What of DHCP relays? They will need to be upgraded I presume.


So let's say we've solved all the issues with getting on a network, talking 
over a LAN, acquiring an ipv4++ address, finding our ipv4++ capable router and 
resolving ipv4++ addresses. My application is ready to send an ipv4++ packet to 
a remote destination.

But what does an ipv4++ packet look like on the wire? Is it an ipv4 packet with 
bigger address fields?  An ipv4 packet with an extension? Or do you propose 
inventing a new IP type? Do these packets pass thru ipv4-only routers untainted 
or must they be "concentrated" beforehand?  Won't all the firewalls and router 
vendors need to change their products to allow such packets to pass? Normally 
oddball ipv4 packets are dropped of course. As we know, vendor changes can 
notoriously take decades; just ask the ipv6 crowd.
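On the wire-format question, it helps to remember that the existing IPv4 header
has exactly 32 bits of room for each address, and that routers and firewalls
check the version nibble and header length, which is why oddball packets get
dropped. A sketch of those fixed fields (RFC 791 layout; the addresses are the
examples used in this thread):

    import struct

    # Minimal IPv4 header: the very first nibble is the version, and the source
    # and destination fields are hard-coded at 32 bits each.
    def ipv4_header(src: bytes, dst: bytes, payload_len: int) -> bytes:
        if len(src) != 4 or len(dst) != 4:
            raise ValueError("the IPv4 header has room for exactly 32-bit addresses")
        ver_ihl = (4 << 4) | 5                     # version 4, 20-byte header
        return struct.pack("!BBHHHBBH4s4s",
                           ver_ihl, 0,             # version/IHL, TOS
                           20 + payload_len,       # total length
                           0, 0,                   # identification, flags/fragment offset
                           64, 6, 0,               # TTL, protocol=TCP, checksum (omitted here)
                           src, dst)

    hdr = ipv4_header(bytes([192, 168, 0, 1]), bytes([199, 34, 228, 100]), 0)
    print(len(hdr))   # 20 bytes; anything that doesn't look like this gets dropped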

Ok. We've upgraded all our infrastructure to route ipv4++ packets. The packet
has reached the edge of our network. But how does the edge know that the next
hop (our ISP) supports ipv4++ packets? Do routers have to exchange ipv4++
capabilities with each other? How do they do that?

For that matter, how does my application know that the whole path thru to the 
destination supports ipv4++? It only needs one transition across an ipv4-only 
network somewhere in the path and the packet will be dropped. I think you're 
going to have to advertise ipv4++ reachability on a per-network basis. Perhaps