Re: 100GbE beyond 40km

2021-09-25 Thread Lady Benjamin Cannon
My guess is that he was talking about the difference between a 100gbit/sec 
stream of Ethernet frames with no error correction, and a 112gbit/sec (or so, 
depending on scheme) stream of transport with FEC (Forward Error Correction - 
which is essentially just cramming extra bits in there in case they are needed).

Ethernet has to re-transmit instead, and that can cause performance degradation 
and jitter, until it just quits working altogether. Systems implementing FEC 
are much more resilient.
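For illustration, a minimal toy sketch (Python) of the idea: redundant bits let the
receiver repair damage without asking for a retransmit. This is a single-parity
erasure code invented for the example; real 100G links use far stronger Reed-Solomon
FEC (per IEEE 802.3), so treat this as a sketch of the concept, not the mechanism.

# Toy illustration only: one XOR parity word lets a known-bad word be rebuilt.
from functools import reduce
from operator import xor

def add_parity(block):
    """Append one XOR parity word to a block of data words."""
    return block + [reduce(xor, block, 0)]

def repair(block, bad_index):
    """Recompute the word at bad_index from all the other words."""
    fixed = list(block)
    fixed[bad_index] = reduce(xor, (w for i, w in enumerate(block) if i != bad_index), 0)
    return fixed

data = [0x1F, 0x2A, 0x33, 0x44]
coded = add_parity(data)        # the extra word is the "extra bits" carried just in case
coded[2] = 0xEE                 # pretend word 2 arrives damaged (and we know which one)
assert repair(coded, 2)[:-1] == data   # payload recovered with no retransmission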

(This is a guess, there’s a chance something else was meant by this)

-LB.

> On Sep 25, 2021, at 1:55 AM, Etienne-Victor Depasquale via NANOG wrote:
> 
> Bear with my ignorance, I'm genuinely surprised at this:
> 
> Does this have to be Ethernet? You could look into line gear with coherent 
> optics.
> 
> Specifically, do you mean something like: "does this have to be 
> IEEE-standardized all the way down to L1 optics?" Because you can transmit 
> Ethernet frames over line gear with coherent optics, right ?
> 
> Please don't flame me, I'm just ignorant and willing to learn.
> 
> Cheers,
> 
> Etienne
> 
> On Fri, Sep 24, 2021 at 11:25 PM Bill Blackford wrote:
> Does this have to be Ethernet? You could look into line gear with coherent 
> optics. IIRC, they have built-in chromatic dispersion compensation, and 
> depending on the card, would include amplification.
> 
> On Fri, Sep 24, 2021 at 1:40 PM Randy Carpenter wrote:
> 
> How is everyone accomplishing 100GbE at farther than 40km distances?
> 
> Juniper is saying it can't be done with anything they offer, except for a 
> single CFP-based line card that is EOL.
> 
> There are QSFP "ZR" modules from third parties, but I am hesitant to try 
> those without there being an equivalent official part.
> 
> 
> The application is an ISP upgrading from Nx10G, where one of their fiber 
> paths is ~35km and the other is ~60km.
> 
> 
> 
> thanks,
> -Randy
> 
> 
> -- 
> Bill Blackford
> 
> Logged into reality and abusing my sudo privileges.
> 
> 
> -- 
> Ing. Etienne-Victor Depasquale
> Assistant Lecturer
> Department of Communications & Computer Engineering
> Faculty of Information & Communication Technology
> University of Malta
> Web. https://www.um.edu.mt/profile/etiennedepasquale 
> 



Re: IPv6 woes - RFC

2021-09-25 Thread Chris Adams
Once upon a time, Andy Smith  said:
> On Sat, Sep 25, 2021 at 08:44:00PM -0400, Valdis Klētnieks wrote:
> > 19:17:38 0 [~] ping 2130706433
> 
> "ping 01770001" and "ping 0x7F01" also fun ones :)

More than once, I've had to explain why zero-filling octets, like
127.000.000.001 (which still works) or 008.008.008.008 (which does not),
is broken.
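As a hedged sketch of why: classic inet_aton()-style parsers treat a leading zero as
octal (and 0x as hex), so 000/001 parse fine but 008 is not a valid octal number. The
following is an illustrative reimplementation of that rule in Python, not any
particular libc's actual code.

def parse_octet(piece):
    # decimal, octal (leading 0) or hex (leading 0x), like the classic parser
    if piece.startswith(("0x", "0X")):
        return int(piece, 16)
    if piece.startswith("0") and len(piece) > 1:
        return int(piece, 8)          # raises ValueError on "008"
    return int(piece, 10)

def parse_ipv4(text):
    octets = [parse_octet(p) for p in text.split(".")]
    if len(octets) != 4 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError(text)
    return ".".join(str(o) for o in octets)

print(parse_ipv4("127.000.000.001"))   # -> 127.0.0.1
try:
    parse_ipv4("008.008.008.008")
except ValueError as e:
    print("rejected:", e)              # 8 is not an octal digit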

-- 
Chris Adams 


Re: IPv6 woes - RFC

2021-09-25 Thread Andy Smith
Hello,

On Sat, Sep 25, 2021 at 08:44:00PM -0400, Valdis Klētnieks wrote:
> 19:17:38 0 [~] ping 2130706433

"ping 01770001" and "ping 0x7F01" also fun ones :)

Cheers,
Andy


Re: IPv6 woes - RFC

2021-09-25 Thread James R Cutler
On Sep 25, 2021, at 8:44 PM, Valdis Klētnieks  wrote:
> 
> On Sat, 25 Sep 2021 23:20:26 +0200, Baldur Norddahl said:
> 
>> We should remember there are also multiple ways to print IPv4 addresses.
>> You can zero extend the addresses and on some ancient systems you could
>> also use the integer value.
> 
> 19:17:38 0 [~] ping 2130706433
> PING 2130706433 (127.0.0.1) 56(84) bytes of data.
> 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
> 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.075 ms
> 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.063 ms
> 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.082 ms
> ^C
> --- 2130706433 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 84ms
> rtt min/avg/max/mdev = 0.063/0.086/0.126/0.025 ms
> 
> Works on Fedora Rawhide based on RedHat, Debian 10, and Android 9.
> 
> That's a bit more than just 'some ancient systems' - depending whether
> it works on other Android releases, and what IoT systems do, we may have
> more systems today that support it than don't support it.

It also works on this 'ancient' macOS Monterey system.

Last login: Sat Sep 25 20:50:00 on ttys000
xz4gb8 ~ % ping 2130706433
PING 2130706433 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.047 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.109 ms
^C
--- 2130706433 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.047/0.092/0.111/0.026 ms
xz4gb8 ~ % 



Re: IPv6 woes - RFC

2021-09-25 Thread Valdis Klētnieks
On Sat, 25 Sep 2021 23:20:26 +0200, Baldur Norddahl said:

> We should remember there are also multiple ways to print IPv4 addresses.
> You can zero extend the addresses and on some ancient systems you could
> also use the integer value.

19:17:38 0 [~] ping 2130706433
PING 2130706433 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.082 ms
^C
--- 2130706433 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 84ms
rtt min/avg/max/mdev = 0.063/0.086/0.126/0.025 ms

Works on Fedora Rawhide based on RedHat, Debian 10, and Android 9.

That's a bit more than just 'some ancient systems' - depending whether
it works on other Android releases, and what IoT systems do, we may have
more systems today that support it than don't support it.
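For anyone curious, the integer form is just the address as one 32-bit number
(127*2**24 + 1 = 2130706433). A quick sketch with Python's ipaddress module, used
here purely for illustration; whether "ping <integer>" works on a given OS is up to
its resolver/inet_aton(), as the results above show.

import ipaddress

print(int(ipaddress.IPv4Address("127.0.0.1")))   # 2130706433
print(ipaddress.IPv4Address(2130706433))         # 127.0.0.1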


Re: Rack rails on network equipment

2021-09-25 Thread Joe Greco
On Sat, Sep 25, 2021 at 04:23:38PM -0700, Jay Hennigan wrote:
> On 9/25/21 16:14, George Herbert wrote:
> >(Crying, thinking about racks and racks and racks of AT&T 56k modems 
> >strapped to shelves above PM-2E-30s…)
> 
> And all of their wall-warts [...]

You were doing it wrong, then.  :-)

ExecPC had this down to a science, and had used a large transformer
to power a busbar along the back of two 60-slot literature organizers,
with 4x PM2E30's on top, a modem in each slot, and they snipped off
the wall warts, using the supplied cable for power.  A vertical board
was added over the top so that the rears of the PM2s were exposed, and
the board provided a mounting point for an ethernet hub and three Amp
RJ21 breakouts.  This gave you a modem "pod" that held 120 USR Courier
56K modems, neatly cabled and easily serviced.  The only thing coming
to each of these racks was 3x AMP RJ21, 1x power, and 1x ethernet.

They had ten of these handling their 1200 (one thousand two hundred!)
modems before it got unmanageable, and part of that was that US Robotics
offered a deal that allowed them to be a testing site for Total Control.

At which point they promptly had a guy solder all the wall warts back on
to the power leads and proceeded to sell them at a good percentage of
original price to new Internet users.

The other problem was that they were getting near two full DS3's worth
of analog lines being delivered this way, and it was taking up a TON of
space.  A full "pod" could be reduced to 3x USR TC's, so two whole pods
could be replaced with a single rack of gear.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: Rack rails on network equipment

2021-09-25 Thread Jay Hennigan

On 9/25/21 16:14, George Herbert wrote:

(Crying, thinking about racks and racks and racks of AT&T 56k modems strapped 
to shelves above PM-2E-30s…)


And all of their wall-warts and serial cables


The early 90s were a dangerous place, man.


Yes, but the good news is that shortly thereafter you got to replace 
all of that gear with Ascend TNT space heaters which did double 
duty as modem banks.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-25 Thread George Herbert
(Crying, thinking about racks and racks and racks of AT&T 56k modems strapped 
to shelves above PM-2E-30s…)

The early 90s were a dangerous place, man.

-George 

Sent from my iPhone

> On Sep 24, 2021, at 8:05 PM, Wayne Bouchard  wrote:
> 
> Didn't require any additional time at all when equipment wasn't bulky
> enough to need rails in the first place
> 
> 
> I've never been happy about that change.
> 
> 
>> On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
>> Hi folks,
>> Happy Friday!
>> 
>> Would you, please, share your thoughts on the following matter?
>> 
>> Back some 5 years ago we pulled the trigger and started phasing out Cisco
>> and Juniper switching products out of our data centers (reasons for that
>> are not quite relevant to the topic). We selected Dell switches in part due
>> to Dell using "quick rails'' (sometimes known as speed rails or toolless
>> rails).  This is where both the switch side rail and the rack side rail
>> just snap in, thus not requiring a screwdriver and hands of the size no
>> bigger than a hamster paw to hold those stupid proprietary screws (lookin
>> at your, cisco) to attach those rails.
>> We went from taking 16hrs to build a row of compute (from just network
>> equipment racking pov) to maybe 1hr... (we estimated that on average it
>> took us 30 min to rack a switch from cut open the box with Juniper switches
>> to 5 min with Dell switches)
>> Interesting tidbit is that we actually used to manufacture custom rails for
>> our Juniper EX4500 switches so the switch can be actually inserted from the
>> back of the rack (you know, where most of your server ports are...) and not
>> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
>> didn't work at all for us unless we used wider racks, which then, in turn,
>> reduced floor capacity.
>> 
>> As far as I know, Dell is the only switch vendor doing toolless rails so
>> it's a bit of a hardware lock-in from that point of view.
>> 
>> *So ultimately my question to you all is how much do you care about the
>> speed of racking and unracking equipment and do you tell your suppliers
>> that you care? How much does the time it takes to install or replace a
>> switch impact you?*
>> 
>> I was having a conversation with a vendor and was pushing hard on the fact
>> that their switches will end up being actually costlier for me long term
>> just because my switch replacement time quadruples at least, thus requiring
>> me to staff more remote hands. Am I overthinking this and artificially
>> limiting myself by excluding vendors who don't ship with toolless rails
>> (which is all of them now except Dell)?
>> 
>> Thanks for your time in advance!
>> --Andrey
> 
> ---
> Wayne Bouchard
> w...@typo.org
> Network Dude
> http://www.typo.org/~web/


Re: Rack rails on network equipment

2021-09-25 Thread Shawn L via NANOG
What about things like the Cisco 4500 switch series that are almost as long as a 
1U server, yet only have mounts for a relay-type rack?

As far as boot times go, try an ASR920. Wait 15 minutes and decide if it's time to 
power cycle again or wait 5 more minutes.

Sent from my iPhone

> On Sep 25, 2021, at 5:22 PM, Michael Thomas  wrote:
> 
> 
>> On 9/25/21 2:08 PM, Jay Hennigan wrote:
>>> On 9/25/21 13:55, Baldur Norddahl wrote:
>>> 
>>> My personal itch is how new equipment seems to have even worse boot time 
>>> than previous generations. I am currently installing juniper acx710 and 
>>> while they are nice, they also make me wait 15 minutes to boot. This is a 
>>> tremendous waste of time during installation. I can not leave the site 
>>> without verification and typically I also have some tasks to do after boot.
>>> 
>>> Besides if you have a crash or power interruption, the customers are not 
>>> happy to wait additionally 15 minutes to get online again.
>> 
>> Switches in particular have a lot of ASICs that need to be loaded on boot. 
>> This takes time and they're really not optimized for speed on a process that 
>> occurs once.
> 
> It doesn't seem like it would take too many reboots to really mess with your 
> reliability numbers for uptime. And what on earth are the developers doing 
> with that kind of debug cycle time?
> 
> Mike
> 


Re: IPv6 woes - RFC

2021-09-25 Thread Owen DeLong via NANOG


> On Sep 25, 2021, at 14:20 , Baldur Norddahl  wrote:
> 
> 
> 
> On Sat, 25 Sept 2021 at 21:26, Owen DeLong via NANOG wrote:
> So the fact that:
> 
> 2001:db8:0:1::5
> 2001:db8::1:0:0:0:5
> 
> Are two different ways of representing the same address isn’t
> of any concern unless you’re making the mistake of trying to
> string wise compare them in their text-representation format.
> Both equate to the same uint128_t value.
> 
> If you adhere to RFC 5952 only the former is to be used (2001:db8:0:1::5). 
> Also strict RFC 5952 on any output will make a string compare ok because 
> there is only one way to print any address.

IIRC 5952 only specifies display; it does not control (and even if it purports 
to, depending on users to comply is silly) user input.

> We should remember there are also multiple ways to print IPv4 addresses. You 
> can zero extend the addresses and on some ancient systems you could also use 
> the integer value. 

Truth.

> You can even encounter IPv4 printed as IPv6 which is not too uncommon. Many 
> programs internally are IPv6 only and IPv4 is therefore mapped to IPv6. It 
> appears some people are forgetting this fact when proposing to drop IPv6.

Fair point.

I think that ::ffff:1.2.3.4 is fine and I doubt it confuses anyone in IPv4 land 
much.

Owen



Re: IPv6 woes - RFC

2021-09-25 Thread Baldur Norddahl
On Sat, 25 Sept 2021 at 21:26, Owen DeLong via NANOG 
wrote:

> So the fact that:
>
> 2001:db8:0:1::5
> 2001:db8::1:0:0:0:5
>
> Are two different ways of representing the same address isn’t
> of any concern unless you’re making the mistake of trying to
> string wise compare them in their text-representation format.
> Both equate to the same uint128_t value.


If you adhere to RFC 5952 only the former is to be used (2001:db8:0:1::5).
Also strict RFC 5952 on any output will make a string compare ok because
there is only one way to print any address.
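A quick, hedged illustration with Python's ipaddress module (which happens to emit
the RFC 5952 form), showing why comparison should be done on the value, not the text:

import ipaddress

a = ipaddress.IPv6Address("2001:db8:0:1::5")
b = ipaddress.IPv6Address("2001:db8::1:0:0:0:5")
print(a == b)            # True  - same 128-bit value
print(int(a) == int(b))  # True  - compare as integers, never as raw strings
print(str(b))            # 2001:db8:0:1::5  - the single RFC 5952 spelling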

We should remember there are also multiple ways to print IPv4 addresses.
You can zero extend the addresses and on some ancient systems you could
also use the integer value.

You can even encounter IPv4 printed as IPv6 which is not too uncommon. Many
programs internally are IPv6 only and IPv4 is therefore mapped to IPv6. It
appears some people are forgetting this fact when proposing to drop IPv6.
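For illustration, a small sketch of that mapped form (192.0.2.1 is just a
documentation address), again using Python's ipaddress module:

import ipaddress

mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")   # IPv4 peer as seen by an IPv6-only socket
print(mapped.ipv4_mapped)                            # 192.0.2.1
print(mapped == ipaddress.IPv6Address("::ffff:c000:201"))   # True - same value, different spelling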

Regards,

Baldur


Re: Rack rails on network equipment

2021-09-25 Thread Michael Thomas



On 9/25/21 2:08 PM, Jay Hennigan wrote:

On 9/25/21 13:55, Baldur Norddahl wrote:

My personal itch is how new equipment seems to have even worse boot 
time than previous generations. I am currently installing juniper 
acx710 and while they are nice, they also make me wait 15 minutes to 
boot. This is a tremendous waste of time during installation. I can 
not leave the site without verification and typically I also have 
some tasks to do after boot.


Besides if you have a crash or power interruption, the customers are 
not happy to wait additionally 15 minutes to get online again.


Switches in particular have a lot of ASICs that need to be loaded on 
boot. This takes time and they're really not optimized for speed on a 
process that occurs once.


It doesn't seem like it would take too many reboots to really mess with 
your reliability numbers for uptime. And what on earth are the 
developers doing with that kind of debug cycle time?


Mike



Re: Rack rails on network equipment

2021-09-25 Thread Brandon Butterworth
On Sat Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
> We are looking at Nvidia (former Mellanox) switches

If I was going to rule any out based on rails it'd be their half width
model. Craziest rails I've seen. It's actually a frame that sits inside
the rack rails so you need quite a bit of space above to angle it to
fit between the rails.

Once you have stuff above and below, the frame isn't coming out (at
least the switches just slide into it).

brandon



Re: Rack rails on network equipment

2021-09-25 Thread Owen DeLong via NANOG


> On Sep 25, 2021, at 12:48, Andrey Khomyakov wrote:

> Let me just say from the get go that no one is making toolless rails a 
> priority to the point of shutting vendors out of the evaluation process. I am 
> not quite sure why that assumption was made by at least a few folks. With 
> that said, when all things being equal or fairly equal, which they rarely 
> are, that's when the rails come in as a factor.
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
> Juniper switching products out of our data centers (reasons for that are not 
> quite relevant to the topic). We selected Dell switches in part due to Dell 
> using "quick rails'' (sometimes known as speed rails or toolless rails).  
> This is where both the switch side rail and the rack side rail just snap in, 
> thus not requiring a screwdriver and hands of the size no bigger than a 
> hamster paw to hold those stupid proprietary screws (lookin at your, cisco) 
> to attach those rails.

Perhaps from this paragraph?

Owen




Re: Rack rails on network equipment

2021-09-25 Thread Jay Hennigan

On 9/25/21 13:55, Baldur Norddahl wrote:

My personal itch is how new equipment seems to have even worse boot time 
than previous generations. I am currently installing juniper acx710 and 
while they are nice, they also make me wait 15 minutes to boot. This is 
a tremendous waste of time during installation. I can not leave the site 
without verification and typically I also have some tasks to do after boot.


Besides if you have a crash or power interruption, the customers are not 
happy to wait additionally 15 minutes to get online again.


Switches in particular have a lot of ASICs that need to be loaded on 
boot. This takes time and they're really not optimized for speed on a 
process that occurs once.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-25 Thread Baldur Norddahl
The "niceness" of equipment does factor in but it might be invisible. For
example if you like junipers cli environment, you will look at their stuff
first even if you do not have it explicitly in your requirement list.

Better rack rails will make slightly more people prefer your gear, although
it might be hard to measure exactly how much. Which is probably the
problem.

Our problem with racking switches is how vendors deliver NO rack rails and
expect us to have them hanging on just the front posts. I have a lot of
switches on rack shelves for that reason. It does not look very professional,
but neither do rack posts bent out of shape.

My personal itch is how new equipment seems to have even worse boot time
than previous generations. I am currently installing Juniper ACX710s and
while they are nice, they also make me wait 15 minutes to boot. This is a
tremendous waste of time during installation. I cannot leave the site
without verification, and typically I also have some tasks to do after boot.

Besides, if you have a crash or power interruption, the customers are not
happy to wait an additional 15 minutes to get online again.

Desktop computers used to take ages to boot until Microsoft declared that you
need to be ready in 30 seconds to be certified. And suddenly everything
could boot in 30 seconds or less. There is no good reason to waste a tech's
time by scanning the SCSI bus in a server that does not even have the
hardware.

Regards

Baldur



lør. 25. sep. 2021 21.49 skrev Andrey Khomyakov :

> Well, folks, the replies have certainly been interesting. I did get my
> answer, which seems to be "no one cares", which, in turn, explains why
> network equipment manufacturers give very little to no attention to this
> problem. A point of clarification is I'm talking about the problem in the
> context of operating a data center with cabinet racks, not a telecom closet
> with 2 post racks.
>
> Let me just say from the get go that no one is making toolless rails a
> priority to the point of shutting vendors out of the evaluation process. I
> am not quite sure why that assumption was made by at least a few folks.
> With that said, when all things being equal or fairly equal, which they
> rarely are, that's when the rails come in as a factor.
>
> We operate over 1000 switches in our data centers, and hardware failures
> that require a switch swap are common enough where the speed of swap starts
> to matter to some extent. We probably swap a switch or two a month.
> Furthermore, those switches several of you referenced, which run for 5+
> years are not the ones we use. I think you are thinking of the legacy days
> where you pay $20k plus for a top of rack switch from Cisco, and then sweat
> that switch until it dies of old age. I used to operate exactly like that
> in my earlier days. This does not work for us for a number of reasons, and
> so we don't go down that path.
>
> We use Force10 family Dell switches which are basically Broadcom TD2+/TD3
> based switches (ON4000 and ON5200 series) and we run Cumulus Linux on
> those, so swapping hardware without swapping the operating system for us is
> quite plausible and very much possible. We just haven't had the need to
> switch away from Dell until recently after Cumulus Networks (now Nvidia)
> had a falling out with Broadcom and effectively will cease support for
> Broadcom ASICs in the near future. We have loads of network config
> automation rolled out and very little of it is tied to anything Cumulus
> Linux specific, so there is a fair chance to switch over to Sonic with low
> to medium effort on our part, thus returning to the state where we can
> switch hardware vendors with fairly low effort. We are looking at Nvidia
> (former Mellanox) switches which hardly have any toolless rails, and we are
> also looking at all the other usual suspects in the "white box" world,
> which is why I asked how many of you care about the rail kit and I got my
> answer: "very little to not at all". In my opinion, if you never ask,
> you'll never get it, so I am asking my vendors for toolless rails, even if
> most of them will likely never get there, since I'm probably one of the
> very few who even brights that question up to them. I'd say network
> equipment has always been in a sad state of being compared to, well, just
> about any other equipment and for some reason we are all more or less
> content with it. May I suggest you all at least raise that question to your
> suppliers even if you know full well the answer is "no". At least it will
> start showing the vendors there is demand for this feature.
>
> On the subject of new builds. Over the course of my career I have hired
> contractors to rack/stack large build-outs and a good number of them treat
> your equipment the same way they treat their 2x4s. They torque all the
> screws to such a degree that when you have to undo that, you are sweating
> like a pig trying to undo one screw, eventually stripping it, so you have
> to drill 

Re: Rack rails on network equipment

2021-09-25 Thread Andrey Khomyakov
Well, folks, the replies have certainly been interesting. I did get my
answer, which seems to be "no one cares", which, in turn, explains why
network equipment manufacturers give very little to no attention to this
problem. A point of clarification is I'm talking about the problem in the
context of operating a data center with cabinet racks, not a telecom closet
with 2 post racks.

Let me just say from the get go that no one is making toolless rails a
priority to the point of shutting vendors out of the evaluation process. I
am not quite sure why that assumption was made by at least a few folks.
With that said, when all things being equal or fairly equal, which they
rarely are, that's when the rails come in as a factor.

We operate over 1000 switches in our data centers, and hardware failures
that require a switch swap are common enough that the speed of a swap starts
to matter to some extent. We probably swap a switch or two a month.
Furthermore, those switches several of you referenced, which run for 5+
years, are not the ones we use. I think you are thinking of the legacy days
where you pay $20k plus for a top of rack switch from Cisco, and then sweat
that switch until it dies of old age. I used to operate exactly like that
in my earlier days. This does not work for us for a number of reasons, and
so we don't go down that path.

We use Force10 family Dell switches which are basically Broadcom TD2+/TD3
based switches (ON4000 and ON5200 series) and we run Cumulus Linux on
those, so swapping hardware without swapping the operating system for us is
quite plausible and very much possible. We just haven't had the need to
switch away from Dell until recently after Cumulus Networks (now Nvidia)
had a falling out with Broadcom and effectively will cease support for
Broadcom ASICs in the near future. We have loads of network config
automation rolled out and very little of it is tied to anything Cumulus
Linux specific, so there is a fair chance to switch over to Sonic with low
to medium effort on our part, thus returning to the state where we can
switch hardware vendors with fairly low effort. We are looking at Nvidia
(former Mellanox) switches which hardly have any toolless rails, and we are
also looking at all the other usual suspects in the "white box" world,
which is why I asked how many of you care about the rail kit and I got my
answer: "very little to not at all". In my opinion, if you never ask,
you'll never get it, so I am asking my vendors for toolless rails, even if
most of them will likely never get there, since I'm probably one of the
very few who even brings that question up to them. I'd say network
equipment has always been in a sad state compared to, well, just
about any other equipment and for some reason we are all more or less
content with it. May I suggest you all at least raise that question to your
suppliers even if you know full well the answer is "no". At least it will
start showing the vendors there is demand for this feature.

On the subject of new builds. Over the course of my career I have hired
contractors to rack/stack large build-outs and a good number of them treat
your equipment the same way they treat their 2x4s. They torque all the
screws to such a degree that when you have to undo that, you are sweating
like a pig trying to undo one screw, eventually stripping it, so you have
to drill that out, etc, etc. How is that acceptable? I'm not trying to say
that _every_ contractor does that, but a lot do to the point that it
matters. I have no interest in discussing how to babysit contractors so
they don't screw up your equipment.

I will also concede that operating 10 switches in a colo cage probably
doesn't warrant considerations for toolless rails. Operating 500 switches
and growing per site?... It slowly starts to matter. And when your outlook
is expansion, then it starts to matter even more.

Thanks to all of you for your contribution. It definitely shows the
perspective I was looking for.

Special thanks to Jason How-Kow, who linked the Arista toolless rails
(ironically we have Arista evals in the pipeline and I didn't know they do
toolless, so it's super helpful)

--Andrey


On Fri, Sep 24, 2021 at 9:37 AM Andrey Khomyakov 
wrote:

> Hi folks,
> Happy Friday!
>
> Would you, please, share your thoughts on the following matter?
>
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails).  This is where both the switch side rail and the rack side rail
> just snap in, thus not requiring a screwdriver and hands of the size no
> bigger than a hamster paw to hold those stupid proprietary screws (lookin
> at your, cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network
> equipment 

Re: IPv6 woes - RFC

2021-09-25 Thread Owen DeLong via NANOG



> On Sep 25, 2021, at 02:10 , b...@uu3.net wrote:
> 
> Because IPv4 loopback is 127.0.0.1/8 and its usefull?

How so? Where do you actually use 16.7 million loopback addresses, let
alone 18 quintillion+ * 65536 (/48)?

> 
> 0:0:1-:0/32 means you generate addreses from
> that range and not necessary using /32 prefix..
> It just range thats reserved for LL.

So you want to reserve the range 0:0:1:0..0:0::0 with all zeros in the last
16 bits as loopback? Why the (effectively discontiguous net mask)?
Why not include 0:0:0:0 in it?

Sorry, not trying to rain on your parade, but trying to understand your 
thinking here.

> Same about RFC1918 aka space.. its a range reserved for local addreses.

My point was why repeat the RFC-1918 mistake. There’s really no need for
it unless you intend to recreate the NAT problems.

Further, you specified:
0:0:0:1/48 as loopback, that’s the range 0:0:0:0..0:0:0: in your 
proposed
addressing structure.

0:0:1-:0/16 as RFC-1918, that’s an odd way of notating 
0:0:0:0..0::: 
in your proposed addressing structure. As such, your meaning is 
unclear.

So it’s unclear how you intend to map ranges and netmasks in your proposal.

> The whole rationale is:
> - shorter prefix wins (so no overlap?)

Usually longest matching prefix wins, but when you’re talking about the 
distinction
between RFC-1918 and loopback, I think overlap poses a human factors problem
that you haven’t considered.

> - you can use nice short addreses like ::1234 for loopback
>  or ::1: for LL or ::1:0:1234 for RFC1918 like

Not to put too fine a point on it, but ::1 works in IPv6 today. If you want, you
are free to assign anything else you want on the loopback interface, so for
example, you could assign fd:0:0:1::/64 to the loopback interface and use
any address you want from fd:0:0:1::0 through fd:0:0:1:::: as
loopback addresses. (I don’t see a point in using GUA for loopback as long
as the ULA silliness exists. In fact, this might be the one and only legitimate
use case I can see for ULA.)

For RFC1918, you can make up anything you want within fd::/8 in IPv6
as it exists today. Ideally, you choose a randomized /48 from fd::/8 and
subnet that along /64 boundaries, but I don’t see that as significantly more
complex than what I think you are proposing.
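For illustration, a hedged sketch of picking a random ULA /48 and carving /64s out
of it. Note that RFC 4193 derives the 40-bit global ID from a hash; os.urandom is
only a stand-in for that here, and the printed prefix is just an example.

import ipaddress, os
from itertools import islice

global_id = int.from_bytes(os.urandom(5), "big")                 # 40 pseudo-random bits
ula_48 = ipaddress.IPv6Network(((0xfd << 120) | (global_id << 80), 48))
print(ula_48)                                                    # e.g. fd7c:9f3a:12ab::/48
for net in islice(ula_48.subnets(new_prefix=64), 3):             # first few /64s to assign
    print(net)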

> Whole ::1 short format should be used only to cut leading zeros
> not "more zeroes whatnever they appear".

Why? The current format allows you to put the :: wherever you think
it makes the most sense and as long as there’s only one :: in an
address (which is a requirement), there’s no ambiguity about the
number of 0s replaced.

Yes, it makes textual comparison of addresses messy, but there’s
really no need for that. Far better use textual representations only
for user presentation anyway. Internally, they should always be
stored/handled as a 128-bit unsigned integer value.

So the fact that:

2001:db8:0:1::5
2001:db8::1:0:0:0:5

Are two different ways of representing the same address isn’t
of any concern unless you’re making the mistake of trying to
string wise compare them in their text-representation format.
Both equate to the same uint128_t value.

> ND is new thing and it requires new things to protect it from attacks?

Not so much.

The defined ND attacks aren’t new for ND. They all exist in ARP as well.
What is new is 64-bit wide network segments. If you put a /6 on a switch
in IPv4 and then do an ARP scan, you’ll get the same table overflow
problems as you are talking about with “ND attacks”. The difference is
that in IPv4, /6 networks are extraordinarily rare (if any exist at all) while
it is commonplace in IPv6 to have /64 network segments.

Nonetheless, this isn’t actually inherent in the difference between ND ad
ARP, it is inherent in the difference between network segments limited
to 1024 possible host addresses or less and network segments that have
more than 18 Quintillion possible host addresses.

In fact, if you’re really super-worried about this (which isn’t really a thing
in most environments, TBH), you can create your IPv6 networks as /120s
or /118s or whatever and simply limit the total number of available host
addresses. You have to give up some IPv6 features that don’t exist in
IPv4 anyway, but ND will still work and you won’t have to worry about
table overflows any more than you did in IPv4.

> While all the hate towards NAT, after years of using it I see it as cool
> thing now. Yeah it breaks E2E, and thats why its place is only at CPE.
> The true tragedy is CGN.

So making the average household a second-class citizen on the network
is “cool thing now”? Not in my opinion. There are lots of applications that
should exist today and don’t because we chose to break the E2E model.
Of the applications that do, there is a great deal of added complexity,
fragility, and a loss of privacy due to the use of rendezvous hosts to
overcome the 

Re: IPv6 woes - RFC

2021-09-25 Thread Owen DeLong via NANOG



> On Sep 25, 2021, at 01:57 , b...@uu3.net wrote:
> 
> Well, I think we should not compare IPX to IPv4 because those protocols
> were made to handle completly different networks?
> 
> Yeah, IPv6 is new, but its more like revolution instead of evolution.
> 
> Well, Industry seems to addapt things quickly when they are good enough.
> Better things replace worse. Of course its not always the case, sometimes
> things are being forced here.. And thats how I feel about IPv6..

Sometimes worse things replace better. NAT, for example was definitely not
an improvement to IPv4. It was a necessary evil intended to be a temporary
fix.

> 
> IPv4 Lookback is 127.0.0.1/8
> You can use bind IPs within range by applications. Handy
> In IPv6 its not the case.

You are free to assign any additional IPv6 addresses you like to the loopback
interface and then bind them to applications. Personally, I haven’t found a
particularly good use for this, but it is possible.

It does mean that instead of wasting 1/256th of the entire address space
in every context on loopbacks, you have to assign what you need there,
but you can easily assign a /64 prefix to a loopback interface and have
applications bind within range.

> IPv6 ND brings new problems that has been (painfully?) fixed in IPv4.
> Tables overflows, attacks and DDoS.. Why to repeat history again?

Table overflows weren’t fixed in IPv4 and have nothing to do with ND vs.
ARP. Table overflows are (not really an issue in my experience) the
result of a larger address space than the memory available for the L2
forwarding table on switches or the ND table on hosts. This isn’t due
to a difference in ND vs. ARP. It is due to the fact that there are no
64-bit networks in IPv4, but they are commonplace in IPv6.

Mostly this has been solved in software by managing table discards more
effectively.

> IPv6 DHCP: Im not using IPv6, but I heard ppl talking about some 
> issues. If this is not the case, im sorry. Its been a while when I last time
> played with IPv6...

I am using IPv6 and I’m using IPv6 DHCP. I haven’t encountered any significant
problems with it other than some minor inconveniences introduced by the ability
to have different DUID types and vendors doing semi-obnoxious things along that
line.

> IPv6 interop: yeah, I agree here.. But people involved with IPv6 should 
> think about some external IPv4 interop.. Internet was exploding at 1997..
> Maybe they had hope that everyone upgrade like in CIDR case. And maybe it 
> could happen if IPv6 wasnt so alien ;)

It was thought about… It was considered. It was long pondered. Problem was,
nobody could come up with a way to overcome the fact that you can’t put
128 bits of data in a 32 bit field without loss.

IPv6 really isn’t so alien, so I don’t buy that argument. The software changes
necessary to implement IPv6 were significantly bigger than CIDR and IPv6
affected applications, not just network. There was no way around these
two facts. The IPv6 network stack did get adopted and implemented nearly
as fast as CIDR and virtually every OS, Switch, Router has had IPv6 support
for quite some time now at the network stack level. It is applications and
content providers that are lagging and they never did anything for CIDR.

> As for IPv4 vs IPv6 complexity, again, why repeat history.

What complexity?

> Biggest IPv4
> mistake was IPv4 being classfull. It was fixed by bringing CIDR into game.

No, biggest IPv4 mistake was 32-bit addresses. A larger address would have been
inconvenient in hardware at the time, but it would have made IPv4 much more
scalable and would have allowed it to last significantly longer.

> (Another big mistake was class E reservation...)

Not really. It was a decision that made sense at the time. Class D reservation
made sense originally too. Without it, we wouldn’t have had addresses available
to experiment with or develop multicast.

There was no way to know at the time that decision was made that IPv4 would run
out of addresses before it would find some new thing to experiment with.

> Internet was tiny at that time so everyone followed.

Followed what, exactly?

> Image something like this today? Same about IPv6.. it brings
> forced network::endpoint probably due to IoT, sacrificing flexibility.

I can’t parse this into a meaningful comment. Can you clarify please?
What is “forced network::endpoint” supposed to mean and what does it
have to do with IoT? What flexibility has been sacrificed?

> Again, I dont want to really defend my standpoint here. Its too late for 
> that. I kinda regret now dropping into discussion...

OK, so you want to make random comments which are not even necessarily
true and then walk away from the discussion? I have trouble understanding
that perspective.

I’m not trying to bash your position or you. I’m trying to understand your
objections, figure out which ones are legitimate criticism of IPv6, which
ones are legitimate criticism, but not actually IPv6, and which ones
are 

Re: 100GbE beyond 40km

2021-09-25 Thread Colton Conor
It seems that many of you are recommending the SolidOptics 1U
appliances for this application. What do those cost?

On Fri, Sep 24, 2021 at 6:01 PM Lady Benjamin Cannon of Glencoe, ASCE
 wrote:
>
> Above 40km I like coherent systems with FEC. You can feed the juniper into a 
> pair of SolidOptics 1U appliances
>
> Ms. Lady Benjamin PD Cannon of Glencoe, ASCE
> 6x7 Networks & 6x7 Telecom, LLC
> CEO
> l...@6by7.net
> "The only fully end-to-end encrypted global telecommunications company in the 
> world.”
>
> FCC License KJ6FJJ
>
> Sent from my iPhone via RFC1149.
>
> On Sep 24, 2021, at 2:35 PM, Edwin Mallette  wrote:
>
> I just bite the bullet and use 3rd party optics.  It’s easier and once  you 
> make the switch, lower cost.
>
> Ed
>
> Sent from my iPhone
>
> On Sep 25, 2021, at 12:29 AM, Joe Freeman  wrote:
>
> 
> Open Line Systems can get you to 80km with a 100G DWDM Optic (PAM4) -
>
> I've used a lot of SmartOptics DCP-M40 shelves for this purpose. They also 
> have transponders that allow you to go from a QSFP28 to CFP to do coherent 
> 100G out to 120Km using the DCP-M40, without a need for regen or extra amps 
> in line.
>
> The DCP-M40 is a 1RU box. It looks like a deep 40ch DWDM filter but includes 
> a VOA, an EDFA amp, and a WSS, I think.
>
> On Fri, Sep 24, 2021 at 4:40 PM Randy Carpenter  wrote:
>>
>>
>> How is everyone accomplishing 100GbE at farther than 40km distances?
>>
>> Juniper is saying it can't be done with anything they offer, except for a 
>> single CFP-based line card that is EOL.
>>
>> There are QSFP "ZR" modules from third parties, but I am hesitant to try 
>> those without there being an equivalent official part.
>>
>>
>> The application is an ISP upgrading from Nx10G, where one of their fiber 
>> paths is ~35km and the other is ~60km.
>>
>>
>>
>> thanks,
>> -Randy


Re: Rack rails on network equipment

2021-09-25 Thread ic
Hi,

> On 24 Sep 2021, at 12:37, Andrey Khomyakov  wrote:
> 
> (you know, where most of your server ports are…)

Port side intake (switch at the front of the rack) is generally better for 
cooling the optical modules. The extra cabling difficulty is worth it.

Also, as others said, choosing an arguably inferior product only because it’s 
easier to rack sounds like a bad idea.

BR, ic



Re: Rack rails on network equipment

2021-09-25 Thread Sabri Berisha
- On Sep 24, 2021, at 11:19 AM, William Herrin b...@herrin.us wrote:

Hi,

> Seriously, the physical build of network equipment is not entirely
> competent.

Except, sometimes there is little choice. Look at 400G QSFP-DD for
example. Those optics can generate up to 20 watts of heat that needs
to be dissipated. For 800G that can go up to 25 watts.

That makes back-to-front cooling, as some people demand, very
challenging, if not impossible.

Thanks,

Sabri



Re: IPv6 woes - RFC

2021-09-25 Thread Baldur Norddahl
On Sat, 25 Sept 2021 at 11:10,  wrote:

> Because IPv4 loopback is 127.0.0.1/8 and its usefull?
>

I am not sure why it is useful but nothing stops you from adding more
loopback addresses:

root@jump2:~# ip addr add ::2/128 dev lo
root@jump2:~# ping6 ::2
PING ::2(::2) 56 data bytes
64 bytes from ::2: icmp_seq=1 ttl=64 time=0.043 ms

While I am not sure what use extra addresses from the 127.0.0.0/8 prefix are
on the loopback, it is quite common for us to add extra global addresses
and then use that with proxy arp. Of course that is only necessary on IPv4
since IPv6 isn't so restrained that we have to save every last address bit
using tricks.


>
> - you can use nice short addreses like ::1234 for loopback
>

root@jump2:~# ip addr add ::1234/128 dev lo
root@jump2:~# ping6 ::1234
PING ::1234(::1234) 56 data bytes
64 bytes from ::1234: icmp_seq=1 ttl=64 time=0.046 ms

:-)


>   or ::1: for LL or ::1:0:1234 for RFC1918 like
>

With IPv6 you can use fe80::1: for link local and fd00::1:0:1234 for
your RFC1918 like setup. And then you can use 1:1 NAT to transform that to
GUA on the router. Even NAT, if you insist on using it, is better with IPv6.

The confusion here appears to be that auto generated link local prefixes
are long with many hex digits. But compared to the new proposal, which
could have no auto generated link local due to having too few bits, there
is nothing that stops you from manually assigning link local addresses. It
is just that nobody wants to bother with that and you wouldn't either.

Example:

root@jump2:~# ip addr add fe80::1:/64 dev eth0
root@jump2:~# ping6 fe80::1:%eth0
PING fe80::1:%eth0(fe80::1:) 56 data bytes
64 bytes from fe80::1:: icmp_seq=1 ttl=64 time=0.033 ms



> ND is new thing and it requires new things to protect it from attacks?
>

I am not aware of any NDP attacks that would be any different if based on
ARP. Those two protocols are practically the same.

Regards,

Baldur


Re: IPv6 woes - RFC

2021-09-25 Thread borg
Because IPv4 loopback is 127.0.0.1/8 and it's useful?

0:0:1-:0/32 means you generate addresses from
that range, not necessarily using a /32 prefix.
It's just a range that's reserved for LL.

Same about RFC1918 aka space.. it's a range reserved for local addresses.

The whole rationale is:
- shorter prefix wins (so no overlap?)
- you can use nice short addresses like ::1234 for loopback
  or ::1: for LL or ::1:0:1234 for RFC1918 like

The whole ::1 short format should be used only to cut leading zeros,
not "more zeroes wherever they appear".

ND is a new thing and it requires new things to protect it from attacks?

For all the hate towards NAT, after years of using it I see it as a cool
thing now. Yeah, it breaks E2E, and that's why its place is only at the CPE.
The true tragedy is CGN.

Yeah, services make money, so they should adopt the new protocol so users
can access those services. In my opinion, due to IPv4 exhaustion, this
is the right adoption scheme. You move users to IPv6 and free up IPv4 addresses
for more services. It means the internet can still grow and no one is really cut
off. Once the IPv6 mass is big enough, you can start to fade out IPv4 services.

Prototype, yeah... if only this were 1997 again... ;)


-- Original message --

From: Owen DeLong 
To: b...@uu3.net
Cc: nanog@nanog.org
Subject: Re: IPv6 woes - RFC
Date: Fri, 24 Sep 2021 17:24:29 -0700



> On Sep 24, 2021, at 2:01 AM, b...@uu3.net wrote:
> 
> Oh yeah, it would be very funny if this will really happen (new protocol).
> Im not happy with IPv6, and it seems many others too.
> 
> This is short list how my ideal IPv6 proto looks like:
> - 64bit address space
>  more is not always better

Perhaps, but the benefits of a 128 bit address space with a convenient
near universal network/host boundary has benefits. What would be the
perceived benefit of 64-bit addressing over 128?

> - loopback 0:0:0:1/48

Why dedicate a /48 to loopback?

> - soft LL 0:0:1-:0/32 (Link Local)

Having trouble understanding that expression… Wouldn't it overlap loopback, since
0:0::/32 and 0:0:0::/48 would be overlapping prefixes?

> - RFC1918 address space 0:1-:0:0/16

Why repeat this mistake?

> - keep ARPs, ND wasnt great idea after all?

I don't see a significant difference (pro or con) to ND vs. ARP.

> - NAT support (because its everywhere these days)

That's a tragedy of IPv4, I don't see a benefit to inflicting it on a new 
protocol.

> - IPv6 -> IPv4 interop (oneway)
>  we can put customers on IPv6, while keeping services dualstack

That requires the services to be dual stack which is kind of the problem we have
with IPv6 today… Enough services that matter aren't dual stack.

> - correct DHCP support (SLAAC wasnt great idea after all?)
>  I think its already in IPv6, but was an issue at the begining

Depends on your definition of "correct". I disagree about SLAAC not being a great
idea. It might not fit every need, but it's certainly a low-overhead, highly useful
mechanism in a lot of deployments.

> If there are some weird requirements from others, put them into layer up.
> L3 needs to be simple (KISS concept), so its easy to implement and less
> prone to bugs.
> 
> And that IPv6 I would love to see and addapt right away :)

Well.. Present your working prototype on at least two different systems. ;-)

Owen

> 
> 
> -- Original message --
> 
> From: Joe Maimon 
> To: Owen DeLong , Bjørn Mork 
> Cc: nanog@nanog.org
> Subject: Re: IPv6 woes - RFC
> Date: Thu, 23 Sep 2021 16:26:17 -0400
> 
> 
> 
> Owen DeLong via NANOG wrote:
>>> There are real issues with dual-stack, as this thread started out with.
>>> I don't think there is a need neither to invent IPv6 problems, nor to
>>> promote IPv6 advantages.  What we need is a way out of dual-stack-hell.
>> I dont disagree, but a reversion to IPv4-only certainly wont do it.
> 
> For everyone who does have enough IPv4 addresses, it does. This is the problem
> in a nutshell. If that starts trending, IPv6 is done.
> 
>> I think the only way out is through.
> 
> I hope not, both for IPv6 sake and for the network users. We dont know how 
> much
> longer the goal will take, there is materializing a real possibility we will
> never quite reach it, and the potholes on the way are pretty rough.
> 
> And as the trip winds on, the landscape is changing, not necessarily for the
> better.
> 
> One more "any decade now" and another IPv4 replacement/extension might just
> happen on the scene and catch on, rendering IPv6 the most wasteful global
> technical debacle to date.
> 
> 
>>  Unfortunately, the IPv6 resistant forces
>> are making that hard for everyone else.
>> 
>> Owen
> 
> You say that as if it was a surprise, when it should not have been, and you 
> say
> that as if something can be done about it, which we should know by now cannot 
> be
> the primary focus, since it cannot be done in any timely fashion. If at all.
> 
> Its time to throw mud on the wall and see what sticks. Dual stack and wait 

Re: IPv6 woes - RFC

2021-09-25 Thread borg
Well, I think we should not compare IPX to IPv4, because those protocols
were made to handle completely different networks?

Yeah, IPv6 is new, but it's more like a revolution than an evolution.

Well, the industry seems to adopt things quickly when they are good enough.
Better things replace worse. Of course it's not always the case; sometimes
things are being forced here.. And that's how I feel about IPv6..

IPv4 loopback is 127.0.0.1/8.
Applications can bind IPs within that range. Handy.
In IPv6 that's not the case.

IPv6 ND brings new problems that have been (painfully?) fixed in IPv4:
table overflows, attacks and DDoS.. Why repeat history again?

IPv6 DHCP: I'm not using IPv6, but I've heard people talking about some
issues. If this is not the case, I'm sorry. It's been a while since I last
played with IPv6...

IPv6 interop: yeah, I agree here.. But people involved with IPv6 should
have thought about some external IPv4 interop.. The Internet was exploding in 1997..
Maybe they hoped that everyone would upgrade like in the CIDR case. And maybe it
could have happened if IPv6 wasn't so alien ;)

As for IPv4 vs IPv6 complexity, again, why repeat history? The biggest IPv4
mistake was IPv4 being classful. It was fixed by bringing CIDR into the game.
(Another big mistake was the class E reservation...)
The Internet was tiny at that time, so everyone followed.
Imagine something like this today? Same with IPv6.. it brings a
forced network::endpoint split, probably due to IoT, sacrificing flexibility.

Again, I don't really want to defend my standpoint here. It's too late for
that. I kinda regret dropping into the discussion now...


-- Original message --

From: Grant Taylor via NANOG 
To: nanog@nanog.org
Subject: Re: IPv6 woes - RFC
Date: Fri, 24 Sep 2021 14:26:27 -0600

On 9/24/21 11:53 AM, b...@uu3.net wrote:
> Well, I see IPv6 as double failure really.

I still feel like you are combining / conflating two distinct issues into one
generalization.

> First, IPv6 itself is too different from IPv4.

Is it?  Is it really?  Is the delta between IPv4 and IPv6 greater than the delta
between IPv4 and IPX?

If anything, I think the delta between IPv4 and IPv6 is too small. Small enough
that both IPv4 and IPv6 get treated as one protocol and thus a lot of friction
between the multiple personalities therein.  I also think that the grouping of
IPv4 and IPv6 as one protocol is part of the downfall.

More over if you think of IPv4 and IPv6 dual stack as analogous to the
multi-protocol networks of the '90s, and treat them as disparate protocols that
serve similar purposes in (completely) different ways, a lot of the friction
seems to make sense and as such becomes less friction through understanding and
having reasonable expectations for the disparate protocols.

> What Internet wanted is IPv4+ (aka IPv4 with bigger address space, likely
> 64bit). Of course we could not extend IPv4, so having new protocol is fine.

I don't think you truly mean that having a new protocol is fine. Because if you
did, I think you would treat IPv6 as a completely different protocol from IPv4.
E.g. AppleTalk vs DECnet.  After all, we effectively do have a new protocol;
IPv6.

IPv6 is as similar to IPv4 as Windows 2000 is similar to Windows 98.  Or
"different" in place of "similar".

> It should just fix problem (do we have other problems I am not aware of with
> IPv4?) of address space and thats it.  Im happy with IPv4, after 30+ years of
> usage we pretty much fixed all problems we had.

I disagree.

> The second failure is adoption. Even if my IPv6 hate is not rational, adoption
> of IPv6 is crap. If adoption would be much better, more IPv4 could be used for
> legacy networks ;) So stuborn guys like me could be happy too ;)

I blame the industry, not the IPv6 protocol, for the lackluster adoption of
IPv6.

> As for details, that list is just my dream IPv6 protocol ;)
> 
> But lets talk about details:
> - Loopback on IPv6 is ::1/128
>I have setups where I need more addresses there that are local only.
>Yeah I know, we can put extra aliases on interfaces etc.. but its extra
>work and not w/o problems

How does IPv6 differ from IPv4 in this context?

> - IPv6 Link Local is forced.
>I mean, its always on interface, nevermind you assign static IP.
>LL is still there and gets in the way (OSPFv3... hell yeah)

I agree that IPv6 addresses seem to accumulate on interfaces like IoT devices do
on a network.  But I don't see a technical problem with this in and of itself.
--  I can't speak to OSPFv3 issues.

> - ULA space, well.. its like RFC1918 but there are some issues with it
>(or at least was? maybe its fixed) like source IP selection on with
>multiple addresses.

I consider this to be implementation issues and not a problem with the protocol
itself.

> - Neighbor Discovery protocol... quite a bit problems it created.

Please elaborate.

>What was wrong w/ good old ARP? I tought we fixed all those problems
>already like ARP poisoning via port security.. etc


Re: 100GbE beyond 40km

2021-09-25 Thread Etienne-Victor Depasquale via NANOG
Bear with my ignorance, I'm genuinely surprised at this:

Does this have to be Ethernet? You could look into line gear with coherent
> optics.
>

Specifically, do you mean something like: "does this have to be
IEEE-standardized all the way down to L1 optics?" Because you can transmit
Ethernet frames over line gear with coherent optics, right ?

Please don't flame me, I'm just ignorant and willing to learn.

Cheers,

Etienne

On Fri, Sep 24, 2021 at 11:25 PM Bill Blackford 
wrote:

> Does this have to be Ethernet? You could look into line gear with coherent
> optics. IIRC, they have built-in chromatic dispersion compensation, and
> depending on the card, would include amplification.
>
> On Fri, Sep 24, 2021 at 1:40 PM Randy Carpenter 
> wrote:
>
>>
>> How is everyone accomplishing 100GbE at farther than 40km distances?
>>
>> Juniper is saying it can't be done with anything they offer, except for a
>> single CFP-based line card that is EOL.
>>
>> There are QSFP "ZR" modules from third parties, but I am hesitant to try
>> those without there being an equivalent official part.
>>
>>
>> The application is an ISP upgrading from Nx10G, where one of their fiber
>> paths is ~35km and the other is ~60km.
>>
>>
>>
>> thanks,
>> -Randy
>>
>
>
> --
> Bill Blackford
>
> Logged into reality and abusing my sudo privileges.
>


-- 
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


Cogent Contact

2021-09-25 Thread Nikolas Geyer
If someone from Cogent is on the list, can you please contact me regarding some 
ongoing route hijacking problems? Attempts to reach your listed contacts (PDB, 
RIRs, IRR, etc.) have resulted in radio silence.

Thanks!
Nik.