Re: Rack rails on network equipment

2021-09-24 Thread Bryan Fields
On 9/24/21 10:58 PM, Owen DeLong via NANOG wrote:
> Meh… Turn off power supply input switch, open chassis carefully, apply 
> high-wattage 1Ω resistor across capacitor terminals for 10 seconds.
> 

If dealing with a charged capacitor, do not use a low resistance such as 1
ohm.  That is the same as using a screwdriver, and will cause a big arc.  You
want to use something like a 100k ohm resistor held across the terminals for
several seconds; that will bleed the charge off over 5-10 seconds.

Most (all?) power supplies will have a bleeder resistor across any large-value
caps, and the input section will likely be shielded/encased anyway.  If you let
it sit for 5-10 minutes, the leakage resistance will dissipate the charge in any
typical capacitor.
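
For a rough sense of scale (illustrative numbers only, not measurements from
any particular supply), the bleed-off follows the usual RC curve:

    import math

    # Illustrative assumptions: 400 V across a 47 uF bulk cap at switch-off.
    V0 = 400.0   # volts (assumed)
    C = 47e-6    # farads (assumed)
    R = 100e3    # ohms, the suggested bleed resistor

    tau = R * C  # RC time constant, about 4.7 s with these numbers
    for t in (1, 5, 10, 30):
        v = V0 * math.exp(-t / tau)
        print(f"t={t:>2} s  V={v:7.2f} V")

    # Peak dissipation in the 100k part is V0**2 / R = 1.6 W, easy to handle.
    # Across 1 ohm it would momentarily be about 160 kW, hence the arc.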

-- 
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net


Re: Rack rails on network equipment

2021-09-24 Thread Wayne Bouchard
Didn't require any additional time at all when equipment wasn't bulky
enough to need rails in the first place.


I've never been happy about that change.


On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails" (sometimes known as speed rails or toolless
> rails).  This is where both the switch side rail and the rack side rail
> just snap in, thus not requiring a screwdriver and hands no bigger than a
> hamster paw to hold those stupid proprietary screws (looking at you,
> Cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just a network
> equipment racking pov) to maybe 1hr... (we estimated that on average it
> took us 30 min to rack a Juniper switch, from cutting open the box onward,
> versus 5 min with a Dell switch)
> Interesting tidbit is that we actually used to manufacture custom rails for
> our Juniper EX4500 switches so the switch can be actually inserted from the
> back of the rack (you know, where most of your server ports are...) and not
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
> didn't work at all for us unless we used wider racks, which then, in turn,
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so
> it's a bit of a hardware lock-in from that point of view.
> 
> *So ultimately my question to you all is how much do you care about the
> speed of racking and unracking equipment and do you tell your suppliers
> that you care? How much does the time it takes to install or replace a
> switch impact you?*
> 
> I was having a conversation with a vendor and was pushing hard on the fact
> that their switches will end up being actually costlier for me long term
> just because my switch replacement time quadruples at least, thus requiring
> me to staff more remote hands. Am I overthinking this and artificially
> limiting myself by excluding vendors who don't ship with toolless rails
> (which is all of them now except Dell)?
> 
> Thanks for your time in advance!
> --Andrey

---
Wayne Bouchard
w...@typo.org
Network Dude
http://www.typo.org/~web/


Re: Rack rails on network equipment

2021-09-24 Thread Owen DeLong via NANOG



> On Sep 24, 2021, at 3:35 PM, Niels Bakker  wrote:
> 
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
>> Which - why do I have to order different part numbers for back to front 
>> airflow?  It's just a fan, can't it be made reversible?  Seems like that 
>> would be cheaper than stocking alternate part numbers.
> 
> The fan is inside the power supply right next to the high-voltage capacitors. 
> You shouldn't be near that without proper training.

Meh… Turn off power supply input switch, open chassis carefully, apply 
high-wattage 1Ω resistor across capacitor terminals for 10 seconds.

There isn’t going to be any high voltage left after that.

Owen

> 
> 
>   -- Niels.



Re: IPv6 woes - RFC

2021-09-24 Thread Owen DeLong via NANOG



> On Sep 24, 2021, at 10:53 AM, b...@uu3.net wrote:
> 
> Well, I see IPv6 as double failure really. First, IPv6 itself is too
> different from IPv4. What Internet wanted is IPv4+ (aka IPv4 with
> bigger address space, likely 64bit). Of course we could not extend IPv4,
> so having new protocol is fine. It should just fix problem (do we have other
> problems I am not aware of with IPv4?) of address space and thats it.
> Im happy with IPv4, after 30+ years of usage we pretty much fixed all 
> problems we had.

Lack of address space is a key problem with IPv4 resolved by IPv6.
Lack of address transparency is another one, but that’s largely a side-effect
of the hacks applied to try and (temporarily) cope with the first one. (Also
solved by IPv6.)
Inability to scale routing by using topology indicators separate from addresses
is a problem inherent in both IPv4 and IPv6. No, I do not consider that the
various hacks that have been applied on this, including LISP, are a solution.
Zeroconf in IPv4 is quite a bit less functional than in IPv6.
PMTU-D is a problem in both protocols, but in some ways, not quite as bad in 
IPv6.

> As for details, that list is just my dream IPv6 protocol ;)
> But lets talk about details:
> - Loopback on IPv6 is ::1/128
>  I have setups where I need more addresses there that are local only.
>  Yeah I know, we can put extra aliases on interfaces etc.. but its extra
>  work and not w/o problems

I haven’t had a problem assigning additional lo addresses, but whatever.
I think wasting an entire /8 on loopbacks was utterly stupid. Do you have
any situation where you need more than 18 quintillion loopbacks? If not,
then an IPv6 /64 would be fine worst case. (0:0:0:1::/64?)
> - IPv6 Link Local is forced.
>  I mean, its always on interface, nevermind you assign static IP.
>  LL is still there and gets in the way (OSPFv3... hell yeah)
How is it in the way? It’s quite useful and utilitarian.
OSPFv3 uses LL because it doesn’t want to risk having its traffic forwarded
off-link and this guarantees that it won’t.

> - ULA space, well.. its like RFC1918 but there are some issues with it
>  (or at least was? maybe its fixed) like source IP selection on with 
>  multiple addresses.

Isn’t that an issue in IPv4 if you assign a host a 1918 address and a GUA
on the same interface, too?

Bottom line, if you have GUA, ULA is mostly pointless in most scenarios.
This isn’t a problem unique to IPv6, save for the fact that you don’t
usually have the option of putting GUA where you put RFC-1918 because
if you have GUA, you don’t usually want 1918. So I really don’t see
any difference here other than the fact that IPv6 gives you an additional
choice you don’t generally have in IPv4, but that choice does come with
some additional tradeoffs.

> - Neighbor Discovery protocol... quite a bit problems it created.
>  What was wrong w/ good old ARP? I tought we fixed all those problems
>  already like ARP poisoning via port security.. etc

What are the problems you’re seeing with ND? In my experience, it mostly
just works.
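
As background on the mechanical difference from ARP, a minimal sketch (the
address used is purely illustrative): a Neighbor Solicitation goes to a
solicited-node multicast group derived from the target address rather than to
the all-stations broadcast, so most hosts on the link never even see the query.

    import ipaddress

    def solicited_node(addr: str) -> ipaddress.IPv6Address:
        """Solicited-node multicast group (RFC 4291) for an IPv6 address."""
        # ff02::1:ffXX:XXXX, where XX:XXXX are the low 24 bits of the target.
        low24 = ipaddress.IPv6Address(addr).packed[-3:]
        prefix = bytes.fromhex("ff02" + "00" * 9 + "01ff")  # ff02::1:ff00:0/104
        return ipaddress.IPv6Address(prefix + low24)

    print(solicited_node("2001:db8::200:86ff:fe05:80da"))  # ff02::1:ff05:80da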

> - NAT is there in IPv6 so no futher comments

Sadly, this is true. The good news is that hardly anyone uses it and
most people doing IPv6 seem to understand that it’s a really bad idea.

> - DHCP start to get working on IPv6.. but it still pain sometimes

I haven’t seen anything in DHCPv6 that is significantly harder than in
IPv4 other than the need to chase the DUID format that a particular
host uses in order to set up a fixed address.
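
As background on what has to be chased, a minimal sketch (the DUID shown is
made up, not from any real host): the reservation is keyed on the whole DUID
blob rather than on the MAC that happens to be buried inside it.

    import datetime
    import struct

    def decode_duid_llt(duid_hex: str):
        """Decode a DUID-LLT (RFC 8415, type 1)."""
        # Layout: 2-byte type, 2-byte hardware type, 4-byte timestamp
        # (seconds since 2000-01-01), then the link-layer address.
        raw = bytes.fromhex(duid_hex.replace(":", ""))
        duid_type, hw_type, secs = struct.unpack("!HHI", raw[:8])
        mac = ":".join(f"{b:02x}" for b in raw[8:])
        created = datetime.datetime(2000, 1, 1) + datetime.timedelta(seconds=secs)
        return duid_type, hw_type, mac, created

    # Illustrative DUID, not taken from any real host:
    print(decode_duid_llt("00:01:00:01:26:5c:4a:10:52:54:00:12:34:56"))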

> And biggest problem, interop w/ IPv4 was completly failure.
> Currently we have best Internet to migrate to new protocol.
> Why? Because how internet become centralized. Eyeball networks
> just want to reach content. E2E communication is not that much needed.
> We have games and enhusiast, but those can pay extra for public IPv4.
> Or get VPN/VPS.

Lots of possibilities were discussed, but it boiled down to the eventual
realization that there is mathematically no way to put a 128 bit
address into a 32 bit field without loss of information.

I bet any solution you can theorize ends up dying due to a variant
of the above issue.

> And end comment. I do NOT want to start some kind of flame war here.
> Yeah I know, Im biased toward IPv4. If something new popups, I want it 
> better than previous thingie (a lot) and easier or at least same level of 
> complications, but IPv6 just solves one thing and brings a lot of 
> complexity.

I have implemented IPv6 in a lot of environments and other than the complexities
around vendors doing a poor job of implementation, I haven’t encountered any
complexity that I couldn’t relate to the same problem in IPv4.

Can you elaborate on these supposed complexities and how they have actually
impacted you in real world scenarios? Perhaps the issues you’ve encountered
can be addressed in a useful way.

> The fact is, IPv6 failed. There are probably multiple reasons for it.
> Do we ever move to IPv6? I dont know.. Do I care for now? Nope, 

Re: IPv6 woes - RFC

2021-09-24 Thread Owen DeLong via NANOG



> On Sep 24, 2021, at 9:56 AM, Joe Maimon  wrote:
> 
> 
> 
> Owen DeLong wrote:
>> 
>>> On Sep 23, 2021, at 13:26 , Joe Maimon  wrote:
>>> 
>>> 
>>> I hope not, both for IPv6 sake and for the network users. We dont know how 
>>> much longer the goal will take, there is materializing a real possibility 
>>> we will never quite reach it, and the potholes on the way are pretty rough.
>> By “the only way out is through” I meant that the only way we can get back 
>> to anything resembling mono-stack is, in fact, to complete the transition to 
>> IPv6.
> 
> The question is how? Waiting for everyone or nearly everyone to dual stack, 
> the current strategy, is awful. Like pulling gum off a shoe.

Agreed, so the question boils down to what can be done to motivate the laggard 
content providers to get off the dime.

>>> And as the trip winds on, the landscape is changing, not necessarily for 
>>> the better.
>> The IPv4 landscape will continue to get worse and worse. It cannot possibly 
>> get better, there just aren’t enough addresses for that.
> 
> I was referring to the more general network landscape, the governance system, 
> the end of p2p, balkanization, etc, all trends and shifts that become more 
> likely and entrenched the longer IPv6 lags and the scarcer IPv4 becomes.
> 
>> 
>>> One more "any decade now" and another IPv4 replacement/extension might just 
>>> happen on the scene and catch on, rendering IPv6 the most wasteful global 
>>> technical debacle to date.
>> If that’s what it takes to move forward with a protocol that has enough 
>> addresses, then so be it. I’m not attached to IPv6 particularly, but I 
>> recognize that IPv4 can’t keep up. As such, IPv6 is just the best current 
>> candidate. If someone offers a better choice, I’m all for it.
> 
> Whose to say it would be a proper p2p system? I know you believe strongly in 
> that and want it fully restored, at least on the protocol level.

There are so many potentially useful things we could do with a restored e2e 
system that are simply not practical today that yes, I consider that vital.

For one thing, I’m really tired of vendor cloud lock-in just because products 
need rendezvous hosts with public addresses.

>>>> Unfortunately, the IPv6 resistant forces
>>>> are making that hard for everyone else.
>>>> 
>>>> Owen
>>> You say that as if it was a surprise, when it should not have been, and you 
>>> say that as if something can be done about it, which we should know by now 
>>> cannot be the primary focus, since it cannot be done in any timely fashion. 
>>> If at all.
>> It’s not a surprise, but it is a tragedy.
>> 
>> There are things that can be done about it, but nobody currently wants to do 
>> them.
> 
> So lets make the conversation revolve around what can be done to actually 
> advance IPv6, and what we should know by now is that convincing or coercing 
> deployment with the current state of affairs does not have enough horsepower 
> to get IPv6 anywhere far anytime soon.

I’m open to alternatives if you have any to offer.

>>> Its time to throw mud on the wall and see what sticks. Dual stack and wait 
>>> is an ongoing failure slouching to disaster.
>> IPv4 is an ongoing failure slouching to disaster, but the IPv6-resistant 
>> among us remain in denial about that.
> 
> Who is this "us"? Anybody even discussing IPv6 in a public forum is well 
> ahead of the curve. Unfortunately. All early adopters. Real Early.

Everybody using the internet, but more importantly, the content providers that 
are resisting IPv6 deployment on their content are probably the biggest problem 
at this time.

>> At some point, we are going to have to make a choice about how much longer 
>> we want to keep letting them hold us back. It will not be an easy choice, it 
>> will not be convenient, and it will not be simple.
>> 
>> The question is how much more pain and how much longer will it take before 
>> the choice becomes less difficult than the wait?
>> 
>> Owen
>> 
> Exactly what does this choice look like? Turn off IPv4 when its fully 
> functional? Only the have-nots may make the choice not to deploy IPv4 
> sometime in the future, and for them, its not going to be a real choice.

IPv4 hasn’t been fully functional for more than a decade. At some point, the 
pain of continuing to wait for the laggards will become sufficient that those 
who have been running dual-stack will simply turn off IPv4 and leave the 
laggards behind. It might tragically not happen in my lifetime, but it has to 
happen at some point.

Owen



Re: IPv6 woes - RFC

2021-09-24 Thread Owen DeLong via NANOG



> On Sep 24, 2021, at 2:01 AM, b...@uu3.net wrote:
> 
> Oh yeah, it would be very funny if this will really happen (new protocol).
> Im not happy with IPv6, and it seems many others too.
> 
> This is short list how my ideal IPv6 proto looks like:
> - 64bit address space
>  more is not always better

Perhaps, but a 128-bit address space with a convenient, near-universal
network/host boundary has benefits. What would be the perceived benefit
of 64-bit addressing over 128?

> - loopback 0:0:0:1/48

Why dedicate a /48 to loopback?

> - soft LL 0:0:1-:0/32 (Link Local)

Having trouble understanding that expression… Wouldn’t it overlap loopback,
since 0:0::/32 and 0:0:0::/48 would be overlapping prefixes?

> - RFC1918 address space 0:1-:0:0/16

Why repeat this mistake?

> - keep ARPs, ND wasnt great idea after all?

I don’t see a significant difference (pro or con) to ND vs. ARP.

> - NAT support (because its everywhere these days)

That’s a tragedy of IPv4; I don’t see a benefit to inflicting it on a new 
protocol.

> - IPv6 -> IPv4 interop (oneway)
>  we can put customers on IPv6, while keeping services dualstack

That requires the services to be dual stack which is kind of the problem we have
with IPv6 today… Enough services that matter aren’t dual stack.

> - correct DHCP support (SLAAC wasnt great idea after all?)
>  I think its already in IPv6, but was an issue at the begining

Depends on your definition of “correct”. I disagree about SLAAC not being a 
great
idea. It might not fit every need, but it’s certainly a low-overhead highly 
useful
mechanism in a lot of deployments.
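
To illustrate the low-overhead point, a minimal sketch of the classic EUI-64
variant (many hosts now use privacy or stable-opaque interface IDs instead):
the host derives its own address from the advertised /64 prefix and its MAC,
with no server and no per-host state.

    import ipaddress

    def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
        """SLAAC address from a /64 prefix and a MAC (RFC 4291 EUI-64)."""
        # Split the MAC, insert ff:fe, flip the universal/local bit,
        # then OR the 64-bit result into the advertised prefix.
        octets = bytes.fromhex(mac.replace(":", ""))
        iid = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

    # e.g. 2001:db8:1:0:211:22ff:fe33:4455
    print(slaac_eui64("2001:db8:1::/64", "00:11:22:33:44:55"))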

> If there are some weird requirements from others, put them into layer up.
> L3 needs to be simple (KISS concept), so its easy to implement and less
> prone to bugs.
> 
> And that IPv6 I would love to see and addapt right away :)

Well.. Present your working prototype on at least two different systems. ;-)

Owen

> 
> 
> -- Original message --
> 
> From: Joe Maimon 
> To: Owen DeLong , Bjørn Mork 
> Cc: nanog@nanog.org
> Subject: Re: IPv6 woes - RFC
> Date: Thu, 23 Sep 2021 16:26:17 -0400
> 
> 
> 
> Owen DeLong via NANOG wrote:
>>> There are real issues with dual-stack, as this thread started out with.
>>> I don't think there is a need neither to invent IPv6 problems, nor to
>>> promote IPv6 advantages.  What we need is a way out of dual-stack-hell.
>> I dont disagree, but a reversion to IPv4-only certainly wont do it.
> 
> For everyone who does have enough IPv4 addresses, it does. This is the problem
> in a nutshell. If that starts trending, IPv6 is done.
> 
>> I think the only way out is through.
> 
> I hope not, both for IPv6 sake and for the network users. We dont know how 
> much
> longer the goal will take, there is materializing a real possibility we will
> never quite reach it, and the potholes on the way are pretty rough.
> 
> And as the trip winds on, the landscape is changing, not necessarily for the
> better.
> 
> One more "any decade now" and another IPv4 replacement/extension might just
> happen on the scene and catch on, rendering IPv6 the most wasteful global
> technical debacle to date.
> 
> 
>>  Unfortunately, the IPv6 resistant forces
>> are making that hard for everyone else.
>> 
>> Owen
> 
> You say that as if it was a surprise, when it should not have been, and you 
> say
> that as if something can be done about it, which we should know by now cannot 
> be
> the primary focus, since it cannot be done in any timely fashion. If at all.
> 
> Its time to throw mud on the wall and see what sticks. Dual stack and wait is 
> an
> ongoing failure slouching to disaster.
> 
> Joe
> 
> 
> 
> 



Re: 100GbE beyond 40km

2021-09-24 Thread Steven Karp
If you can’t wait for Juniper to release their supported QSFP28 100G-ZR optic, 
shop for third party 100G-ZR optics.  I know many networks are already using 
third party QSFP28 100G-ZR optics in Juniper routers.  I have one 80 km span 
between two MX204 routers using third party 100G-ZR optics with no issues.   As 
long as the optic doesn’t draw too much power or generate too much heat it 
should be good.


From: NANOG  on behalf of 
Joe Freeman 
Date: Friday, September 24, 2021 at 4:30 PM
To: Randy Carpenter 
Cc: nanog 
Subject: Re: 100GbE beyond 40km

Open Line Systems can get you to 80K with a 100G DWDM Optic (PAM4) -

I've used a lot of SmartOptics DCP-M40 shelves for this purpose. They also have 
transponders that allow you to go from a QSFP28 to CFP to do coherent 100G out 
to 120Km using the DCP-M40, without a need for regen or extra amps in line.

The DCP-M40 is a 1RU box. It looks like a deep 40ch DWDM filter but includes a 
VOA, EDFA amp, and a WSS I think.

On Fri, Sep 24, 2021 at 4:40 PM Randy Carpenter  wrote:

How is everyone accomplishing 100GbE at farther than 40km distances?

Juniper is saying it can't be done with anything they offer, except for a 
single CFP-based line card that is EOL.

There are QSFP "ZR" modules from third parties, but I am hesitant to try those 
without there being an equivalent official part.


The application is an ISP upgrading from Nx10G, where one of their fiber paths 
is ~35km and the other is ~60km.



thanks,
-Randy


Re: Rack rails on network equipment

2021-09-24 Thread Martin Hannigan
On Fri, Sep 24, 2021 at 1:34 PM Jay Hennigan  wrote:

> On 9/24/21 09:37, Andrey Khomyakov wrote:
>
> > *So ultimately my question to you all is how much do you care about the
> > speed of racking and unracking equipment and do you tell your suppliers
> > that you care? How much does the time it takes to install or replace a
> > switch impact you?*
>
> Very little. I don't even consider it when comparing hardware. It's a
> nice-to-have but not a factor in purchasing.
>
> You mention a 25-minute difference between racking a no-tools rail kit
> and one that requires a screwdriver. At any reasonable hourly rate for
> someone to rack and stack that is a very small percentage of the cost of
> the hardware. If a device that takes half an hour to rack is $50 cheaper
> than one that has the same specs and takes five minutes, you're past
> break-even to go with the cheaper one.
>

This. Once they're racked, they're not going anywhere. I would summarize it
as: they're certainly nice, but more of a nice-to-have. The only racking
systems I try to avoid are the WECO (Western Electric COmpany) standard.
The square "holes".

Warm regards,

-M<


Re: 100GbE beyond 40km

2021-09-24 Thread Lady Benjamin Cannon of Glencoe, ASCE
Above 40km I like coherent systems with FEC. You can feed the Juniper into a 
pair of SolidOptics 1U appliances.

Ms. Lady Benjamin PD Cannon of Glencoe, ASCE
6x7 Networks & 6x7 Telecom, LLC 
CEO 
l...@6by7.net
"The only fully end-to-end encrypted global telecommunications company in the 
world.”

FCC License KJ6FJJ

Sent from my iPhone via RFC1149.

> On Sep 24, 2021, at 2:35 PM, Edwin Mallette  wrote:
> 
> I just bite the bullet and use 3rd party optics.  It’s easier and once  you 
> make the switch, lower cost. 
> 
> Ed
> 
> Sent from my iPhone
> 
>>> On Sep 25, 2021, at 12:29 AM, Joe Freeman  wrote:
>>> 
>> 
>> Open Line Systems can get you to 80K with a 100G DWDM Optic (PAM4) -
>> 
>> I've used a lot of SmartOptics DCP-M40 shelves for this purpose. They also 
>> have transponders that allow you to go from a QSFP28 to CFP to do coherent 
>> 100G out to 120Km using the DCP-M40, without a need for regen or extra amps 
>> in line.
>> 
>> The DCP-M40 is a 1RU box. It looks like a deep 40ch DWDM filter but includes 
>> a VOA, EDFA amp, and a WSS I think. 
>> 
>>> On Fri, Sep 24, 2021 at 4:40 PM Randy Carpenter  
>>> wrote:
>>> 
>>> How is everyone accomplishing 100GbE at farther than 40km distances?
>>> 
>>> Juniper is saying it can't be done with anything they offer, except for a 
>>> single CFP-based line card that is EOL.
>>> 
>>> There are QSFP "ZR" modules from third parties, but I am hesitant to try 
>>> those without there being an equivalent official part.
>>> 
>>> 
>>> The application is an ISP upgrading from Nx10G, where one of their fiber 
>>> paths is ~35km and the other is ~60km.
>>> 
>>> 
>>> 
>>> thanks,
>>> -Randy


Re: Rack rails on network equipment

2021-09-24 Thread Chris Adams
Once upon a time, Niels Bakker  said:
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
> >Which - why do I have to order different part numbers for back to
> >front airflow?  It's just a fan, can't it be made reversible?
> >Seems like that would be cheaper than stocking alternate part
> >numbers.
> 
> The fan is inside the power supply right next to the high-voltage
> capacitors. You shouldn't be near that without proper training.

I wasn't talking about opening up the case, although lots of fans are
themselves hot-swappable, so it should be possible to do without opening
anything.  They are just DC motors though, so it seems like a fan could
be built to reverse (although maybe the blade characteristics don't work
as well in the opposite direction).

-- 
Chris Adams 


Re: Rack rails on network equipment

2021-09-24 Thread William Herrin
On Fri, Sep 24, 2021 at 3:36 PM Niels Bakker  wrote:
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
> >Which - why do I have to order different part numbers for back to
> >front airflow?  It's just a fan, can't it be made reversible?  Seems
> >like that would be cheaper than stocking alternate part numbers.
>
> The fan is inside the power supply right next to the high-voltage
> capacitors. You shouldn't be near that without proper training.

Last rack switch I bought, no fan was integrated into the power
supply. Instead, a blower module elsewhere forced air past the various
components including the power supply. Efficient power supplies (which
you really should be using in 24/7 data centers) don't even generate
all that much heat.

Regards,
Bill Herrin




-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Rack rails on network equipment

2021-09-24 Thread Niels Bakker

* c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
Which - why do I have to order different part numbers for back to 
front airflow?  It's just a fan, can't it be made reversible?  Seems 
like that would be cheaper than stocking alternate part numbers.


The fan is inside the power supply right next to the high-voltage 
capacitors. You shouldn't be near that without proper training.



-- Niels.


Re: Rack rails on network equipment

2021-09-24 Thread Chris Adams
Once upon a time, William Herrin  said:
> I care, but it bothers me less that the inconsiderate air flow
> implemented in quite a bit of network gear. Side cooling? Pulling air
> from the side you know will be facing the hot aisle? Seriously, the
> physical build of network equipment is not entirely competent.

Which - why do I have to order different part numbers for back to front
airflow?  It's just a fan, can't it be made reversible?  Seems like that
would be cheaper than stocking alternate part numbers.
-- 
Chris Adams 


Re: 100GbE beyond 40km

2021-09-24 Thread Edwin Mallette
I just bite the bullet and use 3rd party optics.  It’s easier and once  you 
make the switch, lower cost. 

Ed

Sent from my iPhone

> On Sep 25, 2021, at 12:29 AM, Joe Freeman  wrote:
> 
> 
> Open Line Systems can get you to 80K with a 100G DWDM Optic (PAM4) -
> 
> I've used a lot of SmartOptics DCP-M40 shelves for this purpose. They also 
> have transponders that allow you to go from a QSFP28 to CFP to do coherent 
> 100G out to 120Km using the DCP-M40, without a need for regen or extra amps 
> in line.
> 
> The DCP-M40 is a 1RU box. It looks like a deep 40ch DWDM filter but includes 
> a VOA, EDFA amp, and a WSS I think. 
> 
>> On Fri, Sep 24, 2021 at 4:40 PM Randy Carpenter  wrote:
>> 
>> How is everyone accomplishing 100GbE at farther than 40km distances?
>> 
>> Juniper is saying it can't be done with anything they offer, except for a 
>> single CFP-based line card that is EOL.
>> 
>> There are QSFP "ZR" modules from third parties, but I am hesitant to try 
>> those without there being an equivalent official part.
>> 
>> 
>> The application is an ISP upgrading from Nx10G, where one of their fiber 
>> paths is ~35km and the other is ~60km.
>> 
>> 
>> 
>> thanks,
>> -Randy


Re: 100GbE beyond 40km

2021-09-24 Thread Joe Freeman
Open Line Systems can get you to 80K with a 100G DWDM Optic (PAM4) -

I've used a lot of SmartOptics DCP-M40 shelves for this purpose. They also
have transponders that allow you to go from a QSFP28 to CFP to do coherent
100G out to 120Km using the DCP-M40, without a need for regen or extra amps
in line.

The DCP-M40 is a 1RU box. It looks like a deep 40ch DWDM filter but
includes a VOA, EDFA amp, and a WSS I think.

On Fri, Sep 24, 2021 at 4:40 PM Randy Carpenter 
wrote:

>
> How is everyone accomplishing 100GbE at farther than 40km distances?
>
> Juniper is saying it can't be done with anything they offer, except for a
> single CFP-based line card that is EOL.
>
> There are QSFP "ZR" modules from third parties, but I am hesitant to try
> those without there being an equivalent official part.
>
>
> The application is an ISP upgrading from Nx10G, where one of their fiber
> paths is ~35km and the other is ~60km.
>
>
>
> thanks,
> -Randy
>


Re: AS6461 issues in Montreal

2021-09-24 Thread Martin Cook
Same here: I'm showing that around 3:15 - 3:20 Eastern time our traffic doubled out 
of Montreal, back to somewhat normal levels. No notifications at all from Zayo.

-- Original Message --
From: "Pascal Larivee" 
mailto:pascal.lari...@gmail.com>>
To: "nanog@nanog.org" 
mailto:nanog@nanog.org>>
Sent: 9/24/2021 3:51:36 PM
Subject: Re: AS6461 issues in Montreal


Just saw the total go back to 800+K now and we are picking up more traffic. No 
updates from Zayo support.

On Fri, Sep 24, 2021 at 2:03 PM Pascal Larivee  wrote:
Yes, saw the same thing this morning, They dropped half the internet.
No reply from them on our support ticket.

--
Pascal Larivée


--
Pascal Larivée


Re: 100GbE beyond 40km

2021-09-24 Thread Bill Blackford
Does this have to be Ethernet? You could look into line gear with coherent
optics. IIRC, they have built-in chromatic dispersion compensation, and
depending on the card, would include amplification.

On Fri, Sep 24, 2021 at 1:40 PM Randy Carpenter 
wrote:

>
> How is everyone accomplishing 100GbE at farther than 40km distances?
>
> Juniper is saying it can't be done with anything they offer, except for a
> single CFP-based line card that is EOL.
>
> There are QSFP "ZR" modules from third parties, but I am hesitant to try
> those without there being an equivalent official part.
>
>
> The application is an ISP upgrading from Nx10G, where one of their fiber
> paths is ~35km and the other is ~60km.
>
>
>
> thanks,
> -Randy
>


-- 
Bill Blackford

Logged into reality and abusing my sudo privileges.


Re: 100GbE beyond 40km

2021-09-24 Thread Eric Litvin
There’s an eER4 that can do 60km

Sent from my iPhone

> On Sep 24, 2021, at 2:00 PM, Mauricio Rodriguez via NANOG  
> wrote:
> 
> 
> Perhaps a small long-haul OTN platform, supporting FEC, front-ending the JNPR 
> gear?
> 
> https://www.fs.com/c/transponder-muxponder-3390
> 
> Best Regards,
> Mauricio Rodriguez
> Founder / Owner
> Fletnet Network Engineering (www.fletnet.com)
> Follow us on LinkedIn
> 
> mauricio.rodrig...@fletnet.com
> Office: +1 786-309-1082
> Direct: +1 786-309-5493
> 
> 
> 
> 
>> On Fri, Sep 24, 2021 at 4:42 PM Randy Carpenter  wrote:
>> 
>> How is everyone accomplishing 100GbE at farther than 40km distances?
>> 
>> Juniper is saying it can't be done with anything they offer, except for a 
>> single CFP-based line card that is EOL.
>> 
>> There are QSFP "ZR" modules from third parties, but I am hesitant to try 
>> those without there being an equivalent official part.
>> 
>> 
>> The application is an ISP upgrading from Nx10G, where one of their fiber 
>> paths is ~35km and the other is ~60km.
>> 
>> 
>> 
>> thanks,
>> -Randy
> 


Re: 100GbE beyond 40km

2021-09-24 Thread Tarko Tikan

hey,


How is everyone accomplishing 100GbE at farther than 40km distances?


See previous thread 
https://www.mail-archive.com/nanog@nanog.org/msg109955.html



--
tarko


Re: IPv6 woes - RFC

2021-09-24 Thread Joe Maimon




b...@uu3.net wrote:

Well, I see IPv6 as double failure really. First, IPv6 itself is too
different from IPv4. What Internet wanted is IPv4+ (aka IPv4 with
bigger address space, likely 64bit). Of course we could not extend IPv4,
so having new protocol is fine.
IPv4 was extendable, with header options as one concept that was shot 
down in favor of a new protocol.


If it was just an incremental IPv4 upgrade, then we would have been 
there already, and you could be using your extended IPv4 addresses to 
communicate with any gear over any network gear that had been upgraded 
in the past decade or two.


It's just that the internet was supposed to be able to deploy a new protocol 
in the same or less time. Which didn't happen.


Joe






Re: AS6461 issues in Montreal

2021-09-24 Thread Christopher Munz-Michielin
For what it's worth, my company has a Beanfield circuit in Toronto that 
was heavily disrupted this morning and was also blamed on a fiber cut.


Chris

On 24/09/2021 13:12, None None wrote:
Zayo explained they couldn’t access their PE which I thought was odd 
since my box was still seeing 160k v4 routes since the outage started


On Fri, Sep 24, 2021 at 4:10 PM Eric Dugas via NANOG  wrote:


Traffic resumed about 30 minutes ago. They blamed a fiber cut but
the fiber cut is still ongoing between Ottawa and Kingston. Not
sure how you can blame losing half of the Internet when you
lose half of your connectivity... Montreal is connected to Toronto
and NYC.


Eric

On Fri, Sep 24, 2021 at 3:07 PM Pascal Larivee  wrote:

Yes, saw the same thing this morning, They dropped half the
internet.
No reply from them on our support ticket.

-- 
Pascal Larivée




Re: 100GbE beyond 40km

2021-09-24 Thread Mauricio Rodriguez via NANOG
Perhaps a small long-haul OTN platform, supporting FEC, front-ending the
JNPR gear?

https://www.fs.com/c/transponder-muxponder-3390

Best Regards,

Mauricio Rodriguez

Founder / Owner

Fletnet Network Engineering (www.fletnet.com)
*Follow us* on LinkedIn 

mauricio.rodrig...@fletnet.com

Office: +1 786-309-1082

Direct: +1 786-309-5493



On Fri, Sep 24, 2021 at 4:42 PM Randy Carpenter 
wrote:

>
> How is everyone accomplishing 100GbE at farther than 40km distances?
>
> Juniper is saying it can't be done with anything they offer, except for a
> single CFP-based line card that is EOL.
>
> There are QSFP "ZR" modules from third parties, but I am hesitant to try
> those without there being an equivalent official part.
>
>
> The application is an ISP upgrading from Nx10G, where one of their fiber
> paths is ~35km and the other is ~60km.
>
>
>
> thanks,
> -Randy
>



Re: Rack rails on network equipment

2021-09-24 Thread Joe Maimon




Andrey Khomyakov wrote:

Hi folks,
Happy Friday!


Interesting tidbit is that we actually used to manufacture custom 
rails for our Juniper EX4500 switches so the switch can be actually 
inserted from the back of the rack (you know, where most of your 
server ports are...) and not be blocked by the zero-U PDUs and all the 
cabling in the rack. Stock rails didn't work at all for us unless we 
used wider racks, which then, in turn, reduced floor capacity.



Inserting switches into the back of the rack, where it's nice and hot, 
usually suggests having reverse-airflow hardware. Usually not stock.


Also, since it's then sucking in hot air (from the midpoint of the cab or 
so), it is still hotter than having it up front, or leaving the U open 
in front.


On the other hand, most switches are quite fine running much hotter than 
servers with their hard drives and overclocked CPUs. Or perhaps that's 
why you keep changing them.


Personally I prefer pre-wiring front-to-back with patch panels in the 
back. Works for fiber and copper RJ, not so much all-in-one cables.


Joe



Re: 100GbE beyond 40km

2021-09-24 Thread Dan Murphy
Look into EDFA amplifier systems. They work over a range of bands and are
pretty affordable.

They can carry regular 1310nm as well as C-Band. Although if you are
carrying C-band you may have to compensate for chromatic dispersion.

Happy to help, I love this stuff!

On Fri, Sep 24, 2021 at 4:39 PM Randy Carpenter 
wrote:

>
> How is everyone accomplishing 100GbE at farther than 40km distances?
>
> Juniper is saying it can't be done with anything they offer, except for a
> single CFP-based line card that is EOL.
>
> There are QSFP "ZR" modules from third parties, but I am hesitant to try
> those without there being an equivalent official part.
>
>
> The application is an ISP upgrading from Nx10G, where one of their fiber
> paths is ~35km and the other is ~60km.
>
>
>
> thanks,
> -Randy
>


-- 
Daniel Murphy
Senior Data Center Engineer
(646) 698-8018


Re: AS6461 issues in Montreal

2021-09-24 Thread Pascal Larivee
Just saw the total go back to 800+K now and we are picking up more traffic.
No updates from Zayo support.

On Fri, Sep 24, 2021 at 2:03 PM Pascal Larivee 
wrote:

> Yes, saw the same thing this morning, They dropped half the internet.
> No reply from them on our support ticket.
>
> --
> Pascal Larivée
>


-- 
Pascal Larivée


100GbE beyond 40km

2021-09-24 Thread Randy Carpenter


How is everyone accomplishing 100GbE at farther than 40km distances?

Juniper is saying it can't be done with anything they offer, except for a 
single CFP-based line card that is EOL.

There are QSFP "ZR" modules from third parties, but I am hesitant to try those 
without there being an equivalent official part.


The application is an ISP upgrading from Nx10G, where one of their fiber paths 
is ~35km and the other is ~60km.



thanks,
-Randy


Re: IPv6 woes - RFC

2021-09-24 Thread Grant Taylor via NANOG

On 9/24/21 11:53 AM, b...@uu3.net wrote:

Well, I see IPv6 as double failure really.


I still feel like you are combining / conflating two distinct issues 
into one generalization.



First, IPv6 itself is too different from IPv4.


Is it?  Is it really?  Is the delta between IPv4 and IPv6 greater than 
the delta between IPv4 and IPX?


If anything, I think the delta between IPv4 and IPv6 is too small. 
Small enough that both IPv4 and IPv6 get treated as one protocol, and 
thus there is a lot of friction between the multiple personalities therein.  I 
also think that the grouping of IPv4 and IPv6 as one protocol is part of 
the downfall.


Moreover, if you think of IPv4 and IPv6 dual stack as analogous to the 
multi-protocol networks of the '90s, and treat them as disparate 
protocols that serve similar purposes in (completely) different ways, a 
lot of the friction seems to make sense and as such becomes less 
friction through understanding and having reasonable expectations for 
the disparate protocols.


What Internet wanted is IPv4+ (aka IPv4 with bigger address space, 
likely 64bit). Of course we could not extend IPv4, so having 
new protocol is fine.


I don't think you truly mean that having a new protocol is fine. 
Because if you did, I think you would treat IPv6 as a completely 
different protocol from IPv4.  E.g. AppleTalk vs DECnet.  After all, we 
effectively do have a new protocol; IPv6.


IPv6 is as similar to IPv4 as Windows 2000 is similar to Windows 98.  Or 
"different" in place of "similar".


It should just fix problem (do we have other problems I am not aware 
of with IPv4?) of address space and thats it.  Im happy with IPv4, 
after 30+ years of usage we pretty much fixed all problems we had.


I disagree.

The second failure is adoption. Even if my IPv6 hate is not rational, 
adoption of IPv6 is crap. If adoption would be much better, more IPv4 
could be used for legacy networks ;) So stuborn guys like me could 
be happy too ;)


I blame the industry, not the IPv6 protocol, for the lackluster adoption 
of IPv6.



As for details, that list is just my dream IPv6 protocol ;)

But lets talk about details:
- Loopback on IPv6 is ::1/128
   I have setups where I need more addresses there that are local only.
   Yeah I know, we can put extra aliases on interfaces etc.. but its extra
   work and not w/o problems


How does IPv6 differ from IPv4 in this context?


- IPv6 Link Local is forced.
   I mean, its always on interface, nevermind you assign static IP.
   LL is still there and gets in the way (OSPFv3... hell yeah)


I agree that IPv6 addresses seem to accumulate on interfaces like IoT 
devices do on a network.  But I don't see a technical problem with this 
in and of itself.  --  I can't speak to OSPFv3 issues.



- ULA space, well.. its like RFC1918 but there are some issues with it
   (or at least was? maybe its fixed) like source IP selection on with
   multiple addresses.


I consider this to be implementation issues and not a problem with the 
protocol itself.



- Neighbor Discovery protocol... quite a bit problems it created.


Please elaborate.


   What was wrong w/ good old ARP? I tought we fixed all those problems
   already like ARP poisoning via port security.. etc


The apparent need to "fix / address / respond to" a protocol problem at 
a lower layer seems like a problem to me.



- NAT is there in IPv6 so no futher comments
- DHCP start to get working on IPv6.. but it still pain sometimes


What problems do you have with DHCP for IPv6?  I've been using it for 
the better part of a decade without any known problems.  What pain are 
you experiencing?



And biggest problem, interop w/ IPv4 was completly failure.


I agree that the interoperability between IPv4 and IPv6 is the tall pole 
in the tent.  But I also believe that's to be expected when trying to 
interoperate disparate protocols.


From ground zero, I would expect that disparate protocols can't 
interoperate without external support, some of which requires explicit 
configuration.


Currently we have best Internet to migrate to new protocol. 
Why?


The primary motivation -- as I understand it -- is the lack of unique IP 
addresses.


Because how internet become centralized. Eyeball networks just 
want to reach content. E2E communication is not that much needed. 
We have games and enhusiast, but those can pay extra for public IPv4. 
Or get VPN/VPS.


Now you are talking about two classes of Internet connectivity:

1)  First class participation where an endpoint /is/ /on/ the Internet 
with a globally routed IP.
2)  Second class participation where an endpoint /has/ /access/ /to/ the 
Internet via a non-globally routed IP.


There may be some merit to multiple classes of Internet connectivity. 
But I think it should be dealt with openly and above board as such.


And end comment. I do NOT want to start some kind of flame war here. 
Yeah I know, Im biased toward IPv4.


I don't view honest and good spirited discussion of fa

Re: Rack rails on network equipment

2021-09-24 Thread George Herbert
I’ve seen Dell rack equipment leap for safety (ultimately very very 
unsuccessfully…) in big earthquakes.  Lots of rack screws for me.

-George 

Sent from my iPhone

> On Sep 24, 2021, at 9:41 AM, Andrey Khomyakov  
> wrote:
> 
> 
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
> Juniper switching products out of our data centers (reasons for that are not 
> quite relevant to the topic). We selected Dell switches in part due to Dell 
> using "quick rails'' (sometimes known as speed rails or toolless rails).  
> This is where both the switch side rail and the rack side rail just snap in, 
> thus not requiring a screwdriver and hands of the size no bigger than a 
> hamster paw to hold those stupid proprietary screws (lookin at your, cisco) 
> to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network 
> equipment racking pov) to maybe 1hr... (we estimated that on average it took 
> us 30 min to rack a switch from cut open the box with Juniper switches to 5 
> min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so it's 
> a bit of a hardware lock-in from that point of view. 
> 
> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?
> 
> I was having a conversation with a vendor and was pushing hard on the fact 
> that their switches will end up being actually costlier for me long term just 
> because my switch replacement time quadruples at least, thus requiring me to 
> staff more remote hands. Am I overthinking this and artificially limiting 
> myself by excluding vendors who don't ship with toolless rails (which is all 
> of them now except Dell)?
> 
> Thanks for your time in advance!
> --Andrey


Re: AS6461 issues in Montreal

2021-09-24 Thread None None
Zayo explained they couldn’t access their PE which I thought was odd since
my box was still seeing 160k v4 routes since the outage started

On Fri, Sep 24, 2021 at 4:10 PM Eric Dugas via NANOG 
wrote:

> Traffic resumed about 30 minutes ago. They blamed a fiber cut but the
> fiber cut is still ongoing between Ottawa and Kingston. Not sure how you
> can blame loosing half of the Internet when you lose half of your
> connectivity... Montreal is connected to Toronto and NYC.
>
>
> Eric
>
> On Fri, Sep 24, 2021 at 3:07 PM Pascal Larivee 
> wrote:
>
>> Yes, saw the same thing this morning, They dropped half the internet.
>> No reply from them on our support ticket.
>>
>> --
>> Pascal Larivée
>>
>


Re: AS6461 issues in Montreal

2021-09-24 Thread Eric Dugas via NANOG
Traffic resumed about 30 minutes ago. They blamed a fiber cut but the fiber
cut is still ongoing between Ottawa and Kingston. Not sure how you can
blame losing half of the Internet when you lose half of your
connectivity... Montreal is connected to Toronto and NYC.

Eric

On Fri, Sep 24, 2021 at 3:07 PM Pascal Larivee 
wrote:

> Yes, saw the same thing this morning, They dropped half the internet.
> No reply from them on our support ticket.
>
> --
> Pascal Larivée
>


Re: Rack rails on network equipment

2021-09-24 Thread Joe Greco
On Fri, Sep 24, 2021 at 02:49:53PM -0500, Doug McIntyre wrote:
> You mention about hardware lockin, but I wouldn't trust Dell to not switch
> out the design on their "next-gen" product, when they buy from a
> different OEM, as they are wont to do, changing from OEM to OEM for
> each new product line. At least that is their past behavior over many years 
> in the past that I've been buying Dell switches for simple things. 
> Perhaps they've changed their tune. 

That sounds very much like their 2000's-era behaviour when they were
sourcing 5324's from Accton, etc.  Dell has more recently acquired
switch companies such as Force10 and it seems like they have been
doing more in-house stuff this last decade.  There has been somewhat
better stability in the product line IMHO.

> For me, it really doesn't take all that much time to mount cage nuts
> and screw a switch into a rack. Its all pretty 2nd nature to me, look
> at holes to see the pattern, snap in all my cage nuts all at once and
> go. If you are talking rows of racks of build, it should be 2nd nature?

The quick rails on some of their new gear is quite nice, but the best
part of having rails is having the support on the back end.

> Also, I hate 0U power, for that very reason, there's never room to
> move devices in and out of the rack if you do rear-mount networking.

Very true.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: Rack rails on network equipment

2021-09-24 Thread Doug McIntyre
On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
>  We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails). 

Hmm, I haven't had any of those on any of my Dell switches, but then
again, I haven't bought any in a while. 

You mention hardware lock-in, but I wouldn't trust Dell not to switch
out the design on their "next-gen" product when they buy from a
different OEM, as they are wont to do, changing from OEM to OEM for
each new product line. At least that is their past behavior over the many years 
that I've been buying Dell switches for simple things. 
Perhaps they've changed their tune. 

For me, it really doesn't take all that much time to mount cage nuts
and screw a switch into a rack. It's all pretty 2nd nature to me: look
at the holes to see the pattern, snap in all my cage nuts at once, and
go. If you are talking about building rows of racks, it should be 2nd nature?

Also, I hate 0U power, for that very reason, there's never room to
move devices in and out of the rack if you do rear-mount networking.


Re: Rack rails on network equipment

2021-09-24 Thread Randy Carpenter


Considering that the typical $5 pieces of bent metal list for ~$500 from most 
vendors, can you imagine the price of fancy tool-less rack kits?

Brand new switch: $2,000
Rack kit: $2,000


-Randy


RE: Rack rails on network equipment

2021-09-24 Thread Kevin Menzel via NANOG
Hi Andrey:

I work in higher education; we have hundreds upon hundreds of switches in at 
least a hundred network closets, as well as multiple datacenters, etc. We do a 
full lease refresh every 3-5 years of the full environment. The amount of time 
it takes me to get a switch out of a box/racked is minimal compared to the 
amount of time it takes for the thing to power on. (In that it usually takes 
about 3 minutes, potentially less, depending on my rhythm). Patching a full 48 
ports (correctly) takes longer than racking. Maybe that’s because I have far 
too much practice doing this at this point.

If there’s one time waste in switch install, from my perspective, it’s how long 
it takes the things to boot up. When I’m installing the switch it’s a minor 
inconvenience. When something reboots (or when something needs to be reloaded 
to fix a bug – glares at the Catalyst switches in my life) in the middle of the 
day, it’s 7-10 minutes of outage for connected operational hosts, which is… a 
much bigger pain.

So long story short, install time is a near-zero care in my world.

That being said, especially when I deal with 2 post rack gear – the amount of 
sag over time I’m expected to be OK with in any given racking solution DOES 
somewhat matter to me. (glares again at the Catalyst switches in my life). 
Would I like good, solid, well manufactured ears and/or rails that don’t change 
for no reason between equipment revisions? Heck yes.

--Kevin


From: NANOG  On Behalf 
Of Andrey Khomyakov
Sent: September 24, 2021 12:38
To: Nanog 
Subject: Rack rails on network equipment


Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
Juniper switching products out of our data centers (reasons for that are not 
quite relevant to the topic). We selected Dell switches in part due to Dell 
using "quick rails'' (sometimes known as speed rails or toolless rails).  This 
is where both the switch side rail and the rack side rail just snap in, thus 
not requiring a screwdriver and hands of the size no bigger than a hamster paw 
to hold those stupid proprietary screws (lookin at your, cisco) to attach those 
rails.
We went from taking 16hrs to build a row of compute (from just network 
equipment racking pov) to maybe 1hr... (we estimated that on average it took us 
30 min to rack a switch from cut open the box with Juniper switches to 5 min 
with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for our 
Juniper EX4500 switches so the switch can be actually inserted from the back of 
the rack (you know, where most of your server ports are...) and not be blocked 
by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work at 
all for us unless we used wider racks, which then, in turn, reduced floor 
capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so it's a 
bit of a hardware lock-in from that point of view.

So ultimately my question to you all is how much do you care about the speed of 
racking and unracking equipment and do you tell your suppliers that you care? 
How much does the time it takes to install or replace a switch impact you?

I was having a conversation with a vendor and was pushing hard on the fact that 
their switches will end up being actually costlier for me long term just 
because my switch replacement time quadruples at least, thus requiring me to 
staff more remote hands. Am I overthinking this and artificially limiting 
myself by excluding vendors who don't ship with toolless rails (which is all of 
them now except Dell)?

Thanks for your time in advance!
--Andrey


Re: Rack rails on network equipment

2021-09-24 Thread Alain Hebert

    Hi,

    In my opinion:

        That time you take to rack devices with classic rails can be 
viewed as a bonding moment and, while appreciated by the device, will 
reduce downtime issues in the long run that you may have if you just 
rack & slap 'em.


    It is also Friday =D.

-
Alain Hebert    aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443

On 9/24/21 12:56 PM, Grant Taylor via NANOG wrote:

On 9/24/21 10:37 AM, Andrey Khomyakov wrote:
So ultimately my question to you all is how much do you care about 
the speed of racking and unracking equipment and do you tell your 
suppliers that you care? How much does the time it takes to install 
or replace a switch impact you?


I was having a conversation with a vendor and was pushing hard on the 
fact that their switches will end up being actually costlier for me 
long term just because my switch replacement time quadruples at 
least, thus requiring me to staff more remote hands. Am I 
overthinking this and artificially limiting myself by excluding 
vendors who don't ship with toolless rails (which is all of them now 
except Dell)?


My 2¢ opinion / drive by comment while in the break room to get coffee 
and a doughnut is:


Why are you letting -- what I think is -- a relatively small portion 
of the time spent interacting with a device influence the choice of 
the device?


In the grand scheme of things, where will you spend more time 
interacting with the device; racking & unracking or administering the 
device throughout it's life cycle?  I would focus on the larger 
portion of those times.


Sure, automation is getting a lot better.  But I bet that your network 
administrators will spend more than an hour interacting with the 
device over the multiple years that it's in service.  As such, I'd 
give the network administrators more input than the installers racking 
& unracking.  If nothing else, break it down proportionally based on 
time and / or business expense for wages therefor.



Thanks for your time in advance!


The coffee is done brewing and I have a doughnut, so I'll take my 
leave now.


Have a good day ~> weekend.







Re: AS6461 issues in Montreal

2021-09-24 Thread Pascal Larivee
Yes, saw the same thing this morning, They dropped half the internet.
No reply from them on our support ticket.

-- 
Pascal Larivée


Re: Rack rails on network equipment

2021-09-24 Thread richey goldberg
30 minutes to pull a switch from the box, stick ears on it, and mount it in the 
rack seems like a really long time.  I think at tops that portion of it 
is a 5-10 minute job if I unbox it at my desk. I use a drill with the 
correct torque setting and a magnetic bit to put the ears on while it boots on my 
desk so I can drop a base config on it.

If you are replacing defective switches often enough for this to be an issue, 
I think you have bigger issues than this to address.

Like others said, most switches are in the rack for the very long haul, 
often in excess of 5 years.   The amount of time required to do the initial 
install is insignificant in the grand scheme of things.

-richey

From: NANOG  on behalf of 
Andrey Khomyakov 
Date: Friday, September 24, 2021 at 12:38 PM
To: Nanog 
Subject: Rack rails on network equipment
Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
Juniper switching products out of our data centers (reasons for that are not 
quite relevant to the topic). We selected Dell switches in part due to Dell 
using "quick rails'' (sometimes known as speed rails or toolless rails).  This 
is where both the switch side rail and the rack side rail just snap in, thus 
not requiring a screwdriver and hands of the size no bigger than a hamster paw 
to hold those stupid proprietary screws (lookin at your, cisco) to attach those 
rails.
We went from taking 16hrs to build a row of compute (from just network 
equipment racking pov) to maybe 1hr... (we estimated that on average it took us 
30 min to rack a switch from cut open the box with Juniper switches to 5 min 
with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for our 
Juniper EX4500 switches so the switch can be actually inserted from the back of 
the rack (you know, where most of your server ports are...) and not be blocked 
by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work at 
all for us unless we used wider racks, which then, in turn, reduced floor 
capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so it's a 
bit of a hardware lock-in from that point of view.

So ultimately my question to you all is how much do you care about the speed of 
racking and unracking equipment and do you tell your suppliers that you care? 
How much does the time it takes to install or replace a switch impact you?

I was having a conversation with a vendor and was pushing hard on the fact that 
their switches will end up being actually costlier for me long term just 
because my switch replacement time quadruples at least, thus requiring me to 
staff more remote hands. Am I overthinking this and artificially limiting 
myself by excluding vendors who don't ship with toolless rails (which is all of 
them now except Dell)?

Thanks for your time in advance!
--Andrey


Re: Rack rails on network equipment

2021-09-24 Thread Mauricio Rodriguez via NANOG
Andrey, hi.

The speed rails are nice, and are effective in optimizing the time it takes
to rack equipment.  It's pretty much par for the course on servers today
(thank goodness!), and not so much on network equipment.  I suppose the
reasons are what others have mentioned - longevity of service life,
frequency at which network gear is installed, etc.  As well, a typical
server-to-switch ratio, depending on the number of switch ports and
fault-tolerance configuration, could be something like 38:1 in a dense 1U
server install.  So taking a few more minutes on the switch installation
isn't so impactful - taking a few more minutes on each server installation
can really become a problem.
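
A rough back-of-the-envelope sketch of that point (the per-server minutes below
are assumptions for illustration; only the 38:1 ratio and the 5 vs. 30 minute
switch times come from this thread):

    servers_per_switch = 38
    server_toolless, server_screw = 5, 10    # assumed minutes per 1U server
    switch_toolless, switch_screw = 5, 30    # minutes per switch, per the thread

    toolless = servers_per_switch * server_toolless + switch_toolless   # 195 min
    screw_in = servers_per_switch * server_screw + switch_screw         # 410 min
    print(toolless, screw_in)
    print((switch_screw - switch_toolless) / (screw_in - toolless))     # ~0.12

In other words, under these assumptions the switch accounts for only about 12%
of the cabinet-level difference; the servers dominate.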

A 30-minute time to install a regular 1U ToR switch seems a bit excessive.
Maybe the very first time a tech installs any specific model switch with a
unique rail configuration.  After that one, it should be around 10 minutes
for most situations.  I am assuming some level of teamwork where there is
an installer at the front of the cabinet and another at the rear, and they
work in tandem to install cage nuts, install front/rear rails (depending on
switch), position the equipment, and affix to the cabinet.  I can see the
30 minutes if you have one person, it's a larger/heavier device (like the
EX4500) and the installer is forced to do some kind of crazy balancing act
with the switch (not recommended), or has to use a server lift to install
it.

Those speed rails as well are a bit of a challenge to install if it's not a
team effort. So, I'm wondering if in addition to using speed rails, you may
have changed from a one-tech installation process to a two-tech team
installation process?

Best Regards,

Mauricio Rodriguez

Founder / Owner

Fletnet Network Engineering (www.fletnet.com)
*Follow us* on LinkedIn 

mauricio.rodrig...@fletnet.com

Office: +1 786-309-1082

Direct: +1 786-309-5493



On Fri, Sep 24, 2021 at 12:41 PM Andrey Khomyakov <
khomyakov.and...@gmail.com> wrote:

> Hi folks,
> Happy Friday!
>
> Would you, please, share your thoughts on the following matter?
>
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails).  This is where both the switch side rail and the rack side rail
> just snap in, thus not requiring a screwdriver and hands of the size no
> bigger than a hamster paw to hold those stupid proprietary screws (lookin
> at your, cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network
> equipment racking pov) to maybe 1hr... (we estimated that on average it
> took us 30 min to rack a switch from cut open the box with Juniper switches
> to 5 min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails
> for our Juniper EX4500 switches so the switch can be actually inserted from
> the back of the rack (you know, where most of your server ports are...) and
> not be blocked by the zero-U PDUs and all the cabling in the rack. Stock
> rails didn't work at all for us unless we used wider racks, which then, in
> turn, reduced floor capacity.
>
> As far as I know, Dell is the only switch vendor doing toolless rails so
> it's a bit of a hardware lock-in from that point of view.
>
> *So ultimately my question to you all is how much do you care about the
> speed of racking and unracking equipment and do you tell your suppliers
> that you care? How much does the time it takes to install or replace a
> switch impact you?*
>
> I was having a conversation with a vendor and was pushing hard on the fact
> that their switches will end up being actually costlier for me long term
> just because my switch replacement time quadruples at least, thus requiring
> me to staff more remote hands. Am I overthinking this and artificially
> limiting myself by excluding vendors who don't ship with toolless rails
> (which is all of them now except Dell)?
>
> Thanks for your time in advance!
> --Andrey
>

-- 
This message (and any associated files) may contain confidential and/or 
privileged information. If you are not the intended recipient or authorized 
to receive this for the intended recipient, you must not use, copy, 
disclose or take any action based on this message or any information 
herein. If you have received this message in error, please advise the 
sender immediately by sending a reply e-mail and delete this message. Thank 
you for your cooperation.


Re: Rack rails on network equipment

2021-09-24 Thread William Herrin
On Fri, Sep 24, 2021 at 9:39 AM Andrey Khomyakov
 wrote:
> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.

Hi Andrey,

If your power cable management horizontally blocks the rack ears,
you're doing it wrong. The vendor could and should be making life
easier but you're still doing it wrong. If you don't want to leave
room for zero-U PDUs, don't use them. And point the outlets towards
the rear of the cabinet, not the center, so that installation of the
cables doesn't block repair.


> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?

I care, but it bothers me less than the inconsiderate air flow
implemented in quite a bit of network gear. Side cooling? Pulling air
from the side you know will be facing the hot aisle? Seriously, the
physical build of network equipment is not entirely competent.

Regards,
Bill Herrin



-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: IPv6 woes - RFC

2021-09-24 Thread Michael Thomas



On 9/24/21 10:53 AM, b...@uu3.net wrote:

Well, I see IPv6 as a double failure, really. First, IPv6 itself is too
different from IPv4. What the Internet wanted was IPv4+ (aka IPv4 with a
bigger address space, likely 64-bit). Of course we could not extend IPv4,
so having a new protocol is fine. It should just fix the address space
problem (do we have other IPv4 problems I am not aware of?) and that's it.
I'm happy with IPv4; after 30+ years of usage we have pretty much fixed all
the problems we had.

But that is what IPv6 delivers -- a 64-bit routing prefix. Am I to take 
it that a whopping 16 bytes of addressing information breaks the 
Internet? And all of the second-system-syndrome stuff was always 
separable, just like any other IETF protocol: you implement what is 
needed and ignore all of the rest -- there is no IETF police after all.


I can understand the sound and fury when people were trying to make this 
work on 56k modems, but with speeds well over 1G it seems sort of archaic.


Mike




[afnog] Weekly Global IPv4 Routing Table Report

2021-09-24 Thread Routing Analysis Role Account
This is an automated weekly mailing describing the state of the Internet
Global IPv4 Routing Table as seen from APNIC's router in Japan.

The posting is sent to APOPS, NANOG, AfNOG, SANOG, PacNOG, SAFNOG
TZNOG, MENOG, BJNOG, SDNOG, CMNOG, LACNOG and the RIPE Routing WG.

Daily listings are sent to bgp-st...@lists.apnic.net

For historical data, please see http://thyme.rand.apnic.net.

If you have any comments please contact Philip Smith .

Global IPv4 Routing Table Report   04:00 +10GMT Sat 25 Sep, 2021

Report Website: http://thyme.rand.apnic.net
Detailed Analysis:  http://thyme.rand.apnic.net/current/

Analysis Summary


BGP routing table entries examined:  864175
Prefixes after maximum aggregation (per Origin AS):  327027
Deaggregation factor:  2.64
Unique aggregates announced (without unneeded subnets):  416114
Total ASes present in the Internet Routing Table: 72001
Prefixes per ASN: 12.00
Origin-only ASes present in the Internet Routing Table:   61879
Origin ASes announcing only one prefix:   25534
Transit ASes present in the Internet Routing Table:   10122
Transit-only ASes present in the Internet Routing Table:341
Average AS path length visible in the Internet Routing Table:   4.3
Max AS path length visible:  42
Max AS path prepend of ASN ( 45609)  34
Prefixes from unregistered ASNs in the Routing Table:   907
Number of instances of unregistered ASNs:   912
Number of 32-bit ASNs allocated by the RIRs:  37320
Number of 32-bit ASNs visible in the Routing Table:   31016
Prefixes from 32-bit ASNs in the Routing Table:  144062
Number of bogon 32-bit ASNs visible in the Routing Table:32
Special use prefixes present in the Routing Table:1
Prefixes being announced from unallocated address space:463
Number of addresses announced to Internet:   3065905408
Equivalent to 182 /8s, 190 /16s and 1 /24s
Percentage of available address space announced:   82.8
Percentage of allocated address space announced:   82.8
Percentage of available address space allocated:  100.0
Percentage of address space in use by end-sites:   99.5
Total number of prefixes smaller than registry allocations:  287414
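
The headline ratios above follow directly from the raw counts; a quick check in
Python, using the values from this summary:

    table_entries   = 864175
    max_aggregated  = 327027
    total_ases      = 72001
    addrs_announced = 3065905408

    print(round(table_entries / max_aggregated, 2))   # 2.64  deaggregation factor
    print(round(table_entries / total_ases, 2))       # 12.0  prefixes per ASN

    # "Equivalent to X /8s, Y /16s and Z /24s"
    eights, rest   = divmod(addrs_announced, 2**24)
    sixteens, rest = divmod(rest, 2**16)
    print(eights, sixteens, rest // 2**8)             # 182 190 1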

APNIC Region Analysis Summary
-

Prefixes being announced by APNIC Region ASes:   231616
Total APNIC prefixes after maximum aggregation:   66252
APNIC Deaggregation factor:3.50
Prefixes being announced from the APNIC address blocks:  227068
Unique aggregates announced from the APNIC address blocks:91891
APNIC Region origin ASes present in the Internet Routing Table:   11991
APNIC Prefixes per ASN:   18.94
APNIC Region origin ASes announcing only one prefix:   3424
APNIC Region transit ASes present in the Internet Routing Table:   1678
Average APNIC Region AS path length visible:4.5
Max APNIC Region AS path length visible: 37
Number of APNIC region 32-bit ASNs visible in the Routing Table:   7144
Number of APNIC addresses announced to Internet:  772824960
Equivalent to 46 /8s, 16 /16s and 95 /24s
APNIC AS Blocks    4608-4864, 7467-7722, 9216-10239, 17408-18431
(pre-ERX allocations)  23552-24575, 37888-38911, 45056-46079, 55296-56319,
   58368-59391, 63488-64098, 64297-64395, 131072-147769
APNIC Address Blocks 1/8,  14/8,  27/8,  36/8,  39/8,  42/8,  43/8,
49/8,  58/8,  59/8,  60/8,  61/8, 101/8, 103/8,
   106/8, 110/8, 111/8, 112/8, 113/8, 114/8, 115/8,
   116/8, 117/8, 118/8, 119/8, 120/8, 121/8, 122/8,
   123/8, 124/8, 125/8, 126/8, 133/8, 150/8, 153/8,
   163/8, 171/8, 175/8, 180/8, 182/8, 183/8, 202/8,
   203/8, 210/8, 211/8, 218/8, 219/8, 220/8, 221/8,
   222/8, 223/8,

ARIN Region Analysis Summary


Prefixes being announced by ARIN Region ASes:252530
Total ARIN prefixes after maximum aggregation:   115618
ARIN Deaggregation factor: 2.18
Prefixes being announced from the ARIN address blocks:   252353
Unique aggregates announced from the ARIN address blocks:120420
ARIN Region origin ASes present in the Internet Routing Table:18901
ARIN Prefixes per ASN:  

Re: Rack rails on network equipment

2021-09-24 Thread Denis Fondras
> You mention a 25-minute difference between racking a no-tools rail kit and
> one that requires a screwdriver. At any reasonable hourly rate for someone
> to rack and stack that is a very small percentage of the cost of the
> hardware. If a device that takes half an hour to rack is $50 cheaper than
> one that has the same specs and takes five minutes, you're past break-even
> to go with the cheaper one.
> 

I can understand the OP if his job is to provide/resell the switch and rack it,
and then someone else (the customer) operates it ;-)

As my fellow netops said, switches stay installed in the racks for a long time
(5+ years). I accept trading installation ease for
performance/features/stability. When I need to replace one, it is never in a
hurry (and cabling properly takes more time than racking).

So easily installed rails may be a plus, but far behind everything else.


Re: IPv6 woes - RFC

2021-09-24 Thread borg
Well, I see IPv6 as a double failure, really. First, IPv6 itself is too
different from IPv4. What the Internet wanted was IPv4+ (aka IPv4 with a
bigger address space, likely 64-bit). Of course we could not extend IPv4,
so having a new protocol is fine. It should just fix the address space
problem (do we have other IPv4 problems I am not aware of?) and that's it.
I'm happy with IPv4; after 30+ years of usage we have pretty much fixed all
the problems we had.

The second failure is adoption. Even if my IPv6 hate is not rational,
adoption of IPv6 is crap. If adoption were much better, more IPv4
could be used for legacy networks ;) So stubborn guys like me could be happy 
too ;)

As for details, that list is just my dream IPv6 protocol ;)
But let's talk about details:
- Loopback on IPv6 is ::1/128
  I have setups where I need more addresses there that are local only.
  Yeah I know, we can put extra aliases on interfaces etc., but it's extra
  work and not without problems
- IPv6 link local is forced.
  I mean, it's always on the interface, no matter whether you assign a static IP.
  LL is still there and gets in the way (OSPFv3... hell yeah)
- ULA space, well... it's like RFC1918, but there are some issues with it
  (or at least there were? maybe it's fixed), like source IP selection with
  multiple addresses.
- Neighbor Discovery protocol... it created quite a few problems.
  What was wrong with good old ARP? I thought we had already fixed all those
  problems, like ARP poisoning, via port security etc.
- NAT is there in IPv6, so no further comments
- DHCP is starting to work on IPv6... but it is still a pain sometimes

And the biggest problem: interop with IPv4 was a complete failure.
Right now we have the best possible Internet on which to migrate to a new
protocol. Why? Because of how centralized the Internet has become. Eyeball
networks just want to reach content. E2E communication is not needed that much.
We have games and enthusiasts, but those can pay extra for public IPv4.
Or get a VPN/VPS.

And an end comment: I do NOT want to start some kind of flame war here.
Yeah I know, I'm biased toward IPv4. If something new pops up, I want it 
to be better than the previous thing (a lot) and easier, or at least the same 
level of complications, but IPv6 just solves one thing and brings a lot of 
complexity.

The fact is, IPv6 failed. There are probably multiple reasons for it.
Will we ever move to IPv6? I don't know. Do I care for now? Nope, IPv4
works for me for now.


-- Original message --

From: Grant Taylor via NANOG 
To: nanog@nanog.org
Subject: Re: IPv6 woes - RFC
Date: Fri, 24 Sep 2021 10:17:42 -0600

On 9/24/21 3:01 AM, b...@uu3.net wrote:
> Oh yeah, it would be very funny if this really happens (a new protocol).
> I'm not happy with IPv6, and it seems many others aren't either.

Is your dissatisfaction with the IPv6 protocol itself or is your dissatisfaction
with the deployment / adoption of the IPv6 protocol?

I think that it's a very critical distinction.  Much like DoH as a protocol vs
how some companies have chosen to utilize it.  Similar to IBM's computers vs
what they were used for in the 1940's.

> This is a short list of how my ideal IPv6 protocol would look:
> - 64-bit address space
>    more is not always better
> - loopback 0:0:0:1/48
> - soft LL 0:0:1-:0/32 (Link Local)
> - RFC1918 address space 0:1-:0:0/16
> - keep ARP; ND wasn't a great idea after all?
> - NAT support (because it's everywhere these days)
> - IPv6 -> IPv4 interop (one way)
>    we can put customers on IPv6, while keeping services dual-stack
> - correct DHCP support (SLAAC wasn't a great idea after all?)
>    I think it's already in IPv6, but it was an issue at the beginning

I'm probably showing my ignorance, but I believe that the IPv6 that we have
today effectively does all of those things.  At least functionally, perhaps
with different values.

> If there are some weird requirements from others, put them into a layer up.
> L3 needs to be simple (KISS concept), so it's easy to implement and less
> prone to bugs.

How many of the hurdles to IPv6's deployment have been bugs in layer 3? It seems
to me that most of the problems with IPv6 that I'm aware of are at other layers,
significantly higher in, or on top of, the stack.

> And that IPv6 I would love to see and adopt right away :)

I'm of the opinion that IPv6 has worked quite well dual stack from about
2005-2015.  It's only been in the last 5 or so years that I'm starting to see
more problems with IPv6.  And all of the problems that I'm seeing are companies
making business level decisions, way above layer 7, that negatively impact IPv6.
Reluctance to run an MX on IPv6 for business level decisions is definitely not a
protocol, much less L3, problem.



-- 
Grant. . . .
unix || die



Re: Rack rails on network equipment

2021-09-24 Thread Jay Hennigan

On 9/24/21 09:37, Andrey Khomyakov wrote:

*So ultimately my question to you all is how much do you care about the 
speed of racking and unracking equipment and do you tell your suppliers 
that you care? How much does the time it takes to install or replace a 
switch impact you?*


Very little. I don't even consider it when comparing hardware. It's a 
nice-to-have but not a factor in purchasing.


You mention a 25-minute difference between racking a no-tools rail kit 
and one that requires a screwdriver. At any reasonable hourly rate for 
someone to rack and stack that is a very small percentage of the cost of 
the hardware. If a device that takes half an hour to rack is $50 cheaper 
than one that has the same specs and takes five minutes, you're past 
break-even to go with the cheaper one.
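
A quick sanity check on that break-even arithmetic (the hourly rate below is an
assumption for illustration, not a figure from this thread):

    extra_minutes = 25                    # slow rail kit vs. toolless, per switch
    hourly_rate = 100.0                   # assumed loaded cost of remote hands, $/hr
    print(round(extra_minutes / 60 * hourly_rate, 2))   # 41.67 dollars of extra labor
    print(round(50 / (extra_minutes / 60)))             # 120 -> break-even hourly rate

So at anything under roughly $120/hr for remote hands, the $50-cheaper but
slower-to-rack device still comes out ahead on a one-time install.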


Features, warranty, performance over the lifetime of the hardware are 
far more important to me.


If there were a network application similar to rock band going on tour 
where equipment needed to be racked up, knocked down, and re-racked 
multiple times a week it would definitely be a factor. Not so much in a 
data center where you change a switch out maybe once every five years.


And there's always the case where all of that fancy click-together 
hardware requires square holes and the rack has threaded holes so you've 
got to modify it anyway.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-24 Thread Brandon Butterworth
On Fri Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
> As far as I know, Dell is the only switch vendor doing toolless rails

Having fought for hours trying to get servers with those
rails into some DCs' racks, I'd go with slightly slower but
fits everywhere.

> *So ultimately my question to you all is how much do you care
> about the speed of racking and unracking equipment

I don't care as long as it fits in the rack properly; the time
taken to do that is small compared to the time it'll be there (many
years for us). I use an electric screwdriver if I need to do many. I
care more about what is inside the box than the box itself, since I'll
have to deal with their software for years.

brandon


Re: Rack rails on network equipment

2021-09-24 Thread Grant Taylor via NANOG

On 9/24/21 10:37 AM, Andrey Khomyakov wrote:
So ultimately my question to you all is how much do you care about the 
speed of racking and unracking equipment and do you tell your suppliers 
that you care? How much does the time it takes to install or replace a 
switch impact you?


I was having a conversation with a vendor and was pushing hard on the 
fact that their switches will end up being actually costlier for me long 
term just because my switch replacement time quadruples at least, thus 
requiring me to staff more remote hands. Am I overthinking this and 
artificially limiting myself by excluding vendors who don't ship with 
toolless rails (which is all of them now except Dell)?


My 2¢ opinion / drive by comment while in the break room to get coffee 
and a doughnut is:


Why are you letting -- what I think is -- a relatively small portion of 
the time spent interacting with a device influence the choice of the device?


In the grand scheme of things, where will you spend more time 
interacting with the device; racking & unracking or administering the 
device throughout its life cycle?  I would focus on the larger portion 
of those times.


Sure, automation is getting a lot better.  But I bet that your network 
administrators will spend more than an hour interacting with the device 
over the multiple years that it's in service.  As such, I'd give the 
network administrators more input than the installers racking & 
unracking.  If nothing else, break it down proportionally based on time 
and / or business expense for wages therefor.



Thanks for your time in advance!


The coffee is done brewing and I have a doughnut, so I'll take my leave now.

Have a good day ~> weekend.



--
Grant. . . .
unix || die





Re: IPv6 woes - RFC

2021-09-24 Thread Joe Maimon




Owen DeLong wrote:



On Sep 23, 2021, at 13:26 , Joe Maimon  wrote:


I hope not, both for IPv6's sake and for the network users. We don't know how 
much longer the goal will take; a real possibility is materializing that we will 
never quite reach it, and the potholes along the way are pretty rough.

By “the only way out is through” I meant that the only way we can get back to 
anything resembling mono-stack is, in fact, to complete the transition to IPv6.


The question is how? Waiting for everyone or nearly everyone to dual 
stack, the current strategy, is awful. Like pulling gum off a shoe.





And as the trip winds on, the landscape is changing, not necessarily for the 
better.

The IPv4 landscape will continue to get worse and worse. It cannot possibly get 
better; there just aren’t enough addresses for that.


I was referring to the more general network landscape, the governance 
system, the end of p2p, balkanization, etc. - all trends and shifts that 
become more likely and entrenched the longer IPv6 lags and the scarcer 
IPv4 becomes.





One more "any decade now" and another IPv4 replacement/extension might just 
happen on the scene and catch on, rendering IPv6 the most wasteful global technical 
debacle to date.

If that’s what it takes to move forward with a protocol that has enough 
addresses, then so be it. I’m not attached to IPv6 particularly, but I 
recognize that IPv4 can’t keep up. As such, IPv6 is just the best current 
candidate. If someone offers a better choice, I’m all for it.


Who's to say it would be a proper p2p system? I know you believe 
strongly in that and want it fully restored, at least on the protocol level.



  Unfortunately, the IPv6 resistant forces
are making that hard for everyone else.

Owen

You say that as if it was a surprise, when it should not have been, and you say 
that as if something can be done about it, which we should know by now cannot 
be the primary focus, since it cannot be done in any timely fashion. If at all.

It’s not a surprise, but it is a tragedy.

There are things that can be done about it, but nobody currently wants to do 
them.


So let's make the conversation revolve around what can be done to 
actually advance IPv6, and what we should know by now is that convincing 
or coercing deployment with the current state of affairs does not have 
enough horsepower to get IPv6 very far anytime soon.





It's time to throw mud on the wall and see what sticks. Dual stack and wait is 
an ongoing failure slouching to disaster.

IPv4 is an ongoing failure slouching to disaster, but the IPv6-resistant among 
us remain in denial about that.


Who is this "us"? Anybody even discussing IPv6 in a public forum is well 
ahead of the curve. Unfortunately. All early adopters. Real Early.


At some point, we are going to have to make a choice about how much longer we 
want to keep letting them hold us back. It will not be an easy choice, it will 
not be convenient, and it will not be simple.

The question is how much more pain and how much longer will it take before the 
choice becomes less difficult than the wait?

Owen

Exactly what does this choice look like? Turn off IPv4 when it's fully 
functional? Only the have-nots may make the choice not to deploy IPv4 
sometime in the future, and for them, it's not going to be a real choice.



Joe


Re: Rack rails on network equipment

2021-09-24 Thread Mel Beckman
We don’t care. We rack up switches maybe once or twice a year. It’s just not 
worth the effort to streamline. If we were installing dozens of switches a 
month, maybe. But personally I think it’s crazy to make rackability your 
primary reason for choosing a switch vendor. Do you base your automobile 
purchase decision on how easy it is to replace windshield wipers?

 -mel beckman

> On Sep 24, 2021, at 9:40 AM, Andrey Khomyakov  
> wrote:
> 
> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?


Rack rails on network equipment

2021-09-24 Thread Andrey Khomyakov
Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco
and Juniper switching products out of our data centers (reasons for that
are not quite relevant to the topic). We selected Dell switches in part due
to Dell using "quick rails'' (sometimes known as speed rails or toolless
rails).  This is where both the switch side rail and the rack side rail
just snap in, thus not requiring a screwdriver and hands of the size no
bigger than a hamster paw to hold those stupid proprietary screws (lookin
at your, cisco) to attach those rails.
We went from taking 16hrs to build a row of compute (from just network
equipment racking pov) to maybe 1hr... (we estimated that on average it
took us 30 min to rack a switch from cut open the box with Juniper switches
to 5 min with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for
our Juniper EX4500 switches so the switch can be actually inserted from the
back of the rack (you know, where most of your server ports are...) and not
be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
didn't work at all for us unless we used wider racks, which then, in turn,
reduced floor capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so
it's a bit of a hardware lock-in from that point of view.

*So ultimately my question to you all is how much do you care about the
speed of racking and unracking equipment and do you tell your suppliers
that you care? How much does the time it takes to install or replace a
switch impact you?*

I was having a conversation with a vendor and was pushing hard on the fact
that their switches will end up being actually costlier for me long term
just because my switch replacement time quadruples at least, thus requiring
me to staff more remote hands. Am I overthinking this and artificially
limiting myself by excluding vendors who don't ship with toolless rails
(which is all of them now except Dell)?

Thanks for your time in advance!
--Andrey


Re: IPv6 woes - RFC

2021-09-24 Thread Grant Taylor via NANOG

On 9/24/21 3:01 AM, b...@uu3.net wrote:

Oh yeah, it would be very funny if this really happens (a new protocol).
I'm not happy with IPv6, and it seems many others aren't either.


Is your dissatisfaction with the IPv6 protocol itself or is your 
dissatisfaction with the deployment / adoption of the IPv6 protocol?


I think that it's a very critical distinction.  Much like DoH as a 
protocol vs how some companies have chosen to utilize it.  Similar to 
IBM's computers vs what they were used for in the 1940's.



This is a short list of how my ideal IPv6 protocol would look:
- 64-bit address space
   more is not always better
- loopback 0:0:0:1/48
- soft LL 0:0:1-:0/32 (Link Local)
- RFC1918 address space 0:1-:0:0/16
- keep ARP; ND wasn't a great idea after all?
- NAT support (because it's everywhere these days)
- IPv6 -> IPv4 interop (one way)
   we can put customers on IPv6, while keeping services dual-stack
- correct DHCP support (SLAAC wasn't a great idea after all?)
   I think it's already in IPv6, but it was an issue at the beginning


I'm probably showing my ignorance, but I believe that the IPv6 that we 
have today effectively does all of those things.  At least 
functionally, perhaps with different values.
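
For what it's worth, here is a minimal sketch with Python's standard ipaddress
module showing where those wish-list items live in today's IPv6 (the last two
lines assume the RFC 6052 NAT64 well-known prefix; an actual NAT64/DNS64
translator still has to be deployed to make use of it):

    import ipaddress

    print(ipaddress.ip_address("::1").is_loopback)               # True - loopback
    print(ipaddress.ip_address("fe80::1").is_link_local)         # True - link local (fe80::/10)
    print(ipaddress.ip_address("fd12:3456:789a::1").is_private)  # True - ULA (fc00::/7), the RFC1918 analogue

    # One-way IPv6 -> IPv4 interop: embed an IPv4 address in 64:ff9b::/96
    v4 = ipaddress.ip_address("192.0.2.1")
    print(ipaddress.ip_address("64:ff9b::") + int(v4))           # 64:ff9b::c000:201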



If there are some weird requirements from others, put them into a layer up.
L3 needs to be simple (KISS concept), so it's easy to implement and less
prone to bugs.


How many of the hurdles to IPv6's deployment have been bugs in layer 3? 
It seems to me that most of the problems with IPv6 that I'm aware of are 
at other layers, significantly higher in, or on top of, the stack.



And that IPv6 I would love to see and adopt right away :)


I'm of the opinion that IPv6 has worked quite well dual stack from about 
2005-2015.  It's only been in the last 5 or so years that I'm starting 
to see more problems with IPv6.  And all of the problems that I'm seeing 
are companies making business level decisions, way above layer 7, that 
negatively impact IPv6.  Reluctance to run an MX on IPv6 for business 
level decisions is definitely not a protocol, much less L3, problem.




--
Grant. . . .
unix || die





Re: AS6461 issues in Montreal

2021-09-24 Thread Oliver O'Boyle
We have an office in Montreal that is showing signs of intermittent
routing issues, so I can confirm there's an issue somewhere.

On Fri, 24 Sept 2021 at 11:25, Jason Canady  wrote:
>
> We're in Indianapolis / Chicago and seeing 854,787 routes.
>
> On 9/24/21 11:17 AM, Eric Dugas via NANOG wrote:
> > Hello,
> >
> > Anyone else seeing a large withdrawal of routes on their Zayo AS6461
> > sessions? We've lost about 400k routes at around 10:40 EDT.
> >
> > Nothing in their Network Status so far
> >
> > Eric



-- 
:o@>


Re: AS6461 issues in Montreal

2021-09-24 Thread Jason Canady

We're in Indianapolis / Chicago and seeing 854,787 routes.

On 9/24/21 11:17 AM, Eric Dugas via NANOG wrote:

Hello,

Anyone else seeing a large withdrawal of routes on their Zayo AS6461 
sessions? We've lost about 400k routes at around 10:40 EDT.


Nothing in their Network Status so far

Eric


AS6461 issues in Montreal

2021-09-24 Thread Eric Dugas via NANOG
Hello,

Anyone else seeing a large withdrawal of routes on their Zayo AS6461
sessions? We've lost about 400k routes at around 10:40 EDT.

Nothing in their Network Status so far

Eric


Re: Upcycling devices like DOCSIS 3.0 MODEMs

2021-09-24 Thread Blake Hudson
While most cable networks consist primarily of DOCSIS 3.0 devices, 
there's an appreciable difference between an older 8-channel-capable 
modem with 802.11n and a 16-32-channel-capable modem with 802.11ac. Most 
ISPs I've worked with also like to standardize on a single vendor or a 
few models for ease of support. The modems certainly have value, and I 
agree with Michael that the second-hand market (for those that bring 
their own modem) might be the most appropriate place. I've also seen 
places like Goodwill take computer equipment like cable/DSL modems, so 
that may be an option for you that keeps them out of a landfill.
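
To put rough numbers on the 8-channel vs. 16-32-channel difference (rule-of-thumb
figures, assuming roughly 38 Mb/s of usable throughput per 6 MHz 256-QAM
downstream channel; actual rates vary with plant and overhead):

    per_channel_mbps = 38
    for channels in (8, 16, 24, 32):
        print(channels, "bonded channels ~", channels * per_channel_mbps, "Mb/s downstream")
    # 8 ~ 304, 16 ~ 608, 24 ~ 912, 32 ~ 1216 Mb/s downstream ceiling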


On 9/23/2021 5:10 PM, mich...@spears.io wrote:


Could sell them on FB Marketplace or Craigslist.

*From:* NANOG  *On Behalf 
Of *Andrew Latham

*Sent:* Thursday, September 23, 2021 5:22 PM
*To:* NANOG 
*Subject:* Upcycling devices like DOCSIS 3.0 MODEMs

I found some new in box MODEMs in storage and they are 3.0 DOCSIS. I 
was wondering how I could donate them to an ISP that still uses DOCSIS 
3.0. I think several ISPs have switched to 3.1


Should I use the vendor recycling method and hope it stays out of a 
landfill?


--

- Andrew "lathama" Latham -





Re: Fiber Network Equipment Commercial Norms

2021-09-24 Thread Lady Benjamin Cannon of Glencoe, ASCE
Honestly, good call, and we’re looking at raising funds to do exactly that - 
however, some of these buildings have values near a billion dollars each, and 
there is more money in commercial real estate than in telecom.

In my experience these things tend to crop up when ownership of the building 
is a lot newer than the telco’s presence.   At $dayjob I’ve seen it 
personally - “MPOE access denied, you don’t have an agreement with the RMC” - 
then we produce an agreement (with us) that pre-dates the RMC’s agreement.  We 
can find the docs, but not every telco has every document from generations ago 
in some cases.

I’ve found that in almost every business there is much greater efficiency 
presumed than realized.   Since I was a child I’ve felt that automation could 
fix this.

—L.B.

Ms. Lady Benjamin PD Cannon of Glencoe, ASCE
6x7 Networks & 6x7 Telecom, LLC 
CEO 
l...@6by7.net 
"The only fully end-to-end encrypted global telecommunications company in the 
world.”
FCC License KJ6FJJ



> On Sep 22, 2021, at 8:49 PM, Seth Mattinen  wrote:
> 
> On 9/22/21 6:12 PM, Lady Benjamin Cannon of Glencoe, ASCE wrote:
>> If someone were to make us remove a redundant DWDM node, we’d charge them 
>> list price to ever consider putting it back*, plus a deposit, plus our costs 
>> for the removal in the first place.  Bad move.  Enjoy the $8million, it 
>> could cost more than that to undo this mistake.
>> *you’d actually never ever get it back in the form you’d want. We’ll never 
>> trust the site again and won’t place critical infrastructure there, we’d 
>> only build back what’s needed to serve the use.
> 
> 
> 
> Buy the building then. Owners change and some are more friendly than others. 
> Why would someone ever place critical infrastructure at a site without a 
> solid agreement that prohibits removal, or at least making them whole 
> financially so they don't have to take it out on the next person that comes 
> along? I'd hate to be the poor customer that gets treated as lesser class 
> because a previous owner caused hurt feelings.



Re: IPv6 woes - RFC

2021-09-24 Thread borg
Oh yeah, it would be very funny if this really happens (a new protocol).
I'm not happy with IPv6, and it seems many others aren't either.

This is a short list of how my ideal IPv6 protocol would look:
- 64-bit address space
  more is not always better
- loopback 0:0:0:1/48
- soft LL 0:0:1-:0/32 (Link Local)
- RFC1918 address space 0:1-:0:0/16
- keep ARP; ND wasn't a great idea after all?
- NAT support (because it's everywhere these days)
- IPv6 -> IPv4 interop (one way)
  we can put customers on IPv6, while keeping services dual-stack
- correct DHCP support (SLAAC wasn't a great idea after all?)
  I think it's already in IPv6, but it was an issue at the beginning

If there are some weird requirements from others, put them into a layer up.
L3 needs to be simple (KISS concept), so it's easy to implement and less
prone to bugs.

And that IPv6 I would love to see and adopt right away :)


-- Original message --

From: Joe Maimon 
To: Owen DeLong , Bjørn Mork 
Cc: nanog@nanog.org
Subject: Re: IPv6 woes - RFC
Date: Thu, 23 Sep 2021 16:26:17 -0400



Owen DeLong via NANOG wrote:
> > There are real issues with dual-stack, as this thread started out with.
> > I don't think there is a need neither to invent IPv6 problems, nor to
> > promote IPv6 advantages.  What we need is a way out of dual-stack-hell.
> I don't disagree, but a reversion to IPv4-only certainly won't do it.

For everyone who does have enough IPv4 addresses, it does. This is the problem
in a nutshell. If that starts trending, IPv6 is done.

> I think the only way out is through.

I hope not, both for IPv6's sake and for the network users. We don't know how much
longer the goal will take; a real possibility is materializing that we will
never quite reach it, and the potholes along the way are pretty rough.

And as the trip winds on, the landscape is changing, not necessarily for the
better.

One more "any decade now" and another IPv4 replacement/extension might just
happen on the scene and catch on, rendering IPv6 the most wasteful global
technical debacle to date.


>   Unfortunately, the IPv6 resistant forces
> are making that hard for everyone else.
> 
> Owen

You say that as if it was a surprise, when it should not have been, and you say
that as if something can be done about it, which we should know by now cannot be
the primary focus, since it cannot be done in any timely fashion. If at all.

It's time to throw mud on the wall and see what sticks. Dual stack and wait is an
ongoing failure slouching to disaster.

Joe