Re: Rack rails on network equipment

2021-09-27 Thread Doug McIntyre
On Mon, Sep 27, 2021 at 03:38:15PM -0400, William Allen Simpson wrote:
> Anyway, wasn't the Open Compute Project supposed to fix all this?
> Why not just require OCP in all RFPs?

https://xkcd.com/927/



Re: Rack rails on network equipment

2021-09-27 Thread Mel Beckman
I think the primary issue for front- vs rear-mounted switches is cooling. As 
long as you use switches that can pull cooling air from either the front or the 
back, it’s feasible to mount the TOR switches in the back.

For example, I think these are parts I used to order for Cisco Catalyst 
3850-48XS switches:

FAN-T3-R= Fan module front-to-back airflow for 48XS

FAN-T3-F= Fan module back-to-front airflow for 48XS

But if the switch is hardwired to pull cooling air from the front, it’s going 
to be taking in hot air, not cold air, which could lead to overheating.

As far as rail mounting time goes, it’s just not enough of a time factor to 
outweigh more important factors such as switch feature set, management 
architecture, or performance. Dell is pretty much at the back of the line for 
all of those factors.

 -mel

On Sep 27, 2021, at 2:32 PM, Andrey Khomyakov <khomyakov.and...@gmail.com> wrote:

Folks,

I kind of started to doubt my perception (we don't officially calculate it) of 
our failure rates until Mel provided this:
"That’s about the right failure rate for a population of 1000 switches. 
Enterprise switches typically have an MTBF of 700,000 hours or so, and 1000 
switches operating 8760 hours (24x7) a year would be 8,760,000 hours. Divided 
by 12 failures (one a month), yields an MTBF of 730,000 hours." At least I'm 
not crazy and our failure rate is not abnormal.

I really don't buy the claim of no failures in 15 years of operation, or whatever 
crazy long period was cited that is longer than a standard depreciation 
period in an average enterprise. I operated small colo cages with a handful of 
Cisco Nexus switches - something would fail once a year at least. I operated 
small enterprise data centers with 5-10 rows of racks - something most 
definitely fails at least once a year. Fun fact: there was a batch of switches 
with the Intel Atom clocking bug. Remember that one a couple of years ago? The 
whole industry was swapping out switches like mad in a span of a year or two... 
While I admit that's an abnormal event, the quick rails definitely made our 
efforts a lot less painful.

It's also interesting that there were several folks dismissing the need for 
toolless rails because switching to those will not yield much savings in time 
compared to recabling the switch. Somehow it is completely ignored that 
recabling has to happen regardless of the rail kit kind, i.e. it's not a data 
point in and of itself. And since we brought up the time it takes to recable a 
switch at replacement point, how is tacking on more time to deal with the rail 
kit a good thing? You have a switch hard down and you are running around 
looking for a screwdriver and a bag of screws. Do we truly take that as a 
satisfactory way to operate? Screws run out, the previous tech misplaced the 
screwdriver, the screw was too tight and you stripped it while undoing it, 
etc, etc...

Finally, another interesting point was brought up about having to rack the 
switches in the back of the rack vs the front. In an average rack we have about 
20-25 servers, each consuming at least 3 ports (two data ports for redundancy 
and one for idrac/ilo) and sometimes even more than that. Racking the switch 
with ports facing the cold aisle seems to then result in having to route 60 to 
70 patches from the back of the rack to the front. All of a sudden the cables 
need to be longer, heavier, harder to manage. Why would I want to face my 
switch ports into the cold aisle when all my connections are in the hot aisle? 
What am I missing?

I went back to a document my DC engineering team produced when we asked them to 
eval Mellanox switches from their point of view and they report that it takes 1 
person 1 minute to install a Dell switch from cutting open the box to applying 
power. It took them 2 people and 15 min (hence my 30 min statement) to install 
a Mellanox switch on traditional rails (it was a full width switch, not the 
half-RU one). Furthermore, they had to install the rails in reverse and load 
the switch from the front of the rack, because with 0-U PDUs in place the 
racking "ears" prevent the switch from going in or out of the rack from the 
back.

The theme of this whole thread kind of makes me sad, because summarizing it in 
my head comes off as "yeah the current rail kit sucks, but not enough for us to 
even ask for improvements in that area." It is really odd to hear that most 
folks are not even asking for improvements to an admittedly crappy solution. 
I'm not suggesting making the toolless rail kit a hard requirement. I'm asking 
why we, as an industry, don't even ask for that improvement from our vendors. 
If we never ask, we'll never get.

--Andrey


On Mon, Sep 27, 2021 at 10:57 AM Mel Beckman <m...@beckman.org> wrote:
That’s about the right failure rate for a population of 1000 switches. 
Enterprise switches typically have an MTBF of 700,000 hours or so, and 1000 
switches operating 8760 hours (24x7) a year 

Re: Rack rails on network equipment

2021-09-27 Thread Andrey Khomyakov
Folks,

I kind of started to doubt my perception (we don't officially calculate
it) of our failure rates until Mel provided this:
"That’s about the right failure rate for a population of 1000 switches.
Enterprise switches typically have an MTBF of 700,000 hours or so, and 1000
switches operating 8760 hours (24x7) a year would be 8,760,000 hours.
Divided by 12 failures (one a month), yields an MTBF of 730,000 hours." At
least I'm not crazy and our failure rate is not abnormal.

I really don't buy the claim of no failures in 15 years of operation, or whatever
crazy long period was cited that is longer than a standard
depreciation period in an average enterprise. I operated small colo cages
with a handful of Cisco Nexus switches - something would fail once a year
at least. I operated small enterprise data centers with 5-10 rows of racks
- something most definitely fails at least once a year. Fun fact: there was
a batch of switches with the Intel Atom clocking bug. Remember that one a
couple of years ago? The whole industry was swapping out switches like mad
in a span of a year or two... While I admit that's an abnormal event, the
quick rails definitely made our efforts a lot less painful.

It's also interesting that there were several folks dismissing the need for
toolless rails because switching to those will not yield much savings in
time compared to recabling the switch. Somehow it is completely ignored
that recabling has to happen regardless of the rail kit kind, i.e. it's not
a data point in and of itself. And since we brought up the time it takes to
recable a switch at replacement point, how is tacking on more time to deal
with the rail kit a good thing? You have a switch hard down and you are
running around looking for a screwdriver and a bag of screws. Do we truly take
that as a satisfactory way to operate? Screws run out, the previous tech
misplaced the screwdriver, the screw was too tight and you stripped it
while undoing it, etc, etc...

Finally, another interesting point was brought up about having to rack the
switches in the back of the rack vs the front. In an average rack we have
about 20-25 servers, each consuming at least 3 ports (two data ports for
redundancy and one for idrac/ilo) and sometimes even more than that.
Racking the switch with ports facing the cold aisle seems to then result in
having to route 60 to 70 patches from the back of the rack to the front.
All of a sudden the cables need to be longer, heavier, harder to manage.
Why would I want to face my switch ports into the cold aisle when all my
connections are in the hot aisle? What am I missing?
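A rough sanity check on those numbers, using the per-rack estimates above:

    ports_per_server = 3          # two data ports plus one iDRAC/iLO, often more
    for servers in (20, 25):      # typical per-rack server count from above
        patches = servers * ports_per_server
        print(servers, "servers ->", patches, "patches crossing the rack")
    # 20 servers -> 60 patches crossing the rack
    # 25 servers -> 75 patches crossing the rack

Call it 60-plus cables per rack that would have to be routed from the hot aisle
around to the cold aisle.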

I went back to a document my DC engineering team produced when we asked
them to eval Mellanox switches from their point of view and they report
that it takes 1 person 1 minute to install a Dell switch from cutting open
the box to applying power. It took them 2 people and 15 min (hence my 30
min statement) to install a Mellanox switch on traditional rails (it was a
full width switch, not the half-RU one). Furthermore, they had to install
the rails in reverse and load the switch from the front of the rack,
because with 0-U PDUs in place the racking "ears" prevent the switch from
going in or out of the rack from the back.

The theme of this whole thread kind of makes me sad, because summarizing it
in my head comes off as "yeah the current rail kit sucks, but not enough
for us to even ask for improvements in that area." It is really odd to hear
that most folks are not even asking for improvements to an admittedly
crappy solution. I'm not suggesting making the toolless rail kit a hard
requirement. I'm asking why we, as an industry, don't even ask for that
improvement from our vendors. If we never ask, we'll never get.

--Andrey


On Mon, Sep 27, 2021 at 10:57 AM Mel Beckman  wrote:

> That’s about the right failure rate for a population of 1000 switches.
> Enterprise switches typically have an MTBF of 700,000 hours or so, and 1000
> switches operating 8760 hours (24x7) a year would be 8,760,000 hours.
> Divided by 12 failures (one a month), yields an MTBF of 730,000 hours.
>
>  -mel
>
> > On Sep 27, 2021, at 10:32 AM, Doug McIntyre  wrote:
> >
> > On Sat, Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
> >> We operate over 1000 switches in our data centers, and hardware failures
> >> that require a switch swap are common enough where the speed of swap
> starts
> >> to matter to some extent. We probably swap a switch or two a month.
> > ...
> >
> > This level of failure surprises me. While I can't say I have 1000
> > switches, I do have hundreds of switches, and I can think of a failure
> > of only one or two in at least 15 years of operation. They tend to be
> > pretty reliable, and have to be swapped out for EOL more than anything.
> >
>


Re: Rack rails on network equipment

2021-09-27 Thread William Allen Simpson

On 9/25/21 7:52 PM, Joe Greco wrote:

On Sat, Sep 25, 2021 at 04:23:38PM -0700, Jay Hennigan wrote:

On 9/25/21 16:14, George Herbert wrote:

(Crying, thinking about racks and racks and racks of AT&T 56k modems
strapped to shelves above PM-2E-30s…)


And all of their wall-warts [...]


You were doing it wrong, then.  :-)



Oh, you young rascals!  Started with Racal-Vadic triple modems
connected to custom multiport serial gear (still have the wirewrap tool),
upgraded to Telebit NetBlazers, then Livingston PortMasters.

Built and rebuilt many Points Of Presence (POPs) back in the day.  Two
days per rack wasn't unusual, labeling all those wires.

The real problem with racks is/was the changes in holes.  My personal
preference now is all square holes, because you can always replace the
plugs after the threads have stripped.  Stripped threads were at one
time the bane of my existence.

Anyway, wasn't the Open Compute Project supposed to fix all this?
Why not just require OCP in all RFPs?

Also, hot aisle cold aisle should have been replaced by now with
rack top hats.  Seem to remember a Colorado study that showed 15%
power reduction by moving the air return over a suspended ceiling.


Re: Rack rails on network equipment

2021-09-27 Thread Mel Beckman
That’s about the right failure rate for a population of 1000 switches. 
Enterprise switches typically have an MTBF of 700,000 hours or so, and 1000 
switches operating 8760 hours (24x7) a year would be 8,760,000 hours. Divided 
by 12 failures (one a month), yields an MTBF of 730,000 hours. 
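For anyone who wants to check the arithmetic, a quick sketch (the 700,000-hour
figure is a typical vendor spec, not a measurement):

    population = 1000            # switches in service
    hours_per_year = 8760        # 24x7 operation
    failures_per_year = 12       # roughly one swap a month

    device_hours = population * hours_per_year        # 8,760,000 device-hours per year
    observed_mtbf = device_hours / failures_per_year  # ~730,000 hours
    print(f"device-hours per year: {device_hours:,}")
    print(f"observed MTBF: {observed_mtbf:,.0f} hours")

That observed figure lands right on top of the typical published MTBF, which is
the point.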

 -mel 

> On Sep 27, 2021, at 10:32 AM, Doug McIntyre  wrote:
> 
> On Sat, Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
>> We operate over 1000 switches in our data centers, and hardware failures
>> that require a switch swap are common enough where the speed of swap starts
>> to matter to some extent. We probably swap a switch or two a month.
> ...
> 
> This level of failure surprises me. While I can't say I have 1000
> switches, I do have hundreds of switches, and I can think of a failure
> of only one or two in at least 15 years of operation. They tend to be
> pretty reliable, and have to be swapped out for EOL more than anything.
> 
> 
> 


Re: Rack rails on network equipment

2021-09-27 Thread Doug McIntyre
On Sat, Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
> We operate over 1000 switches in our data centers, and hardware failures
> that require a switch swap are common enough where the speed of swap starts
> to matter to some extent. We probably swap a switch or two a month.
...

This level of failure surprises me. While I can't say I have 1000
switches, I do have hundreds of switches, and I can think of a failure
of only one or two in at least 15 years of operation. They tend to be
pretty reliable, and have to be swapped out for EOL more than anything.





Re: Rack rails on network equipment

2021-09-27 Thread Tore Anderson
* Andrey Khomyakov

> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so it's 
> a bit of a hardware lock-in from that point of view. 

Amen.

I suspect that Dell is pretty much alone in realising that rack mount
kits that require insertion/removal from the hot aisle are pure idiocy,
since the rear of the rack tends to be crowded with cables, PDUs, and
so forth.

This might be due to Dell starting out as a server manufacturer. *All*
rack-mount servers on the market are inserted into (and removed from)
the cold aisle of the rack, after all. The reasons that make this the
only sensible thing for servers apply even more so for data centre
switches.

I got so frustrated with this after having to remove a couple of
decommissioned switches that I wrote a post about it a few years back:

https://www.redpill-linpro.com/techblog/2019/08/06/rack-switch-removal.html

Nowadays I employ various strategies to facilitate cold aisle
installation/removal, such as: reversing the rails if possible,
attaching only a single rack ear (for four-post mounted equipment) or
installing rivet nuts directly in the rack ears (for shallow two-post
mounted equipment).

(Another lesson the data centre switch manufacturers could learn from
the server manufacturers is to always include a BMC. I would *much
rather* spend my serial console infrastructure budget on switches with
built-in BMCs. That way I would get remote power control, IPMI Serial-
Over-LAN and so on – all through a *single* Ethernet management cable.)
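To make that concrete: with a standard BMC in the switch, the day-to-day pieces
reduce to stock IPMI tooling. A minimal sketch, assuming ipmitool is installed
and using placeholder address and credentials (this is not any particular
vendor's interface):

    import subprocess

    # Placeholder BMC address and credentials; adjust for your environment.
    BMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "secret"]

    # Remote power control over the single management connection.
    subprocess.run(BMC + ["chassis", "power", "status"], check=True)
    subprocess.run(BMC + ["chassis", "power", "cycle"], check=True)

    # Serial-over-LAN console, with no separate console server needed.
    subprocess.run(BMC + ["sol", "activate"], check=True)

All of that rides on the one Ethernet management cable, which is the appeal.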

Tore




Re: Rack rails on network equipment

2021-09-26 Thread Lady Benjamin Cannon
I can install an entire 384lb 21U core router in 30 minutes.

Most of that time is removing every module to lighten the chassis, then 
re-installing every module. 

We can build an entire POP in a day with a crew of 3, so I’m not sure there’s 
worthwhile savings to be had here.   Also consider that network engineers 
babysitting it later cost more than the installers (usually) who don’t have to 
be terribly sophisticated at, say, BGP.

Those rapid-rails are indeed nice for servers and make quick work of putting 
~30+ 1U pizza boxes in a rack quickly.  We use them on 2U servers we like a 
lot.   

And these days everyone is just buying merchant silicon and throwing a UI 
around it, so there’s less of a reason to pick any particular vendor, however 
there still is differentiation that can dramatically increase the TCO.

I don’t think they’re needed for switches, and for onesie-twosie, they’ll 
probably slow things down compared with basic (good, bad ones exist) rack rails.

I write all of this from the perspective of a network engineer, businesswoman, 
and telecom carrier - not necessarily that of a hyperscale cloud compute 
provider, although we are becoming one of those too it seems, so this 
perspective may shift for that unique use-case.

-LB


> On Sep 24, 2021, at 11:27 AM, Mauricio Rodriguez via NANOG  
> wrote:
> 
> Andrey, hi.
> 
> The speed rails are nice, and are effective in optimizing the time it takes 
> to rack equipment.  It's pretty much par for the course on servers today 
> (thank goodness!), and not so much on network equipment.  I suppose the 
> reasons being what others have mentioned - longevity of service life, 
> frequency at which network gear is installed, etc.  As well, a typical server 
> to switch ratio, depending on number of switch ports and fault-tolerance 
> configurations, could be something like 38:1 in dense 1U server install.  So 
> taking a few more minutes on the switch installation isn't so impactful - 
> taking a few more minutes on each server installation can really become a 
> problem.
> 
> A 30-minute time to install a regular 1U ToR switch seems a bit excessive.  
> Maybe the very first time a tech installs any specific model switch with a 
> unique rail configuration.  After that one, it should be around 10 minutes 
> for most situations.  I am assuming some level of teamwork where there is an 
> installer at the front of the cabinet and another at the rear, and they work 
> in tandem to install cage nuts, install front/rear rails (depending on 
> switch), position the equipment, and affix to the cabinet.  I can see the 30 
> minutes if you have one person, it's a larger/heavier device (like the 
> EX4500) and the installer is forced to do some kind of crazy balancing act 
> with the switch (not recommended), or has to use a server lift to install it.
> 
> Those speed rails as well are a bit of a challenge to install if it's not a 
> team effort. So, I'm wondering if in addition to using speed rails, you may 
> have changed from a one-tech installation process to a two-tech team 
> installation process?
> 
> Best Regards,
> Mauricio Rodriguez
> Founder / Owner
> Fletnet Network Engineering (www.fletnet.com )
> Follow us on LinkedIn 
> 
> mauricio.rodrig...@fletnet.com 
> Office: +1 786-309-1082
> Direct: +1 786-309-5493
> 
> 
> 
> 
> On Fri, Sep 24, 2021 at 12:41 PM Andrey Khomyakov wrote:
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
> Juniper switching products out of our data centers (reasons for that are not 
> quite relevant to the topic). We selected Dell switches in part due to Dell 
> using "quick rails'' (sometimes known as speed rails or toolless rails).  
> This is where both the switch side rail and the rack side rail just snap in, 
> thus not requiring a screwdriver and hands of the size no bigger than a 
> hamster paw to hold those stupid proprietary screws (lookin at you, cisco) 
> to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network 
> equipment racking pov) to maybe 1hr... (we estimated that on average it took 
> us 30 min to rack a switch from cut open the box with Juniper switches to 5 
> min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so it's 
> a bit of a 

Re: Rack rails on network equipment

2021-09-26 Thread Alan Buxey
> We operate over 1000 switches in our data centers, and hardware failures that 
> require a switch swap are common enough where the speed of swap starts to 
> matter to some extent. We probably swap a switch or two a month.

having operated a network of over 2000 switches, where we would see
maybe one die a year (and let me tell you, some of those switches were
not in nice places... no data centre air-handled clean rack spaces etc),
this failure rate is very high and would certainly be a factor in vendor choice.
for initial install, there are quicker ways of dealing with cage nut
installs... but when a switch dies in service, the mounting isn't the
speed factor, it's the cabling (and, as others have said, with the startup
time of some modern switches, you can patch every cable back in before
the thing has even booted these days).

alan


Re: Rack rails on network equipment

2021-09-25 Thread Joe Greco
On Sat, Sep 25, 2021 at 04:23:38PM -0700, Jay Hennigan wrote:
> On 9/25/21 16:14, George Herbert wrote:
> >(Crying, thinking about racks and racks and racks of AT&T 56k modems 
> >strapped to shelves above PM-2E-30s…)
> 
> And all of their wall-warts [...]

You were doing it wrong, then.  :-)

ExecPC had this down to a science, and had used a large transformer
to power a busbar along the back of two 60-slot literature organizers,
with 4x PM2E30's on top, a modem in each slot, and they snipped off
the wall warts, using the supplied cable for power.  A vertical board
was added over the top so that the rears of the PM2s were exposed, and
the board provided a mounting point for an ethernet hub and three Amp
RJ21 breakouts.  This gave you a modem "pod" that held 120 USR Courier
56K modems, neatly cabled and easily serviced.  The only thing coming
to each of these racks was 3x AMP RJ21, 1x power, and 1x ethernet.

They had ten of these handling their 1200 (one thousand two hundred!)
modems before it got unmanageable, and part of that was that US Robotics
offered a deal that allowed them to be a testing site for Total Control.

At which point they promptly had a guy solder all the wall warts back on
to the power leads and proceeded to sell them at a good percentage of
original price to new Internet users.

The other problem was that they were getting near two full DS3's worth
of analog lines being delivered this way, and it was taking up a TON of
space.  A full "pod" could be reduced to 3x USR TC's, so two whole pods
could be replaced with a single rack of gear.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: Rack rails on network equipment

2021-09-25 Thread Jay Hennigan

On 9/25/21 16:14, George Herbert wrote:

(Crying, thinking about racks and racks and racks of AT&T 56k modems strapped 
to shelves above PM-2E-30s…)


And all of their wall-warts and serial cables


The early 90s were a dangerous place, man.


Yes, but the good news is that shortly thereafter you got to replace 
all of that gear with Ascend TNT space heaters which did double 
duty as modem banks.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-25 Thread George Herbert
(Crying, thinking about racks and racks and racks of AT&T 56k modems strapped 
to shelves above PM-2E-30s…)

The early 90s were a dangerous place, man.

-George 

Sent from my iPhone

> On Sep 24, 2021, at 8:05 PM, Wayne Bouchard  wrote:
> 
> Didn't require any additional time at all when equipment wasn't bulky
> enough to need rails in the first place
> 
> 
> I've never been happy about that change.
> 
> 
>> On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
>> Hi folks,
>> Happy Friday!
>> 
>> Would you, please, share your thoughts on the following matter?
>> 
>> Back some 5 years ago we pulled the trigger and started phasing out Cisco
>> and Juniper switching products out of our data centers (reasons for that
>> are not quite relevant to the topic). We selected Dell switches in part due
>> to Dell using "quick rails'' (sometimes known as speed rails or toolless
>> rails).  This is where both the switch side rail and the rack side rail
>> just snap in, thus not requiring a screwdriver and hands of the size no
>> bigger than a hamster paw to hold those stupid proprietary screws (lookin
>> at you, cisco) to attach those rails.
>> We went from taking 16hrs to build a row of compute (from just network
>> equipment racking pov) to maybe 1hr... (we estimated that on average it
>> took us 30 min to rack a switch from cut open the box with Juniper switches
>> to 5 min with Dell switches)
>> Interesting tidbit is that we actually used to manufacture custom rails for
>> our Juniper EX4500 switches so the switch can be actually inserted from the
>> back of the rack (you know, where most of your server ports are...) and not
>> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
>> didn't work at all for us unless we used wider racks, which then, in turn,
>> reduced floor capacity.
>> 
>> As far as I know, Dell is the only switch vendor doing toolless rails so
>> it's a bit of a hardware lock-in from that point of view.
>> 
>> *So ultimately my question to you all is how much do you care about the
>> speed of racking and unracking equipment and do you tell your suppliers
>> that you care? How much does the time it takes to install or replace a
>> switch impact you?*
>> 
>> I was having a conversation with a vendor and was pushing hard on the fact
>> that their switches will end up being actually costlier for me long term
>> just because my switch replacement time quadruples at least, thus requiring
>> me to staff more remote hands. Am I overthinking this and artificially
>> limiting myself by excluding vendors who don't ship with toolless rails
>> (which is all of them now except Dell)?
>> 
>> Thanks for your time in advance!
>> --Andrey
> 
> ---
> Wayne Bouchard
> w...@typo.org
> Network Dude
> http://www.typo.org/~web/


Re: Rack rails on network equipment

2021-09-25 Thread Shawn L via NANOG
What about things like the Cisco 4500 switch series that are almost as long as a 
1U server, yet only have mounts for a relay-type rack?

As far as boot times go, try an ASR920. Wait 15 minutes and decide if it’s time to 
power cycle again or wait 5 more minutes.

Sent from my iPhone

> On Sep 25, 2021, at 5:22 PM, Michael Thomas  wrote:
> 
> 
>> On 9/25/21 2:08 PM, Jay Hennigan wrote:
>>> On 9/25/21 13:55, Baldur Norddahl wrote:
>>> 
>>> My personal itch is how new equipment seems to have even worse boot time 
>>> than previous generations. I am currently installing juniper acx710 and 
>>> while they are nice, they also make me wait 15 minutes to boot. This is a 
>>> tremendous waste of time during installation. I can not leave the site 
>>> without verification and typically I also have some tasks to do after boot.
>>> 
>>> Besides if you have a crash or power interruption, the customers are not 
>>> happy to wait additionally 15 minutes to get online again.
>> 
>> Switches in particular have a lot of ASICs that need to be loaded on boot. 
>> This takes time and they're really not optimized for speed on a process that 
>> occurs once.
> 
> It doesn't seem like it would take too many reboots to really mess with your 
> reliability numbers for uptime. And what on earth are the developers doing 
> with that kind of debug cycle time?
> 
> Mike
> 


Re: Rack rails on network equipment

2021-09-25 Thread Michael Thomas



On 9/25/21 2:08 PM, Jay Hennigan wrote:

On 9/25/21 13:55, Baldur Norddahl wrote:

My personal itch is how new equipment seems to have even worse boot 
time than previous generations. I am currently installing juniper 
acx710 and while they are nice, they also make me wait 15 minutes to 
boot. This is a tremendous waste of time during installation. I can 
not leave the site without verification and typically I also have 
some tasks to do after boot.


Besides if you have a crash or power interruption, the customers are 
not happy to wait additionally 15 minutes to get online again.


Switches in particular have a lot of ASICs that need to be loaded on 
boot. This takes time and they're really not optimized for speed on a 
process that occurs once.


It doesn't seem like it would take too many reboots to really mess with 
your reliability numbers for uptime. And what on earth are the 
developers doing with that kind of debug cycle time?


Mike



Re: Rack rails on network equipment

2021-09-25 Thread Brandon Butterworth
On Sat Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
> We are looking at Nvidia (former Mellanox) switches

If I was going to rule any out based on rails it'd be their half width
model. Craziest rails I've seen. It's actually a frame that sits inside
the rack rails so you need quite a bit of space above to angle it to
fit between the rails.

Once you have stuff above and below, the frame isn't coming out (at
least the switches just slide into it).

brandon



Re: Rack rails on network equipment

2021-09-25 Thread Owen DeLong via NANOG


> On Sep 25, 2021, at 12:48 , Andrey Khomyakov  
> wrote:

> Let me just say from the get go that no one is making toolless rails a 
> priority to the point of shutting vendors out of the evaluation process. I am 
> not quite sure why that assumption was made by at least a few folks. With 
> that said, when all things being equal or fairly equal, which they rarely 
> are, that's when the rails come in as a factor.
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
> Juniper switching products out of our data centers (reasons for that are not 
> quite relevant to the topic). We selected Dell switches in part due to Dell 
> using "quick rails'' (sometimes known as speed rails or toolless rails).  
> This is where both the switch side rail and the rack side rail just snap in, 
> thus not requiring a screwdriver and hands of the size no bigger than a 
> hamster paw to hold those stupid proprietary screws (lookin at you, cisco) 
> to attach those rails.

Perhaps from this paragraph?

Owen




Re: Rack rails on network equipment

2021-09-25 Thread Jay Hennigan

On 9/25/21 13:55, Baldur Norddahl wrote:

My personal itch is how new equipment seems to have even worse boot time 
than previous generations. I am currently installing juniper acx710 and 
while they are nice, they also make me wait 15 minutes to boot. This is 
a tremendous waste of time during installation. I can not leave the site 
without verification and typically I also have some tasks to do after boot.


Besides if you have a crash or power interruption, the customers are not 
happy to wait additionally 15 minutes to get online again.


Switches in particular have a lot of ASICs that need to be loaded on 
boot. This takes time and they're really not optimized for speed on a 
process that occurs once.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-25 Thread Baldur Norddahl
The "niceness" of equipment does factor in but it might be invisible. For
example if you like junipers cli environment, you will look at their stuff
first even if you do not have it explicitly in your requirement list.

Better rack rails will make slightly more people prefer your gear, although
it might be hard to measure exactly how much. Which is probably the
problem.

Our problem with racking switches is how vendors deliver NO rack rails and
expect us to have them hanging on just the front posts. I have a lot of
switches on rack shelves for that reason. Does not look very professional,
but neither do rack posts bent out of shape.

My personal itch is how new equipment seems to have even worse boot time
than previous generations. I am currently installing Juniper ACX710 and
while they are nice, they also make me wait 15 minutes to boot. This is a
tremendous waste of time during installation. I cannot leave the site
without verification and typically I also have some tasks to do after boot.

Besides, if you have a crash or power interruption, the customers are not
happy to wait an additional 15 minutes to get online again.

Desktop computers used to take ages to boot until Microsoft declared that you
need to be ready in 30 seconds to be certified. And suddenly everything
could boot in 30 seconds or less. There is no good reason to waste techs'
time by scanning the SCSI bus in a server that does not even have the
hardware.

Regards

Baldur



lør. 25. sep. 2021 21.49 skrev Andrey Khomyakov :

> Well, folks, the replies have certainly been interesting. I did get my
> answer, which seems to be "no one cares", which, in turn, explains why
> network equipment manufacturers give very little to no attention to this
> problem. A point of clarification is I'm talking about the problem in the
> context of operating a data center with cabinet racks, not a telecom closet
> with 2 post racks.
>
> Let me just say from the get go that no one is making toolless rails a
> priority to the point of shutting vendors out of the evaluation process. I
> am not quite sure why that assumption was made by at least a few folks.
> With that said, when all things being equal or fairly equal, which they
> rarely are, that's when the rails come in as a factor.
>
> We operate over 1000 switches in our data centers, and hardware failures
> that require a switch swap are common enough where the speed of swap starts
> to matter to some extent. We probably swap a switch or two a month.
> Furthermore, those switches several of you referenced, which run for 5+
> years are not the ones we use. I think you are thinking of the legacy days
> where you pay $20k plus for a top of rack switch from Cisco, and then sweat
> that switch until it dies of old age. I used to operate exactly like that
> in my earlier days. This does not work for us for a number of reasons, and
> so we don't go down that path.
>
> We use Force10 family Dell switches which are basically Broadcom TD2+/TD3
> based switches (ON4000 and ON5200 series) and we run Cumulus Linux on
> those, so swapping hardware without swapping the operating system for us is
> quite plausible and very much possible. We just haven't had the need to
> switch away from Dell until recently after Cumulus Networks (now Nvidia)
> had a falling out with Broadcom and effectively will cease support for
> Broadcom ASICs in the near future. We have loads of network config
> automation rolled out and very little of it is tied to anything Cumulus
> Linux specific, so there is a fair chance to switch over to Sonic with low
> to medium effort on our part, thus returning to the state where we can
> switch hardware vendors with fairly low effort. We are looking at Nvidia
> (former Mellanox) switches which hardly have any toolless rails, and we are
> also looking at all the other usual suspects in the "white box" world,
> which is why I asked how many of you care about the rail kit and I got my
> answer: "very little to not at all". In my opinion, if you never ask,
> you'll never get it, so I am asking my vendors for toolless rails, even if
> most of them will likely never get there, since I'm probably one of the
> very few who even brights that question up to them. I'd say network
> equipment has always been in a sad state of being compared to, well, just
> about any other equipment and for some reason we are all more or less
> content with it. May I suggest you all at least raise that question to your
> suppliers even if you know full well the answer is "no". At least it will
> start showing the vendors there is demand for this feature.
>
> On the subject of new builds. Over the course of my career I have hired
> contractors to rack/stack large build-outs and a good number of them treat
> your equipment the same way they treat their 2x4s. They torque all the
> screws to such a degree that when you have to undo that, you are sweating
> like a pig trying to undo one screw, eventually stripping it, so you have
> to drill 

Re: Rack rails on network equipment

2021-09-25 Thread Andrey Khomyakov
Well, folks, the replies have certainly been interesting. I did get my
answer, which seems to be "no one cares", which, in turn, explains why
network equipment manufacturers give very little to no attention to this
problem. A point of clarification is I'm talking about the problem in the
context of operating a data center with cabinet racks, not a telecom closet
with 2 post racks.

Let me just say from the get go that no one is making toolless rails a
priority to the point of shutting vendors out of the evaluation process. I
am not quite sure why that assumption was made by at least a few folks.
With that said, when all things being equal or fairly equal, which they
rarely are, that's when the rails come in as a factor.

We operate over 1000 switches in our data centers, and hardware failures
that require a switch swap are common enough where the speed of swap starts
to matter to some extent. We probably swap a switch or two a month.
Furthermore, those switches several of you referenced, which run for 5+
years are not the ones we use. I think you are thinking of the legacy days
where you pay $20k plus for a top of rack switch from Cisco, and then sweat
that switch until it dies of old age. I used to operate exactly like that
in my earlier days. This does not work for us for a number of reasons, and
so we don't go down that path.

We use Force10 family Dell switches which are basically Broadcom TD2+/TD3
based switches (ON4000 and ON5200 series) and we run Cumulus Linux on
those, so swapping hardware without swapping the operating system for us is
quite plausible and very much possible. We just haven't had the need to
switch away from Dell until recently after Cumulus Networks (now Nvidia)
had a falling out with Broadcom and effectively will cease support for
Broadcom ASICs in the near future. We have loads of network config
automation rolled out and very little of it is tied to anything Cumulus
Linux specific, so there is a fair chance to switch over to Sonic with low
to medium effort on our part, thus returning to the state where we can
switch hardware vendors with fairly low effort. We are looking at Nvidia
(former Mellanox) switches which hardly have any toolless rails, and we are
also looking at all the other usual suspects in the "white box" world,
which is why I asked how many of you care about the rail kit and I got my
answer: "very little to not at all". In my opinion, if you never ask,
you'll never get it, so I am asking my vendors for toolless rails, even if
most of them will likely never get there, since I'm probably one of the
very few who even brings that question up to them. I'd say network
equipment has always been in a sad state compared to, well, just
about any other equipment and for some reason we are all more or less
content with it. May I suggest you all at least raise that question to your
suppliers even if you know full well the answer is "no". At least it will
start showing the vendors there is demand for this feature.
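For what it's worth, the sort of vendor-neutral automation I mean is nothing
exotic. A minimal illustrative sketch (the template and data below are made up
for this example, not our actual tooling): render one intent model through a
per-platform template, so swapping the NOS mostly means swapping the template:

    from jinja2 import Template

    intent = {"hostname": "tor-a-01",
              "asn": 64512,
              "uplinks": ["swp49", "swp50"]}

    # FRR-style BGP-unnumbered config, as used on Cumulus Linux; a SONiC
    # target would get its own template rendered from the same intent.
    frr_template = Template(
        "hostname {{ hostname }}\n"
        "router bgp {{ asn }}\n"
        "{% for port in uplinks %}"
        " neighbor {{ port }} interface remote-as external\n"
        "{% endfor %}"
    )

    print(frr_template.render(**intent))

Keep the intent model and the tests vendor-agnostic and the hardware underneath
becomes much easier to swap.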

On the subject of new builds. Over the course of my career I have hired
contractors to rack/stack large build-outs and a good number of them treat
your equipment the same way they treat their 2x4s. They torque all the
screws to such a degree that when you have to undo that, you are sweating
like a pig trying to undo one screw, eventually stripping it, so you have
to drill that out, etc, etc. How is that acceptable? I'm not trying to say
that _every_ contractor does that, but a lot do, to the point that it
matters. I have no interest in discussing how to babysit contractors so
they don't screw up your equipment.

I will also concede that operating 10 switches in a colo cage probably
doesn't warrant considerations for toolless rails. Operating 500 switches
and growing per site?... It slowly starts to matter. And when your outlook
is expansion, then it starts to matter even more.

Thanks to all of you for your contribution. It definitely shows the
perspective I was looking for.

Special thanks to Jason How-Kow, who linked the Arista toolless rails
(ironically we have Arista evals in the pipeline and I didn't know they do
toolless, so it's super helpful)

--Andrey


On Fri, Sep 24, 2021 at 9:37 AM Andrey Khomyakov 
wrote:

> Hi folks,
> Happy Friday!
>
> Would you, please, share your thoughts on the following matter?
>
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails).  This is where both the switch side rail and the rack side rail
> just snap in, thus not requiring a screwdriver and hands of the size no
> bigger than a hamster paw to hold those stupid proprietary screws (lookin
> at you, cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network
> equipment 

Re: Rack rails on network equipment

2021-09-25 Thread ic
Hi,

> On 24 Sep 2021, at 12:37, Andrey Khomyakov  wrote:
> 
> (you know, where most of your server ports are…)

Port side intake (switch at the front of the rack) is generally better for 
cooling the optical modules. The extra cabling difficulty is worth it.

Also, as others said, choosing an arguably inferior product only because it’s 
easier to rack sounds like a bad idea.

BR, ic



Re: Rack rails on network equipment

2021-09-25 Thread Sabri Berisha
- On Sep 24, 2021, at 11:19 AM, William Herrin b...@herrin.us wrote:

Hi,

> Seriously, the physical build of network equipment is not entirely
> competent.

Except, sometimes there is little choice. Look at 400G QSFP-DD for
example. Those optics can generate up to 20 watts of heat that needs
to be dissipated. For 800G that can go up to 25 watts.

That makes back-to-front cooling, as some people demand, very
challenging, if not impossible.
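To put a number on it, a rough sketch assuming a 1U box with 32 such ports (the
port count here is just an assumption for illustration):

    ports = 32               # assumed 400G port count on a 1U faceplate
    watts_per_optic = 20     # up to ~20 W per 400G QSFP-DD module

    print(ports * watts_per_optic, "W of heat at the port face alone")   # 640 W

That is more than half a kilowatt to move off the faceplate before you even
count the switch ASIC and power supplies.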

Thanks,

Sabri



Re: Rack rails on network equipment

2021-09-24 Thread Bryan Fields
On 9/24/21 10:58 PM, Owen DeLong via NANOG wrote:
> Meh… Turn off power supply input switch, open chassis carefully, apply 
> high-wattage 1Ω resistor across capacitor terminals for 10 seconds.
> 

If dealing with a charged capacitor, do not use a low resistance such as one
ohm.  This is the same as using a screwdriver, and will cause a big arc.  You
want to use a 100k ohm device for a couple of seconds; this will bleed it off
over 5-10 seconds.

Most (all?) power supplies will have a bleeder over any large value caps, and
will likely be shielded/encased near the input anyways.  If you let it sit for
5-10 minutes the leakage resistance will dissipate the charge in any typical
capacitor.
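Rough numbers make the point. A sketch assuming a 100 uF bulk capacitor charged
to 400 V (actual values vary widely between supplies):

    C = 100e-6    # farads, assumed bulk capacitor
    V = 400.0     # volts on the HV bus

    energy = 0.5 * C * V**2          # ~8 J stored
    peak_current_1ohm = V / 1.0      # ~400 A instantaneous into a 1 ohm resistor
    tau_100k = 100e3 * C             # RC ~ 10 s with a 100k bleeder

    print(f"stored energy: {energy:.1f} J")
    print(f"peak current into 1 ohm: {peak_current_1ohm:.0f} A")
    print(f"RC time constant with 100k: {tau_100k:.0f} s")

A 1 ohm load means a hundreds-of-amps spike (hence the arc); a 100k load limits
the current to a few milliamps and lets the charge bleed off over a handful of
time constants.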

-- 
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net


Re: Rack rails on network equipment

2021-09-24 Thread Wayne Bouchard
Didn't require any additional time at all when equipment wasn't bulky
enough to need rails in the first place...


I've never been happy about that change.


On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails).  This is where both the switch side rail and the rack side rail
> just snap in, thus not requiring a screwdriver and hands of the size no
> bigger than a hamster paw to hold those stupid proprietary screws (lookin
> at you, cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network
> equipment racking pov) to maybe 1hr... (we estimated that on average it
> took us 30 min to rack a switch from cut open the box with Juniper switches
> to 5 min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails for
> our Juniper EX4500 switches so the switch can be actually inserted from the
> back of the rack (you know, where most of your server ports are...) and not
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
> didn't work at all for us unless we used wider racks, which then, in turn,
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so
> it's a bit of a hardware lock-in from that point of view.
> 
> *So ultimately my question to you all is how much do you care about the
> speed of racking and unracking equipment and do you tell your suppliers
> that you care? How much does the time it takes to install or replace a
> switch impact you?*
> 
> I was having a conversation with a vendor and was pushing hard on the fact
> that their switches will end up being actually costlier for me long term
> just because my switch replacement time quadruples at least, thus requiring
> me to staff more remote hands. Am I overthinking this and artificially
> limiting myself by excluding vendors who don't ship with toolless rails
> (which is all of them now except Dell)?
> 
> Thanks for your time in advance!
> --Andrey

---
Wayne Bouchard
w...@typo.org
Network Dude
http://www.typo.org/~web/


Re: Rack rails on network equipment

2021-09-24 Thread Owen DeLong via NANOG



> On Sep 24, 2021, at 3:35 PM, Niels Bakker  wrote:
> 
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
>> Which - why do I have to order different part numbers for back to front 
>> airflow?  It's just a fan, can't it be made reversible?  Seems like that 
>> would be cheaper than stocking alternate part numbers.
> 
> The fan is inside the power supply right next to the high-voltage capacitors. 
> You shouldn't be near that without proper training.

Meh… Turn off power supply input switch, open chassis carefully, apply 
high-wattage 1Ω resistor across capacitor terminals for 10 seconds.

There isn’t going to be any high voltage left after that.

Owen

> 
> 
>   -- Niels.



Re: Rack rails on network equipment

2021-09-24 Thread Martin Hannigan
On Fri, Sep 24, 2021 at 1:34 PM Jay Hennigan  wrote:

> On 9/24/21 09:37, Andrey Khomyakov wrote:
>
> > *So ultimately my question to you all is how much do you care about the
> > speed of racking and unracking equipment and do you tell your suppliers
> > that you care? How much does the time it takes to install or replace a
> > switch impact you?*
>
> Very little. I don't even consider it when comparing hardware. It's a
> nice-to-have but not a factor in purchasing.
>
> You mention a 25-minute difference between racking a no-tools rail kit
> and one that requires a screwdriver. At any reasonable hourly rate for
> someone to rack and stack that is a very small percentage of the cost of
> the hardware. If a device that takes half an hour to rack is $50 cheaper
> than one that has the same specs and takes five minutes, you're past
> break-even to go with the cheaper one.
>

This. Once they're racked, they're not going anywhere. I would summarize it as:
they're certainly nice, but more of a nice-to-have. The only racking
systems I try to avoid are the WECO (Western Electric Company) standard,
with the square "holes".
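Jay's break-even point above is easy to sanity-check. A sketch assuming a
$75/hour remote-hands rate (substitute your own number):

    minutes_saved = 25        # racking-time difference cited above
    hourly_rate = 75.0        # assumed remote-hands rate, USD/hour

    labor_value = minutes_saved / 60 * hourly_rate
    print(f"~${labor_value:.0f} of labor per install")   # ~$31

So a device that racks slower but costs $50 less per unit is already ahead,
before you account for how rarely any given switch is actually re-racked.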

Warm regards,

-M<


Re: Rack rails on network equipment

2021-09-24 Thread Chris Adams
Once upon a time, Niels Bakker  said:
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
> >Which - why do I have to order different part numbers for back to
> >front airflow?  It's just a fan, can't it be made reversible?
> >Seems like that would be cheaper than stocking alternate part
> >numbers.
> 
> The fan is inside the power supply right next to the high-voltage
> capacitors. You shouldn't be near that without proper training.

I wasn't talking about opening up the case, although lots of fans are
themselves hot-swappable, so it should be possible to do without opening
anything.  They are just DC motors though, so it seems like a fan could
be built to reverse (although maybe the blade characteristics don't work
as well in the opposite direction).

-- 
Chris Adams 


Re: Rack rails on network equipment

2021-09-24 Thread William Herrin
On Fri, Sep 24, 2021 at 3:36 PM Niels Bakker  wrote:
> * c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
> >Which - why do I have to order different part numbers for back to
> >front airflow?  It's just a fan, can't it be made reversible?  Seems
> >like that would be cheaper than stocking alternate part numbers.
>
> The fan is inside the power supply right next to the high-voltage
> capacitors. You shouldn't be near that without proper training.

Last rack switch I bought, no fan was integrated into the power
supply. Instead, a blower module elsewhere forced air past the various
components including the power supply. Efficient power supplies (which
you really should be using in 24/7 data centers) don't even generate
all that much heat.

Regards,
Bill Herrin




-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Rack rails on network equipment

2021-09-24 Thread Niels Bakker

* c...@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
Which - why do I have to order different part numbers for back to 
front airflow?  It's just a fan, can't it be made reversible?  Seems 
like that would be cheaper than stocking alternate part numbers.


The fan is inside the power supply right next to the high-voltage 
capacitors. You shouldn't be near that without proper training.



-- Niels.


Re: Rack rails on network equipment

2021-09-24 Thread Chris Adams
Once upon a time, William Herrin  said:
> I care, but it bothers me less that the inconsiderate air flow
> implemented in quite a bit of network gear. Side cooling? Pulling air
> from the side you know will be facing the hot aisle? Seriously, the
> physical build of network equipment is not entirely competent.

Which - why do I have to order different part numbers for back to front
airflow?  It's just a fan, can't it be made reversible?  Seems like that
would be cheaper than stocking alternate part numbers.
-- 
Chris Adams 


Re: Rack rails on network equipment

2021-09-24 Thread Joe Maimon




Andrey Khomyakov wrote:

Hi folks,
Happy Friday!


Interesting tidbit is that we actually used to manufacture custom 
rails for our Juniper EX4500 switches so the switch can be actually 
inserted from the back of the rack (you know, where most of your 
server ports are...) and not be blocked by the zero-U PDUs and all the 
cabling in the rack. Stock rails didn't work at all for us unless we 
used wider racks, which then, in turn, reduced floor capacity.



Inserting switches into the back of the rack, where it's nice and hot, 
usually suggests having reverse air flow hardware. Usually not stock.


Also, since it's then sucking in hot air (from the midpoint of the cab or 
so), it is still hotter than having it up front, or leaving the U open 
in front.


On the other hand, most switches are quite fine running much hotter than 
servers with their hard drives and overclocked CPUs. Or perhaps that's 
why you keep changing them.


Personally I prefer pre-wiring front-to-back with patch panels in the 
back. Works for fiber and copper RJ, not so much all-in-one cables.


Joe



Re: Rack rails on network equipment

2021-09-24 Thread George Herbert
I’ve seen Dell rack equipment leap for safety (ultimately very very 
unsuccessfully…) in big earthquakes.  Lots of rack screws for me.

-George 

Sent from my iPhone

> On Sep 24, 2021, at 9:41 AM, Andrey Khomyakov  
> wrote:
> 
> 
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
> Juniper switching products out of our data centers (reasons for that are not 
> quite relevant to the topic). We selected Dell switches in part due to Dell 
> using "quick rails'' (sometimes known as speed rails or toolless rails).  
> This is where both the switch side rail and the rack side rail just snap in, 
> thus not requiring a screwdriver and hands of the size no bigger than a 
> hamster paw to hold those stupid proprietary screws (lookin at you, cisco) 
> to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network 
> equipment racking pov) to maybe 1hr... (we estimated that on average it took 
> us 30 min to rack a switch from cut open the box with Juniper switches to 5 
> min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so it's 
> a bit of a hardware lock-in from that point of view. 
> 
> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?
> 
> I was having a conversation with a vendor and was pushing hard on the fact 
> that their switches will end up being actually costlier for me long term just 
> because my switch replacement time quadruples at least, thus requiring me to 
> staff more remote hands. Am I overthinking this and artificially limiting 
> myself by excluding vendors who don't ship with toolless rails (which is all 
> of them now except Dell)?
> 
> Thanks for your time in advance!
> --Andrey


Re: Rack rails on network equipment

2021-09-24 Thread Joe Greco
On Fri, Sep 24, 2021 at 02:49:53PM -0500, Doug McIntyre wrote:
> You mention about hardware lockin, but I wouldn't trust Dell to not switch
> out the design on their "next-gen" product, when they buy from a
> different OEM, as they are want to do, changing from OEM to OEM for
> each new product line. At least that is their past behavior over many years 
> in the past that I've been buying Dell switches for simple things. 
> Perhaps they've changed their tune. 

That sounds very much like their 2000's-era behaviour when they were
sourcing 5324's from Accton, etc.  Dell has more recently acquired
switch companies such as Force10 and it seems like they have been
doing more in-house stuff this last decade.  There has been somewhat
better stability in the product line IMHO.

> For me, it really doesn't take all that much time to mount cage nuts
> and screw a switch into a rack. Its all pretty 2nd nature to me, look
> at holes to see the pattern, snap in all my cage nuts all at once and
> go. If you are talking rows of racks of build, it should be 2nd nature?

The quick rails on some of their new gear are quite nice, but the best
part of having rails is having the support on the back end.

> Also, I hate 0U power, for that very reason, there's never room to
> move devices in and out of the rack if you do rear-mount networking.

Very true.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: Rack rails on network equipment

2021-09-24 Thread Doug McIntyre
On Fri, Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
>  We selected Dell switches in part due
> to Dell using "quick rails'' (sometimes known as speed rails or toolless
> rails). 

Hmm, I haven't had any of those on any of my Dell switches, but then
again, I haven't bought any in a while. 

You mention hardware lock-in, but I wouldn't trust Dell not to switch
out the design on their "next-gen" product when they buy from a
different OEM, as they are wont to do, changing from OEM to OEM for
each new product line. At least that was their behavior over the many years 
that I've been buying Dell switches for simple things. 
Perhaps they've changed their tune. 

For me, it really doesn't take all that much time to mount cage nuts
and screw a switch into a rack. It's all pretty second nature to me: look
at the holes to see the pattern, snap in all my cage nuts at once, and
go. If you are talking rows of racks to build, it should be second nature?

Also, I hate 0U power for that very reason: there's never room to
move devices in and out of the rack if you do rear-mount networking.


Re: Rack rails on network equipment

2021-09-24 Thread Randy Carpenter


Considering that the typical $5 pieces of bent metal list for ~$500 from most 
vendors, can you imagine the price of fancy tool-less rack kits?

Brand new switch: $2,000
Rack kit: $2,000


-Randy


RE: Rack rails on network equipment

2021-09-24 Thread Kevin Menzel via NANOG
Hi Andrey:

I work in higher education; we have hundreds upon hundreds of switches in at 
least a hundred network closets, as well as multiple datacenters, etc. We do a 
full lease refresh every 3-5 years of the full environment. The amount of time 
it takes me to get a switch out of a box/racked is minimal compared to the 
amount of time it takes for the thing to power on. (In that it usually takes 
about 3 minutes, potentially less, depending on my rhythm). Patching a full 48 
ports (correctly) takes longer than racking. Maybe that’s because I have far 
too much practice doing this at this point.

If there’s one time waste in switch install, from my perspective, it’s how long 
it takes the things to boot up. When I’m installing the switch it’s a minor 
inconvenience. When something reboots (or when something needs to be reloaded 
to fix a bug – glares at the Catalyst switches in my life) in the middle of the 
day, it’s 7-10 minutes of outage for connected operational hosts, which is… a 
much bigger pain.

So long story short, install time is a near-zero care in my world.

That being said, especially when I deal with 2 post rack gear – the amount of 
sag over time I’m expected to be OK with in any given racking solution DOES 
somewhat matter to me. (glares again at the Catalyst switches in my life). 
Would I like good, solid, well manufactured ears and/or rails that don’t change 
for no reason between equipment revisions? Heck yes.

--Kevin


From: NANOG  On Behalf 
Of Andrey Khomyakov
Sent: September 24, 2021 12:38
To: Nanog 
Subject: Rack rails on network equipment



Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
Juniper switching products out of our data centers (reasons for that are not 
quite relevant to the topic). We selected Dell switches in part due to Dell 
using "quick rails'' (sometimes known as speed rails or toolless rails).  This 
is where both the switch side rail and the rack side rail just snap in, thus 
not requiring a screwdriver and hands of the size no bigger than a hamster paw 
to hold those stupid proprietary screws (lookin at your, cisco) to attach those 
rails.
We went from taking 16hrs to build a row of compute (from just network 
equipment racking pov) to maybe 1hr... (we estimated that on average it took us 
30 min to rack a switch from cut open the box with Juniper switches to 5 min 
with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for our 
Juniper EX4500 switches so the switch can be actually inserted from the back of 
the rack (you know, where most of your server ports are...) and not be blocked 
by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work at 
all for us unless we used wider racks, which then, in turn, reduced floor 
capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so it's a 
bit of a hardware lock-in from that point of view.

So ultimately my question to you all is how much do you care about the speed of 
racking and unracking equipment and do you tell your suppliers that you care? 
How much does the time it takes to install or replace a switch impact you?

I was having a conversation with a vendor and was pushing hard on the fact that 
their switches will end up being actually costlier for me long term just 
because my switch replacement time quadruples at least, thus requiring me to 
staff more remote hands. Am I overthinking this and artificially limiting 
myself by excluding vendors who don't ship with toolless rails (which is all of 
them now except Dell)?

Thanks for your time in advance!
--Andrey


Re: Rack rails on network equipment

2021-09-24 Thread Alain Hebert

    Hi,

    In my opinion:

        That time you take to rack devices with classic rails can be 
viewed as a bonding moment and, while appreciated by the device, will 
reduce the downtime issues in the long run that you may have if you just 
rack & slap 'em.


    It is also Friday =D.

-
Alain Hebert                aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443

On 9/24/21 12:56 PM, Grant Taylor via NANOG wrote:

On 9/24/21 10:37 AM, Andrey Khomyakov wrote:
So ultimately my question to you all is how much do you care about 
the speed of racking and unracking equipment and do you tell your 
suppliers that you care? How much does the time it takes to install 
or replace a switch impact you?


I was having a conversation with a vendor and was pushing hard on the 
fact that their switches will end up being actually costlier for me 
long term just because my switch replacement time quadruples at 
least, thus requiring me to staff more remote hands. Am I 
overthinking this and artificially limiting myself by excluding 
vendors who don't ship with toolless rails (which is all of them now 
except Dell)?


My 2¢ opinion / drive by comment while in the break room to get coffee 
and a doughnut is:


Why are you letting -- what I think is -- a relatively small portion 
of the time spent interacting with a device influence the choice of 
the device?


In the grand scheme of things, where will you spend more time 
interacting with the device: racking & unracking, or administering the 
device throughout its life cycle?  I would focus on the larger 
portion of those times.


Sure, automation is getting a lot better.  But I bet that your network 
administrators will spend more than an hour interacting with the 
device over the multiple years that it's in service.  As such, I'd 
give the network administrators more input than the installers racking 
& unracking.  If nothing else, break it down proportionally based on 
time and / or business expense for wages therefor.



Thanks for your time in advance!


The coffee is done brewing and I have a doughnut, so I'll take my 
leave now.


Have a good day ~> weekend.







Re: Rack rails on network equipment

2021-09-24 Thread richey goldberg
30 minutes to pull a switch from the box, stick ears on it, and mount it in the 
rack seems like a really long time. I think at most that portion is 
a 5-10 minute job if I unbox it at my desk. I use a drill with the 
correct torque setting and a magnetic bit to put the ears on while it boots on my 
desk so I can drop a base config on it.

If you are replacing defective switches often enough that this is an issue, 
I think you have bigger problems to address.

Like others said, most switches are in the rack for the very long haul, 
often in excess of 5 years.   The amount of time required to do the initial 
install is insignificant in the grand scheme of things.

-richey

From: NANOG  on behalf of 
Andrey Khomyakov 
Date: Friday, September 24, 2021 at 12:38 PM
To: Nanog 
Subject: Rack rails on network equipment
Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco and 
Juniper switching products out of our data centers (reasons for that are not 
quite relevant to the topic). We selected Dell switches in part due to Dell 
using "quick rails'' (sometimes known as speed rails or toolless rails).  This 
is where both the switch side rail and the rack side rail just snap in, thus 
not requiring a screwdriver and hands of the size no bigger than a hamster paw 
to hold those stupid proprietary screws (lookin at your, cisco) to attach those 
rails.
We went from taking 16hrs to build a row of compute (from just network 
equipment racking pov) to maybe 1hr... (we estimated that on average it took us 
30 min to rack a switch from cut open the box with Juniper switches to 5 min 
with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for our 
Juniper EX4500 switches so the switch can be actually inserted from the back of 
the rack (you know, where most of your server ports are...) and not be blocked 
by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work at 
all for us unless we used wider racks, which then, in turn, reduced floor 
capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so it's a 
bit of a hardware lock-in from that point of view.

So ultimately my question to you all is how much do you care about the speed of 
racking and unracking equipment and do you tell your suppliers that you care? 
How much does the time it takes to install or replace a switch impact you?

I was having a conversation with a vendor and was pushing hard on the fact that 
their switches will end up being actually costlier for me long term just 
because my switch replacement time quadruples at least, thus requiring me to 
staff more remote hands. Am I overthinking this and artificially limiting 
myself by excluding vendors who don't ship with toolless rails (which is all of 
them now except Dell)?

Thanks for your time in advance!
--Andrey


Re: Rack rails on network equipment

2021-09-24 Thread Mauricio Rodriguez via NANOG
Andrey, hi.

The speed rails are nice, and are effective in optimizing the time it takes
to rack equipment.  It's pretty much par for the course on servers today
(thank goodness!), and not so much on network equipment.  I suppose the
reasons being what others have mentioned - longevity of service life,
frequency at which network gear is installed, etc.  As well, a typical
server to switch ratio, depending on number of switch ports and
fault-tolerance configurations, could be something like 38:1 in a dense 1U
server install.  So taking a few more minutes on the switch installation
isn't so impactful - taking a few more minutes on each server installation
can really become a problem.
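
A rough sketch of that ratio effect in Python (the per-device minutes below
are assumed values for illustration, not figures from this thread):

# Why a few extra minutes per server hurts more than the same delay
# on the lone ToR switch at roughly a 38:1 server-to-switch ratio.
servers_per_switch = 38        # dense 1U install, per the ratio above
extra_min_per_server = 3       # assumed extra racking time per server
extra_min_per_switch = 25      # assumed extra racking time per switch

server_overhead = servers_per_switch * extra_min_per_server   # 114 minutes
switch_overhead = 1 * extra_min_per_switch                    #  25 minutes

print(server_overhead, switch_overhead)
# The per-server delay costs roughly 4-5x more time per rack than the
# per-switch delay, which is the point being made above.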

A 30-minute time to install a regular 1U ToR switch seems a bit excessive.
Maybe the very first time a tech installs any specific model switch with a
unique rail configuration.  After that one, it should be around 10 minutes
for most situations.  I am assuming some level of teamwork where there is
an installer at the front of the cabinet and another at the rear, and they
work in tandem to install cage nuts, install front/rear rails (depending on
switch), position the equipment, and affix to the cabinet.  I can see the
30 minutes if you have one person, it's a larger/heavier device (like the
EX4500) and the installer is forced to do some kind of crazy balancing act
with the switch (not recommended), or has to use a server lift to install
it.

Those speed rails as well are a bit of a challenge to install if it's not a
team effort. So, I'm wondering if in addition to using speed rails, you may
have changed from a one-tech installation process to a two-tech team
installation process?

Best Regards,

Mauricio Rodriguez

Founder / Owner

Fletnet Network Engineering (www.fletnet.com)

mauricio.rodrig...@fletnet.com

Office: +1 786-309-1082

Direct: +1 786-309-5493



On Fri, Sep 24, 2021 at 12:41 PM Andrey Khomyakov <
khomyakov.and...@gmail.com> wrote:

> Hi folks,
> Happy Friday!
>
> Would you, please, share your thoughts on the following matter?
>
> Back some 5 years ago we pulled the trigger and started phasing out Cisco
> and Juniper switching products out of our data centers (reasons for that
> are not quite relevant to the topic). We selected Dell switches in part due
> to Dell using "quick rails" (sometimes known as speed rails or toolless
> rails).  This is where both the switch-side rail and the rack-side rail
> just snap in, thus not requiring a screwdriver and hands no
> bigger than a hamster's paw to hold those stupid proprietary screws (looking
> at you, Cisco) to attach those rails.
> We went from taking 16hrs to build a row of compute (from just network
> equipment racking pov) to maybe 1hr... (we estimated that on average it
> took us 30 min to rack a switch from cut open the box with Juniper switches
> to 5 min with Dell switches)
> Interesting tidbit is that we actually used to manufacture custom rails
> for our Juniper EX4500 switches so the switch can be actually inserted from
> the back of the rack (you know, where most of your server ports are...) and
> not be blocked by the zero-U PDUs and all the cabling in the rack. Stock
> rails didn't work at all for us unless we used wider racks, which then, in
> turn, reduced floor capacity.
>
> As far as I know, Dell is the only switch vendor doing toolless rails so
> it's a bit of a hardware lock-in from that point of view.
>
> *So ultimately my question to you all is how much do you care about the
> speed of racking and unracking equipment and do you tell your suppliers
> that you care? How much does the time it takes to install or replace a
> switch impact you?*
>
> I was having a conversation with a vendor and was pushing hard on the fact
> that their switches will end up being actually costlier for me long term
> just because my switch replacement time quadruples at least, thus requiring
> me to staff more remote hands. Am I overthinking this and artificially
> limiting myself by excluding vendors who don't ship with toolless rails
> (which is all of them now except Dell)?
>
> Thanks for your time in advance!
> --Andrey
>



Re: Rack rails on network equipment

2021-09-24 Thread William Herrin
On Fri, Sep 24, 2021 at 9:39 AM Andrey Khomyakov
 wrote:
> Interesting tidbit is that we actually used to manufacture custom rails for 
> our Juniper EX4500 switches so the switch can be actually inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work at all for us unless we used wider racks, which then, in turn, 
> reduced floor capacity.

Hi Andrey,

If your power cable management horizontally blocks the rack ears,
you're doing it wrong. The vendor could and should be making life
easier but you're still doing it wrong. If you don't want to leave
room for zero-U PDUs, don't use them. And point the outlets towards
the rear of the cabinet not the center so that installation of the
cables doesn't block repair.


> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?

I care, but it bothers me less than the inconsiderate airflow
implemented in quite a bit of network gear. Side cooling? Pulling air
from the side you know will be facing the hot aisle? Seriously, the
physical build of network equipment is not entirely competent.

Regards,
Bill Herrin



-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Rack rails on network equipment

2021-09-24 Thread Denis Fondras
> You mention a 25-minute difference between racking a no-tools rail kit and
> one that requires a screwdriver. At any reasonable hourly rate for someone
> to rack and stack that is a very small percentage of the cost of the
> hardware. If a device that takes half an hour to rack is $50 cheaper than
> one that has the same specs and takes five minutes, you're past break-even
> to go with the cheaper one.
> 

I can understand the OP if his job is to provide/resell the switch and rack it
and then someone else (the customer) is operating it ;-)

As my fellow netops said, the switches are installed for a long time in the
racks (5+ years). I am willing to trade installation ease for
performance/features/stability. When I need to replace one, it is never in a hurry
(and cabling properly takes more time than racking).

So easily installed rails may be a plus, but far behind everything else.


Re: Rack rails on network equipment

2021-09-24 Thread Jay Hennigan

On 9/24/21 09:37, Andrey Khomyakov wrote:

*So ultimately my question to you all is how much do you care about the 
speed of racking and unracking equipment and do you tell your suppliers 
that you care? How much does the time it takes to install or replace a 
switch impact you?*


Very little. I don't even consider it when comparing hardware. It's a 
nice-to-have but not a factor in purchasing.


You mention a 25-minute difference between racking a no-tools rail kit 
and one that requires a screwdriver. At any reasonable hourly rate for 
someone to rack and stack that is a very small percentage of the cost of 
the hardware. If a device that takes half an hour to rack is $50 cheaper 
than one that has the same specs and takes five minutes, you're past 
break-even to go with the cheaper one.
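
A minimal sketch of that break-even arithmetic in Python (the $100/hr
remote-hands rate is an assumption for illustration, not a figure from the
thread):

delta_minutes = 30 - 5            # screwdriver rails vs. toolless rails
hourly_rate = 100.0               # assumed remote-hands rate, $/hr
labor_delta = delta_minutes / 60 * hourly_rate

print(f"Extra labor per install: ${labor_delta:.2f}")   # ~$41.67
# That is under the $50 hardware discount, so the cheaper, slower-to-rack
# switch still comes out ahead on a single install, which matches the
# break-even point noted above.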


Features, warranty, performance over the lifetime of the hardware are 
far more important to me.


If there were a network application similar to a rock band going on tour, 
where equipment needed to be racked up, knocked down, and re-racked 
multiple times a week, it would definitely be a factor. Not so much in a 
data center where you change a switch out maybe once every five years.


And there's always the case where all of that fancy click-together 
hardware requires square holes and the rack has threaded holes so you've 
got to modify it anyway.


--
Jay Hennigan - j...@west.net
Network Engineering - CCIE #7880
503 897-8550 - WB6RDV


Re: Rack rails on network equipment

2021-09-24 Thread Brandon Butterworth
On Fri Sep 24, 2021 at 09:37:58AM -0700, Andrey Khomyakov wrote:
> As far as I know, Dell is the only switch vendor doing toolless rails

Having fought for hours trying to get servers with those
rails into some DCs' racks, I'd go with slightly slow but fits
everywhere.

> *So ultimately my question to you all is how much do you care
> about the speed of racking and unracking equipment

I don't care as long as it fits in the rack properly; the time
taken to do that is small compared to the time it'll be there (many
years for us). I use an electric screwdriver if I need to do many. I
care more about what is inside the box than the box itself, as I'll
have to deal with their software for years.

brandon


Re: Rack rails on network equipment

2021-09-24 Thread Grant Taylor via NANOG

On 9/24/21 10:37 AM, Andrey Khomyakov wrote:
So ultimately my question to you all is how much do you care about the 
speed of racking and unracking equipment and do you tell your suppliers 
that you care? How much does the time it takes to install or replace a 
switch impact you?


I was having a conversation with a vendor and was pushing hard on the 
fact that their switches will end up being actually costlier for me long 
term just because my switch replacement time quadruples at least, thus 
requiring me to staff more remote hands. Am I overthinking this and 
artificially limiting myself by excluding vendors who don't ship with 
toolless rails (which is all of them now except Dell)?


My 2¢ opinion / drive by comment while in the break room to get coffee 
and a doughnut is:


Why are you letting -- what I think is -- a relatively small portion of 
the time spent interacting with a device influence the choice of the device?


In the grand scheme of things, where will you spend more time 
interacting with the device: racking & unracking, or administering the 
device throughout its life cycle?  I would focus on the larger portion 
of those times.


Sure, automation is getting a lot better.  But I bet that your network 
administrators will spend more than an hour interacting with the device 
over the multiple years that it's in service.  As such, I'd give the 
network administrators more input than the installers racking & 
unracking.  If nothing else, break it down proportionally based on time 
and / or business expense for wages therefor.



Thanks for your time in advance!


The coffee is done brewing and I have a doughnut, so I'll take my leave now.

Have a good day ~> weekend.



--
Grant. . . .
unix || die





Re: Rack rails on network equipment

2021-09-24 Thread Mel Beckman
We don’t care. We rack up switches maybe once or twice a year. It’s just not 
worth the effort to streamline. If we were installing dozens of switches a 
month, maybe. But personally I think it’s crazy to make rackability your 
primary reason for choosing a switch vendor. Do you base your automobile 
purchase decision on how easy it is to replace windshield wipers?

 -mel beckman

> On Sep 24, 2021, at 9:40 AM, Andrey Khomyakov  
> wrote:
> 
> So ultimately my question to you all is how much do you care about the speed 
> of racking and unracking equipment and do you tell your suppliers that you 
> care? How much does the time it takes to install or replace a switch impact 
> you?


Rack rails on network equipment

2021-09-24 Thread Andrey Khomyakov
Hi folks,
Happy Friday!

Would you, please, share your thoughts on the following matter?

Back some 5 years ago we pulled the trigger and started phasing out Cisco
and Juniper switching products out of our data centers (reasons for that
are not quite relevant to the topic). We selected Dell switches in part due
to Dell using "quick rails'' (sometimes known as speed rails or toolless
rails).  This is where both the switch side rail and the rack side rail
just snap in, thus not requiring a screwdriver and hands of the size no
bigger than a hamster paw to hold those stupid proprietary screws (lookin
at your, cisco) to attach those rails.
We went from taking 16hrs to build a row of compute (from just network
equipment racking pov) to maybe 1hr... (we estimated that on average it
took us 30 min to rack a switch from cut open the box with Juniper switches
to 5 min with Dell switches)
Interesting tidbit is that we actually used to manufacture custom rails for
our Juniper EX4500 switches so the switch can be actually inserted from the
back of the rack (you know, where most of your server ports are...) and not
be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails
didn't work at all for us unless we used wider racks, which then, in turn,
reduced floor capacity.

As far as I know, Dell is the only switch vendor doing toolless rails so
it's a bit of a hardware lock-in from that point of view.

*So ultimately my question to you all is how much do you care about the
speed of racking and unracking equipment and do you tell your suppliers
that you care? How much does the time it takes to install or replace a
switch impact you?*

I was having a conversation with a vendor and was pushing hard on the fact
that their switches will end up being actually costlier for me long term
just because my switch replacement time quadruples at least, thus requiring
me to staff more remote hands. Am I overthinking this and artificially
limiting myself by excluding vendors who don't ship with toolless rails
(which is all of them now except Dell)?

Thanks for your time in advance!
--Andrey