On Jun 21, 2006, at 1:05 AM, Blaster wrote:
>> If the machines are important enough, there should be someone on-site
>> 24/7. If the company is too cheap to hire adequate staffing to be
>> on-hand for critical systems (we used to call this a "NOC", back when
>> companies hired clueful people), then remote console access is a
>> reasonable substitute.
>
> I'd say the jury is still out on this one. There are cost benefits to
> both solutions.

Yes, but I tend to ignore cost and pursue uptime. :)
> Obviously it's always nice to have a human on site, but I've worked
> with DCs that didn't have anyone capable of doing anything more than
> changing a tape, and with RSC and ALOMs, I never needed anyone. If a
> piece of hardware broke, Sun kindly sent someone out to fix it. I
> gave them the rack location and unit number, and they called me when
> the wrench light no longer illuminated. What more did I need?

That sounds like a reasonable plan. I have less faith in vendor
support, though, to be honest. I spent a good bit of time in the early
1990s telling Sun engineers things they didn't know about their own
products. (No offense intended to the Sun guys here.)
>> Console access should be simple, bulletproof, secure, and absolutely
>> reliable...and for that, a good old-fashioned terminal simply can't
>> be beat.
>
> Perhaps. But tell me, I've worked in DCs that had 900 SPARC boxes.
> It certainly wasn't feasible to have 900 ASCII terminals sucking
> power and space.

The largest installation I've worked in was an ISP with about 1200
SPARC machines. We had three ASCII terminals (DEC VT320s) in the room.
Two were on a table with very long serial cables, kept coiled up on
the table, that could reach any system in the room. The third was on
a roll-around cart that could be taken anywhere in the room if an
admin felt the need to be physically close to a machine while working
on it. That scheme worked very well for us.
> I'll take the so-called complexity of an RSC or ALOM any day. I can
> remotely power cycle a system with them; I can't do that with an
> ASCII terminal. I've yet to have an ALOM or RSC fail on me, and I've
> worked with thousands of them.

I've used RSCs in three machines, and *all three* have given me
problems of one sort or another. I was very unlucky with them, I
guess. :-( But pure simplicity is what I strive for at every
opportunity, and that's the viewpoint my comments come from. :-)
>> I mean no disrespect, but I suspect you may have been infected by
>> the cult of "it's been around a while, therefore it's automatically
>> obsolete", also known as "if it's old, it's bad, and if it's not
>> new, it's old." :-(
>
> I mean no disrespect either, but you sound like an old dog who needs
> to learn some new tricks.
That's quite possible! :-) I'm a tech guy, and I love new gadgets; I
rarely if ever fear them...I just adhere to rules of absolute
simplicity wherever possible. Too much of our industry seems built
around the idea that increased complexity is the same thing as
innovation. If someone is going to add complexity to something I'm
responsible for, I want REAL TANGIBLE BENEFITS from that added
complexity. The only thing I've seen in the past decade in this
context which gives me those benefits is the use of a terminal server
as a console server...so that's what I use.
> Just because something is "NEW" and more complex doesn't make it bad.
Of course not. But just because something is new and more complex
doesn't make it good either. Each thing must be analyzed and
considered for new benefits instead of being blindly accepted, because
more often than not these days, the "new benefits" tend only to include
"increased profits for the vendor". I want to know what benefits I am
going to realize when the hair-greased-back salesdroid is doing his
spiel on the whiteboard. I want to hear uptime, accessibility,
maintainability, traceability, clean design...if instead I hear words
like "synergistic", "action item", "global leader", or "committed",
that salesdroid is getting an escort to the door instead of a purchase
order. This business preys on gullible buyers who will gleefully buy
anything that's new and shiny, whereupon everything else they own
somehow magically "stops working". I'm not saying *you* are doing that, but a
lot of people do, and my position is perhaps an overreaction toward
bucking that trend...at least in the small part of the world in which
my words have influence. Upgrading for the sake of upgrading is bogus,
and not all upgrades are actually "upgrades".
> My Pentium 4 has many millions more transistors than my 4 MHz 8088
> did, and I've had far fewer problems with it than I did with my
> first IBM PC. My hard drives now hold gigabytes instead of
> megabytes, but last for years instead of months.
We're on the same side there. This is my hobby as well as my career,
my friend, and it's been both as long as I can remember...please
believe me when I tell you it's not a matter of not wanting to move
forward with advancing technology. I just want those advances to be
real, not just the figment of the imagination of a vendor rep with an
overactive sales gland.
> As complexity has increased, reliability has increased as well.
Well, I absolutely cannot agree with that part...sitting ten feet from
me are two DEC RK05 hard drives which were built in 1972 and still
work great (hobby restoration stuff)...while sitting here on my desk
are about twelve IDE drives, from 120GB to 250GB, all dead. "Cost
engineering" and suit-driven, profit-centric,
customer-satisfaction-be-damned behavior has hurt reliability to the
point where I don't trust any consumer-grade hardware anymore, and
even the "biggies" like Sun and HP are using consumer-grade components
in their otherwise-enterprise-grade systems these days.
>> *laugh* Interesting, I have remote console access for my unmanned
>> sites, but you'd better believe the first person who takes the ASCII
>> terminals out of those rooms will be looking for a job the next day.
>
> And anyone who came to interview for one of my job openings who
> suggested we use ASCII terminals would be thanked for his time and
> have his resume thrown in the trash before he could get his car
> started.

I think we'll just have to agree to disagree there, as anyone
interviewing with me would be shown the door if he/she asked about
putting big color monitors in the machine room! ;)
>> My original point, however, is that servers in a datacenter are a
>> very bad place for graphical consoles. Is my opinion on this matter
>> commonly held? Nope. Do people laugh at me for it? Yes, on
>> occasion. Do the companies I work for clamor all over themselves
>> for the uptime stats that I deliver to their production groups? Yup.
>
> We're not talking about graphics terminals. Just more reliable and
> efficient ways of accessing consoles, using modern-day methods that
> provide more control over the server it's talking to. An RSC or
> ALOM is hardly graphical.

Agreed, but we were originally talking about graphics terminals.
Admittedly I have much less of an issue with an RSC or ALOM than I do
with a graphics head on a server.
> Ever read Jonathan Schwartz's blog? I work in data centers similar
> to the ones he discusses, with power/space/cooling problems. We
> shuttled dozens of functioning Enterprise-class servers out the door
> for smaller-space/lower-power systems. Are you going to tell me that
> was a bad thing now?

Only if you're deploying consumer-grade PC hardware in datacenter
applications. But even if you're doing that, that's your business, and
more power to you.
-Dave
--
Dave McGuire
Cape Coral, FL
_______________________________________________
SunRay-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/sunray-users