On 03/12/2017 11:34 PM, Gabe Goldberg wrote:
> Bridging the Distance
> Remote system control, despite its complexity, is worth it
>
> Remote system programming used to mean using a keypunch machine
> outside the data center. But card decks still needed to get to the
> clunky 2540 or equivalent unit record device. Maybe we had a key or
> door code to do this ourselves, or maybe we handed it to an operator.
> Then, 3270-style devices allowed for increased distance—and hike—to
> and from our systems. Finally, networked terminals and workstations
> made location irrelevant. Whether in an office or working from home, z
> Systems programmers/administrators can now work from the next office,
> building, city, time zone or continent.
>
> But should this be happening? Do today's system programmers need
> physical access to data centers? Why or why not? Does being able to
> see and touch one's systems hold real value, or is it just a matter of
> professional pride? And to what extent is it practical to have a lack
> of immediacy to data centers, operations staff, users or matters?
>
> http://destinationz.org/Mainframe-Solution/Business-Case/Bridging-the-Distance
>
>
>
Hardware is certainly more reliable than in the old days and in some
ways simpler, but in other ways more complex. 

Twenty years ago, at the organization where I worked, it was the System
Programmers and Technical Services staff who were familiar on some level with
everything relevant to the availability and maintenance of equipment in the
data center.  We had input on the design of a new data center and the
environmental systems to support it. We were directly involved in
decisions on where to place new equipment or how to remove old
equipment, even to the extent of knowing where spare floor tiles were
kept and occasionally cutting some of the tiles ourselves.  We owned
the area beneath the raised floor, planned I/O device addresses and
channel assignments, planned cable routing, knew and documented where
the inter-equipment and power cables were laid and connected, and in most
cases were the ones who had installed them.  We were familiar with the
building environmental systems: power, cooling, UPS, fire suppression
(and even some plumbing), to the extent those affected data center
availability.

This put the System Programmers in a position to notice when something
wasn't quite right in the data center, to know when something potentially
disruptive was planned, and even to look over the shoulder of an IBM CE
doing "concurrent" maintenance to double-check that they hadn't
accidentally taken offline the last interface to a critical controller.
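That kind of shoulder-surfing really amounts to a quick sanity check that can
still be done from the console.  As a minimal sketch (device number 1000 and
CHPID 40 here are hypothetical, chosen only for illustration):

   Display the channel paths to the device and whether each is online and
   operational, before the CE touches anything:
      D M=DEV(1000)
   Display the status of the channel path about to be serviced and what it
   reaches:
      D M=CHP(40)
   What the "concurrent" action boils down to, safe only if at least one
   other path to the device is still online and operational:
      CF CHP(40),OFFLINE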

With that experience, I have some concern that at some point those most
familiar with the logical structure of the data center will become so
isolated from those who deal with its physical aspects that bad decisions
will be made, decisions that could have been avoided had the collective
knowledge been kept more centralized and coordinated.

I have appreciated the increased ability over the years to resolve problems
from the relative warmth of my desk, or from the convenience of my house
at 2 AM; but one of the things that attracted me to computing and
Systems Programming in the first place was a fascination with the
physical computer hardware.
     Joel C. Ewing


-- 
Joel C. Ewing,    Bentonville, AR       jcew...@acm.org 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
