Brings back some good memories - I enjoyed reading your post. I seem to have 
forgotten more about my life at Boeing than I remember (a short time, 11 years 
in Philly), but I do recall the 4341 trial connected to some new state-of-the-art 
3390 controllers running MVS 4.3 (or 5?), strictly for an electronic mock-up 
system for the V-22. I had the great pleasure of working with some AI engineers 
and wind tunnel engineers on some off-the-wall projects. 
I remember the data center running a microwave sender/receiver from one building 
to another to support CATIA workstations (not an IBM-approved method to extend 
a channel, but back then there were fewer options). So many other projects I 
can't even remember any longer :( 

BCS then morphed into the Boeing Shared Services Group. I have some fond memories 
of those times and the GREAT folks I worked with. 
Thanks again for the trip back in time. Sorry to stray from the topic, folks. 
Carmen 


----- Original Message -----

From: "Anne & Lynn Wheeler" <[email protected]> 
To: [email protected] 
Sent: Tuesday, December 6, 2016 3:39:45 PM 
Subject: Re: Why Can't You Buy z Mainframe Services from Amazon Cloud Services? 

[email protected] (Vitullo, Carmen P) writes: 
> I found this out some time ago working for Boeing, even though we were 
> one company, we still had to submit a budget each year for computing 
> services, this drove Boeing Helicopters to look at alternatives, 
> mostly the costs of CATIA and CADAM, on Big Iron, distributed was 
> initially cheaper, but in the long run many cost overruns due to poor 
> planning and a desire to 'just get off the mainframe at any cost', 
> that's what drove us to look at consolidation afterwards, then 
> migrations to / from other platforms....if it made sense :) 

In the 60s we started doing lots of stuff to leave the system up 7x24 for online 
access. Part of the issue was that (especially initially) offshift online 
access was very light, so there was little usage ... but in order to encourage 
offshift usage, the systems had to be available 7x24. Because of the light 
usage there would be little cost recovery ... so lots of things were done to 
minimize offshift expenses. This was in the days when mainframes were leased 
and charges were based on the system meter, which ran whenever the processor 
and/or any channel was busy ... also, everything had to be idle for at least 
400 milliseconds before the system meter would stop (trivia: long after 
mainframes had switched from leases to sales, MVS still had a timer event that 
went off every 400ms, making sure the system meter would never stop). In any 
case, we came up with special channel programs for terminal I/O ... that would 
let the channel "go idle" ... but would immediately start up when any 
characters arrived. There was also lots of support for "dark room" operation 
... not requiring offshift operators. 

For big cloud megadatacenters, the price of systems has dropped so dramatically 
that they have hundreds of thousands of "blades" (each blade with more 
processing power than a max-configured mainframe) supported by a staff of 
80-120 people per megadatacenter. Also, with the dramatic cut in system cost, 
the major expense has increasingly become power & cooling. The big cloud 
datacenters have been on the leading edge of systems where power & cooling 
drop to near zero when idle ... but are effectively instant-on for ondemand 
computing. 

while an undergraduate in the 60s, I was hired as a fulltime boeing employee 
to help with the formation of boeing computer services ... consolidating all 
computing into an independent business unit as part of better monetizing the 
investment (which would also have the freedom to sell computing services to 
non-boeing entities, a precursor to cloud computing). at the time, I thought 
the renton data center was possibly the largest in the world, with something 
like $200M-$300M (60s dollars) in ibm mainframe gear (for a period, 360/65s 
were arriving faster than they could be installed; boxes were constantly being 
staged in the hallways outside the datacenter). 

there was a disaster scenario where mt rainier warms up and the resulting 
mudslide takes out the renton datacenter. the analysis was that the cost of 
being w/o the renton datacenter for a week was more than the cost of the 
renton datacenter ... so there was an effort underway to replicate the renton 
datacenter up at the new 747 plant in everett. 

in any case, the politics with the different plant managers tended to 
dwarf any technical issues. 

-- 
virtualization experience starting Jan1968, online at home since Mar1970 

---------------------------------------------------------------------- 
For IBM-MAIN subscribe / signoff / archive access instructions, 
send email to [email protected] with the message: INFO IBM-MAIN 
