re: http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
Other CP/67 7x24 trivia. Initially, moving to 7x24 was some amount of chicken & egg. This was back in the days when machines were rented and IBM charged based on the system "meter" ... which ran whenever the cpu and/or any channels were operating ... and datacenters recovered their costs with "use" charges. Initially there was little offshift use, but in order to encourage offshift use, the system had to be available at all times. To minimize offshift costs ... there was a lot of CP/67 work done to operate "dark room" w/o an operator present ... and to have special CCWs that allowed the channel to stop when nothing was going on ... but start up immediately when there were incoming characters (allowing the system to be up and available while the system meter stopped when idle). Note that for the system meter to actually come to a stop, the cpu(s) and all channels had to be completely idle for at least 400 milliseconds. trivia: long after the business had moved from rent to purchase, MVS still had a timer task that woke up every 400 milliseconds, making sure that if the system was IPL'ed, the system meter never stopped.

with regard to MVS killing the VM370 product (with the excuse that they needed the people to work on MVS/XA) ... the VM370 development group was out in the old IBM SBC (service bureau corporation) location at Burlington Mall (mass., after outgrowing the 3rd floor, 545 Tech Sq space in cambridge). The shutdown/move plan was to not notify the people until just before the move ... in order to minimize the number that would escape. However the information leaked early ... and a lot managed to escape to DEC (the joke was that the head of POK was a major contributor to the new DEC VAX/VMS system development). There was then a witch hunt to find out the source of the leak ... fortunately for me, nobody gave up the leaker.

past posts mentioning the Future System product ... its demise (and some mention of POK getting the VM370 product killed)
http://www.garlic.com/~lynn/submain.html#futuresys

not long after that, I transferred from the science center out to IBM San Jose Research ... which was not long after the US HONE datacenter consolidation up in Palo Alto. One of my hobbies from the time I originally joined IBM was enhanced production operating systems for internal datacenters ... and HONE was a long-time customer from just about their inception (and when they started clones in other parts of the world, I would get asked to go along for the install). I have some old email from HONE about the head of POK telling them that they had to move to MVS because VM370 would no longer be supported on high-end POK processors (just low-end and mid-range 370s from Endicott) ... and then later having to retract the statements.

past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
some old HONE related email
http://www.garlic.com/~lynn/lhwemail.html#hone

in a previous post I had mentioned VMSHARE ... TYMSHARE started offering its CMS-based online computer conferencing, free to SHARE, starting in August 1976. I cut a deal with TYMSHARE to get a monthly distribution tape of all VMSHARE (and later PCSHARE) files for putting up on internal IBM systems (also available over the internal network) ... including HONE. The biggest problem I had was from the lawyers who were afraid IBMers would be contaminated by customer information. some old email
http://www.garlic.com/~lynn/lhwemail.html#vmshare

another run-in with the MVS group ... I was allowed to wander around the San Jose area ...
eventually getting to play disk engineer, DBMS developer, HONE development, visit lots of customers, make presentations at customer user group meetings, etc. The bldg. 14 disk engineering lab and bldg. 15 disk product test lab had "test cells" with stand-alone mainframe test time prescheduled around the clock. They had once tried to run testing under MVS (for some concurrent testing), but MVS had a 15min MTBF in that environment (requiring manual re-IPL). I offered to rewrite the input/output supervisor to be bullet-proof and never fail ... allowing anytime, on-demand concurrent testing, greatly improving productivity. I then wrote up an internal research report on all the work and happened to mention the MVS 15min MTBF ... which brought down the wrath of the MVS organization on my head. They tried to have me separated from the company, and when that failed, it was strongly implied they would make things unpleasant in other ways.

past posts getting to play disk engineer in bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

part of what I had to deal with was the new 3380 ... another MVS story ... FE had developed a regression test of 57 3380 errors that they would typically expect in customer shops. Not long before 3380 customer ship, MVS was failing (requiring re-IPL) in all 57 cases ... and in two-thirds of the cases there wasn't any indication of what caused the failure. old email
http://www.garlic.com/~lynn/2007.html#email801015

While at SJR, I was also involved in the original SQL/relational implementation, System/R. System/R was done on a modified VM370 running on a 370/145. The official next-generation DBMS was EAGLE ... and while the corporation was preoccupied with EAGLE, we managed to do tech transfer "under the radar" to Endicott and get it released as SQL/DS. Then when EAGLE imploded, there was a request about how long it would take to port System/R to MVS. This was eventually released as DB2 (originally for decision support only; note IMS was sort of database1 and EAGLE would have been database2 ... but System/R became its replacement).

past posts mentioning System/R
http://www.garlic.com/~lynn/submain.html#systemr

previous posts mentioned that the last product we did at IBM was HA/CMP, past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

We were also doing commercial cluster scaleup with RDBMS vendors and scientific/technical cluster scaleup with national labs. reference to the Jan1992 meeting in Ellison's conference room on commercial cluster scaleup
http://www.garlic.com/~lynn/95.html#13

within a few weeks of the Ellison meeting, cluster scaleup was transferred to Kingston, announced as an IBM supercomputer, and we were told that we couldn't work on anything with more than four processors. A likely contributing factor was that the (mainframe) DB2 group had been complaining that if we were allowed to go ahead, it would be at least five years ahead of them. A few months later we depart the company. some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
17Feb1992 press, for scientific/technical "ONLY"
http://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 press, surprised by national lab interest
http://www.garlic.com/~lynn/2001n.html#6000clusters2

trivia: later, two of the Oracle people (mentioned in the Ellison meeting) have also left Oracle and are at a small client/server startup, responsible for something called the "commerce server".
We are brought in as consultants because they want to do payment transactions on their server; the startup had also invented this technology they called "SSL" that they want to use ... the result is now frequently called "electronic commerce".

note that in this time-frame, IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... reference behind paywall, but lives (mostly) free at the wayback machine
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html

although we had left the company, we get a call from the bowels of Armonk asking if we can help with the breakup. Business units were using MOUs to leverage supplier contracts that were frequently with other divisions. With the breakup, those divisions would be in other corporations, and the MOUs would have to be cataloged and turned into their own contracts. before we get started, a new CEO is brought in and reverses the breakup.

--
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
