Re: Co-existance of z/OS and z/VM on same DASD farm
p...@voltage.com (Phil Smith) writes:
> VM/XA MA begat VM/XA SF begat VM/XA SP, which eventually moved to Endicott, and became VM/ESA and then z/VM. The core of VM/XA was actually much better than VM/SP; as a developer I found it much easier to work with.

re:
http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm

old email about vm/370 running in XA mode:
http://www.garlic.com/~lynn/2011c.html#email860122
http://www.garlic.com/~lynn/2011c.html#email860123
http://www.garlic.com/~lynn/2011e.html#email870508

The early issue was the claim that the resources needed to bring the migration aid up to vm370 product level were several orders of magnitude larger than the resources needed to fix any perceived deficiencies in vm370 (compared to the migration aid).

For a little crossover with this thread:
http://www.garlic.com/~lynn/2012g.html#29 24/7/365 appropriateness was Re: IBMLink outages in 2012
http://www.garlic.com/~lynn/2012g.html#30 24/7/365 appropriateness was Re: IBMLink outages in 2012

Post from a couple of years ago about z/VM announcing cluster support:
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time

The US HONE system had done vm370 cluster (loosely-coupled) single-system-image support in the late 70s (a large number of multiprocessors sharing a disk pool). The US HONE datacenters had been consolidated in Palo Alto in the mid-70s (in the building next door to the one FACEBOOK later first moved into) and provided online sales & marketing support (HONE clones sprouted all over the world for worldwide sales & marketing support). In the early 80s, the datacenter was replicated in Dallas, and fall-over/load-balancing was extended across the two geographically separated datacenters. misc. past posts mentioning HONE:
http://www.garlic.com/~lynn/subtopic.html#hone

Prior to US HONE cluster support, vm370 commercial online service bureaus had done their own cluster support, including non-disruptive migration of active running users between systems in the complex (not just logon load-balancing and fall-over). This allowed a system to be taken/varied offline for maintenance w/o impacting any users running on the system. misc. past posts mentioning commercial online service:
http://www.garlic.com/~lynn/submain.html#timeshare

In the 80s, IBM research had done vm/4341 cluster support with 3088/trotter ... but when they went to release, they were told that they had to convert from their own home-grown protocol to SNA/VTAM ... cluster operations that had taken a small fraction of a second started taking half a minute or more. All of that would be disappearing in the transition from the vm370 base to the vmtool/migration-aid base.

With regard to loosely-coupled and SNA/VTAM battles ... my wife had earlier run into the problem when she had been con'ed into going to POK to be in charge of loosely-coupled architecture. She created peer-coupled shared data architecture while there ... but it saw very little uptake (except for IMS hot-standby) until SYSPLEX ... some past posts:
http://www.garlic.com/~lynn/submain.html#shareddata

The combination of little uptake and constant wars with the communication group over demands that she use SNA/VTAM for loosely-coupled operation contributed to her not remaining long in the position (there would be periodic temporary truces where it was allowed that she could use anything she wanted within the datacenter ... but the communication group owned everything that crossed the datacenter walls).

Also note that in the late 80s, a senior disk engineer got a talk scheduled at the internal, worldwide, annual communication group conference and opened with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group was protecting their terminal-emulation install base ... and the disk division was starting to see a drop in sales as data was fleeing the datacenter to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions for the problem ... but (again) the communication group had strategic ownership of everything that crossed the datacenter walls (and would veto the solutions). misc. past posts mentioning the terminal-emulation paradigm:
http://www.garlic.com/~lynn/subnetwork.html#emulation

This whole situation contributed to the significant dropoff of mainframe use and the company going into the red in the early 90s. Reference to Gerstner's resurrection of IBM ... as well as a pointer to a review (in an IBM employee forum) of Gerstner's book, Who Says Elephants Can't Dance:
http://www.garlic.com/~lynn/2012f.html#84

--
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Co-existance of z/OS and z/VM on same DASD farm
Paul Gilmartin wrote:
> So, who won? It doesn't sound as if the climate would admit a compromise?

VM/XA MA begat VM/XA SF begat VM/XA SP, which eventually moved to Endicott and became VM/ESA and then z/VM. The core of VM/XA was actually much better than VM/SP; as a developer I found it much easier to work with.

> The memories fade, but VM/XA as it was shipped impressed me favorably. One thing I remember is that CP QUERY would tell the size of a spool file _before_ it was closed. Revolutionary! And invaluable to operators trying to identify the runaway VM.

And the VM/XA SPOOL system in general was super-robust - I wrote a system mod (product) that tinkered with SPOOL, and while I created SPOOL files that couldn't be seen, couldn't be opened, and couldn't be purged by normal means, I *never* took out the rest of SPOOL. Really nice stuff. Especially after the HPO 5 debacle!

...phsiii
Re: Co-existance of z/OS and z/VM on same DASD farm
p...@voltage.com (Phil Smith) writes:
> And the VM/XA SPOOL system in general was super-robust - I wrote a system mod (product) that tinkered with SPOOL, and while I created SPOOL files that couldn't be seen, and couldn't be opened, and couldn't be purged by normal means, I *never* took out the rest of SPOOL. Really nice stuff. Especially after the HPO 5 debacle!

re:
http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm

40th vm370 anniversary this year ... 2012 VM workshop discussion in the (linkedin) z/VM group:
http://lnkd.in/Emfz8Z
some also archived here:
http://www.garlic.com/~lynn/2012g.html#18
and
http://www.garlic.com/~lynn/2012g.html#23

I posted the schedule for the 1987 VM workshop ... it mentions I gave two presentations (on performance and networking) and two BOFs (debugging and spool file system rewrite). The spool file system rewrite was because I needed at least a factor of 100 increase in thruput (for RSCS network thruput). I also made the integrity of the spool file system and the integrity of the overall system completely independent (i.e. I could lose a whole spool-file disk w/o impacting the running of the system and/or the integrity of the spool files on the other disks).

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Co-existance of z/OS and z/VM on same DASD farm
VM read DASD only for the volser (VOL1 record) to identify each mounted disk. VM r/w activity was limited to VM-page-formatted disks. CMS, running in a virtual machine, had support for CMS filesystems and some primitive support for real OS/DOS-formatted disks. Regarding incorrectly rewriting the VTOC ... there is some possibility it might have happened if somebody had attached/linked the real disks to CMS in a virtual machine (in r/w mode).

In the mid-70s, one of the people in the vm370/cms development group significantly rewrote and developed full-function OS r/w filesystem support (real OS VTOC, PDS directory, etc.) in CMS (the joke was that the 100k bytes was more efficient os/360 simulation than the 8+mbytes that had been done in MVS for os/360 simulation). However, this was approx. the period when the FS effort was imploding and there was a mad rush to get products back into the 370 pipelines (during the FS effort, 370 activity was being suspended and/or killed off). misc. past posts mentioning the Future System effort (which was going to completely replace 360/370):
http://www.garlic.com/~lynn/submain.html#futuresys

As part of reconstituting 370 (303x was kicked off in parallel with 370/xa), the head of POK managed to convince corporate to kill off the vm370 product, shut down the development group, and move all the people to POK ... otherwise, supposedly, they wouldn't be able to meet the mvs/xa ship schedule. Somehow the vm370 development group was warned ahead of time and some of the people managed to escape being moved to POK (there was a joke that the head of POK was a major contributor to DEC vax/vms). In the killing off of the vm370 product and shutdown of the group, the full-function OS filesystem support had not yet shipped ... and it all just disappeared. Eventually, Endicott managed to save the vm370 product mission, but they had to reconstitute a development group from scratch.
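Since the post turns on VM identifying each mounted pack only by its volume label, here is a minimal sketch of decoding such a label. The helper name is hypothetical; the layout shown is the standard 80-byte EBCDIC "VOL1" record (cylinder 0, track 0), with the volser in bytes 4-9.

```python
# Hypothetical sketch: decoding a standard DASD volume label (VOL1).
# Both OS- and VM-formatted packs carry an 80-byte EBCDIC "VOL1" record;
# the 6-character volser occupies bytes 4-9.

def parse_vol1(label: bytes) -> str:
    """Return the volser from an 80-byte EBCDIC VOL1 label."""
    if len(label) != 80 or label[0:4] != "VOL1".encode("cp037"):
        raise ValueError("not a standard VOL1 volume label")
    return label[4:10].decode("cp037").rstrip()

# Fabricated label purely for illustration: key "VOL1", volser "VMPK01",
# padded to 80 bytes with EBCDIC blanks (0x40).
label = ("VOL1" + "VMPK01").encode("cp037").ljust(80, b"\x40")
print(parse_vol1(label))  # VMPK01
```

The point mirrors the post: identification is read-only; nothing here (or in CP's volser scan) writes to the pack.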
--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Co-existance of z/OS and z/VM on same DASD farm
On Fri, 18 May 2012 14:21:49 -0400, Anne Lynn Wheeler wrote:
> In the mid-70s, one of the people in the vm370/cms development group significantly rewrote and developed full-function OS r/w filesystem support (real OS VTOC, PDS directory, etc.) in CMS (the joke was that the 100k bytes was more efficient os/360 simulation than the 8+mbytes that had been done in MVS for os/360 simulation).

Of course that was going the wrong way. In the long term, support in MVS for CMS-style FBA DASD would have been of greater value.

> Eventually, Endicott managed to save the vm370 product mission, but they had to reconstitute a development group from scratch.

And somewhere in there, there was something like VM/XA/SF (System Facility), intended to allow virtual machines for development and testing, but not to support emigration of the OS workload as happened in the VSCR crisis.

-- gil
Re: Co-existance of z/OS and z/VM on same DASD farm
paulgboul...@aim.com (Paul Gilmartin) writes:
> And somewhere in there, there was something like VM/XA/SF (System Facility), intended to allow virtual machines for development and testing, but not to support emigration of the OS workload as happened in the VSCR crisis.

re:
http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm

The POK group did VMTOOL, which was supposed to be for internal use only, for MVS/XA development. However, eventually the decision was made to release it as VM/SF ... as a customer aid in MVS to MVS/XA conversion. There was lots of internal politics. Internally, vm370 had been ported to and was running with 370/XA support ... it had much better function, features, performance, reliability, etc. than VM/SF. However, there was growing politics to turn VM/SF into VM/XA ... even tho the vm370 solution running in XA-mode was significantly better. Part of the issue was that VM/SF was from the POK high-end group ... which was responsible for XA. vm370 was still from the Endicott mid-range group ... which had less political clout.

old post with mention of vm/811 (aka vm/sf ... XA was referred to as 811 internally for the nov1978 date on lots of the XA architecture documents):
http://www.garlic.com/~lynn/2011b.html#70 VM/370 3081
and discussion (with old email) about vm370 running in xa-mode:
http://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance

With regard to FBA ... I've mentioned before that I was told it would cost $26M to release MVS support for FBA (fixed-block architecture, at the time 3370s) ... even if I gave the MVS group fully integrated and tested code. The $26M was just for education and documentation changes. To justify the $26M, I had to show incremental new disk sales (on the order of ten times the cost ... i.e. around $300M); and they were claiming that they were already selling as many disks as they could make ... and that if MVS had FBA support, customers would just switch to having the same amount of FBA as CKD. I wasn't allowed to use a business justification based on drastically reduced lifetime costs ... I had to have a business justification showing additional new sales. As has been pointed out ... current disks are all FBA ... there haven't been real CKD disks made for decades. misc. past posts mentioning DASD, CKD, FBA, multi-track search, etc.:
http://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970
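The CKD-vs-FBA contrast above can be made concrete with a small sketch (illustrative only; the block size and function names are hypothetical, not those of any particular device): an FBA device is a linear array of equal-size blocks, so locating data is pure arithmetic, while a CKD record is named by cylinder/head/record (CCHHR) and record lengths can vary per track, so there is no comparable arithmetic mapping.

```python
# Illustrative contrast between FBA and CKD addressing.

def fba_byte_offset(block_number: int, block_size: int = 512) -> int:
    """FBA: the device is a linear array of fixed-size blocks, so a
    block number maps to a byte offset by simple multiplication."""
    return block_number * block_size

def ckd_seek_args(cylinder: int, head: int, record: int) -> tuple:
    """CKD: a record is named by cylinder, head (track) and record
    number (CCHHR); since record lengths can vary track by track,
    the channel program searches for the record rather than
    computing an offset."""
    return (cylinder, head, record)

print(fba_byte_offset(100))       # 51200
print(ckd_seek_args(200, 7, 3))   # (200, 7, 3)
```

This is one reason FBA support was mostly an education/documentation cost rather than an engineering one: the addressing model is simpler, not more complex.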
Re: Co-existance of z/OS and z/VM on same DASD farm
On Fri, 18 May 2012 16:32:34 -0400, Anne Lynn Wheeler wrote:
> VM/SF was from the POK high-end group ... which was responsible for XA. vm370 was still from the endicott mid-range group ... which had less political clout.

So, who won? It doesn't sound as if the climate would admit a compromise?

The memories fade, but VM/XA as it was shipped impressed me favorably. One thing I remember is that CP QUERY would tell the size of a spool file _before_ it was closed. Revolutionary! And invaluable to operators trying to identify the runaway VM.

-- gil
Re: Co-existance of z/OS and z/VM on same DASD farm
In 1696751327392196.wa.ronmacraehotmail.co...@bama.ua.edu, on 05/16/2012 at 03:03 PM, Ron MacRae ronmac...@hotmail.co.uk said:
> The reason I'm worried is that in a previous life, over 30 years ago, my previous company attempted to do the same between a VM system and a DOS/VSE system. This was a long time ago on a real machine in pre-LPAR days. When they brought up VM for the first time it objected to the VSE VTOCs it found and rewrote them as OS VTOCs and we lost the whole DASD farm.

Can you provide any details? I've certainly never seen anything like that with VM/SE (SEPP).

--
Shmuel (Seymour J.) Metz, SysProg and JOAT
ISO position; see http://patriot.net/~shmuel/resume/brief.html
We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003)
Re: Co-existance of z/OS and z/VM on same DASD farm
Interesting; I've worked with VM since SP1 and VSE (1970s) and never ever saw that problem...

Scott ford
www.identityforge.com

On May 17, 2012, at 7:24 PM, Shmuel Metz (Seymour J.) shmuel+ibm-m...@patriot.net wrote:
> Can you provide any details? I've certainly never seen anything like that with VM/SE (SEPP).
Co-existance of z/OS and z/VM on same DASD farm
Hi all,

We are currently an exclusively z/OS site with multiple LPARs sharing a single IOCDS and DASD farm. We are about to install z/VM in a new LPAR and I'm worried about both OSs sharing the same DASD farm. They will not be sharing at the volume level.

I've read through the install doc and it all seems fine: you tell the install process 6 or 9 unit addresses and it goes and loads stuff onto them and then you IPL. There is no mention of modifying other volumes; however, there are include and exclude unit address lists that you can specify to define what z/VM will try to look at, which presumably you can't get at until after the basic install and IPL. Also, z/VM can issue sense commands to determine what devices are out there.

The reason I'm worried is that in a previous life, over 30 years ago, my previous company attempted to do the same between a VM system and a DOS/VSE system. This was a long time ago on a real machine in pre-LPAR days. When they brought up VM for the first time it objected to the VSE VTOCs it found and rewrote them as OS VTOCs and we lost the whole DASD farm. Management were not best pleased. I wasn't directly involved at that time so I'm not 100% sure of my facts here, and perhaps the guys who did this did something wrong; however, my worry still remains.

My question is - Do we have to isolate z/VM from the z/OS volumes or will z/VM play nice and leave stuff alone? I just want to double check that VM will only touch the 6 volumes it is given at install time.

Regards, Ron MacRae.
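For what it's worth, the include/exclude control Ron mentions lives in z/VM's SYSTEM CONFIG file. A sketch of the kind of statements involved follows; the device numbers and volser are made up for illustration, so check the CP Planning and Administration manual for your z/VM level before relying on the exact operands.

```
/* CP uses only the volumes explicitly listed as CP_Owned          */
CP_Owned   Slot 1   M01RES

/* Keep CP away from everything except z/VM's own device range     */
Devices ,
   Online_at_IPL    0200-020F ,
   Offline_at_IPL   0000-01FF 0210-FFFF
```

With the z/OS ranges marked Offline_at_IPL, CP does not even bring those devices online at IPL, which directly addresses the worry about z/VM touching the z/OS farm.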
Re: Co-existance of z/OS and z/VM on same DASD farm
In my past life, on VM/370 (yes, that far back), VM always played nice with our MVT and MVS 3.8 systems. VM has the concept of system-owned volumes; you must specify these in a control file on the IPL volume. Also, VM does not use a z/OS-type VTOC at all (well, there is a VOL1 and a 1-track VTOC which says no space is available); it has its own formatting. The VM system will not try to use any volume itself which does not have a VM directory on it; you must create this using a VM utility. And VM itself, as opposed to a guest under VM, will not touch a volume which does not have this directory on it.

IMO, it is more likely that z/OS will trash a z/VM volume than vice versa. I'd strongly suggest keeping the z/VM volumes offline to z/OS. z/OS does not play nice with others; it is a bit arrogant and thinks it is the only thing in your environment.

And, FWIW, we have run our current z/OS systems under z/VM for the past 5+ years at Sungard. We have never had a problem due to a z/VM malfunction. IOW, you should not have any fears. The only fear would be if you deliberately ATTACHed a z/OS volume to a guest and the guest (such as CMS) were to write on the volume.

There is a z/VM list available at mailto:lists...@listserv.uark.edu (in the body: subscribe ibmvm).

--
John McKown
Systems Engineer IV, IT Administrative Services Group
HealthMarkets(r)
Re: Co-existance of z/OS and z/VM on same DASD farm
Ron,

I also did the same as John: MVS at that time under VM, with no issues. The Sungard people were great, so if you have gaps in understanding or knowledge, they help out in a DR situation. Almost all of the DR problems I have seen (and I've done a ton of tests) turned out to be poor planning and execution. I am of the mindset, always: 'plan the work and work the plan'...

Scott ford
www.identityforge.com

On May 16, 2012, at 4:03 PM, Ron MacRae ronmac...@hotmail.co.uk wrote:
> My question is - Do we have to isolate z/VM from the z/OS volumes or will z/VM play nice and leave stuff alone? I just want to double check that VM will only touch the 6 volumes it is given at install time.