Hello Frank,

> I am wondering how everyone handles maintenance in your VM
> environment with your zLinux instances.
We have a separate install environment where we install and maintain our z/VM environment. Once service has been applied (using VMSES, where possible not with PUT2PROD), we copy only the changed items to our production VMs, about 15 VM systems. So we only need to restart those users that use the new code. If CMS is serviced we need to restart the users that use CMS (i.e. all users with a link to the MAINT 190). But a zLinux machine does not require CMS after the IPL of the guest (nor does VSE, for that matter), so a change to CMS does not require Linux to reboot. We do not IPL unless CP has been serviced or a POR is required. A POR is usually only required for microcode upgrades from IBM. If a change can be done without disrupting availability, we will do it that way: dynamic I/O, SET TIMEZONE, TCPIP OBEY, etc.

> Also to note we just implemented z/VM on two CEC's and plan to allow
> failover between the two machines.

Our zLinux environment is located on two VMs on two different machines. In future all zLinux images will use vswitch, and the DASD can be shared. The idea is that we can start a zLinux guest on either one of the two VM images. The Linux customer will never know the difference, other than perhaps a short outage during the reboot of the guest. We did some tests, but we haven't implemented this full scale yet.

Note that this only helps you in a planned (or unplanned) outage of VM. If a zLinux guest fails, it will still fail on the other node. And keep in mind that when the guests from one VM are started on the second VM, it will cause some performance degradation. Remember to have enough storage, paging DASD, reserved CPU, etc. on the failover VM to handle the added load.

Regards,
Berry.
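
P.S. For what it's worth, a minimal REXX sketch of the kind of non-disruptive changes I mean. The EXEC name, time zone id, file names, vswitch name, OSA device numbers and the LINUX01 userid are all made up for illustration, and the operands are from memory; check the CP and TCP/IP books for your z/VM level before using any of it.

    /* NODISRUP EXEC - illustrative only, run from an authorized user */

    /* Switch the active time zone; the zone (here EDT) must already  */
    /* be defined with TIMEZONE_DEFINITION in SYSTEM CONFIG           */
    'CP SET TIMEZONE EDT'

    /* Apply TCP/IP profile changes to the running stack; needs OBEY  */
    /* authorization and access to the TCP/IP client code disk.       */
    /* NEWRULES TCPIP A is an invented file name.                     */
    'OBEYFILE NEWRULES TCPIP A'

    /* Define a vswitch backed by OSA devices and let a Linux guest   */
    /* couple to it; the guest would also need a NICDEF statement     */
    /* (e.g. NICDEF 0600 TYPE QDIO LAN SYSTEM VSW1) in its directory  */
    /* entry                                                          */
    'CP DEFINE VSWITCH VSW1 RDEV 1000 1100'
    'CP SET VSWITCH VSW1 GRANT LINUX01'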
