Everyone has their own ideas and processes for maintaining their systems, whether it is one z/VM image or many. When I discuss this topic with 'new' installations bringing z/VM into their shop, the first point I make is that using the supplied IBM manuals and following the published instructions -works-, although it may not save time or be the most 'efficient' route. (Efficiency is sometimes gained by carefully cutting corners!) Even when maintaining multiple z/VM systems, the IBM processes still work, although you will be repeating the steps for each system. The end product is consistent, working z/VM systems.
SERVICE/PUT2PROD were not built for multi-system maintenance, as Alan pointed out. Adding that function would be (IMHO) a royal PITA. The processes we use here are based on things we have done/learned/improved(?) over the past 20+ years. We don't use SERVICE/PUT2PROD (S/P for short) after the initial install of z/VM because we change the out-of-the-box defaults, and S/P does not like that. S/P is really great as long as you color within the lines; once you go outside them, you are on your own. That isn't as bad as it sounds, because the standard VMSES/E steps still work fine; it's only the nice 'do-it-all' S/P that won't. We have developed our own utilities to wrap the standard VMSES/E tools together - sort of like mini S/P tools. They simplify the processes you find in the Service Guide; the routines evaluate in an EXEC what the manual has you do with your eyes/hands/toes... (some sketches of all this are at the end of this note).

My philosophy for service is to install ONCE and copy MANY. We have 12 z/VM systems and will probably be adding more. If I had to install service 12 times I would go batty, and it would take a huge amount of time to accomplish. Coming up with a solid process to duplicate the maintenance parts to the other systems requires a knowledge of the z/VM VMSES/E environment. In other words, if you apply a PTF to CMS, what would you have to copy/clone to the other systems to get it there too? Does it affect the nucleus, segments, utilities, etc.? CP is way easy (usually) and involves copying the CPLOAD MODULE. In some rare cases maintenance is applied to utilities, but not often. For the most part, it may be best to simply clone the whole VMSES/E structure to the other system(s) for CYA.

We also have almost everything in SFS, so we can share the service directories between multiple systems. This makes the copy process smaller and easier too. There are local disks that each system must 'own' locally, but for the majority of the VMSES/E environment, SFS covers it.

What does this all mean? Follow the given rules first and use the manuals. Once you are comfortable with those instructions and begin to understand the environment better, build processes to streamline things to fit your shop. My processes are best for us and work here, but others may find that they don't 'fit' their needs. The only consistent best practice is to use the Install/Service guides. What everyone else does at their shop should be used to get ideas for building your own processes, not blindly duplicated. Coloring outside the lines is not necessarily a bad thing as long as you know where the page ends. (huh?!) Moving away from the supplied instructions is a Good Thing once you have a comfortable understanding of your z/VM system and its components. A knowledge of the pitfalls helps you know what to do (or not do) in order to have a working system when you clone/copy your maintenance.

What I think would be kind of cool is a "make" utility for z/VM maintenance. (Now Rick Troth is jumping up and down!) This magic utility would do a "pre-run" before maintenance is applied to get a before-picture. Then the maint is applied. A post-run of the magic utility would then build a 'make package'. That package would contain instructions on what files got moved, updated (delta data included), and added (content included), and could be copied around to other MAINT userids to be run, and "poof" - you have new maintenance on the other systems with all the files/structures matching the original. VMFMAKE... hmm... :-)
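To give a flavor of the wrapper idea, here is a bare-bones sketch of an EXEC that strings the standard VMSES/E receive/apply/build steps together and checks the return codes so you don't have to eyeball each one. This is the shape of the thing, not a drop-in tool; the rc-4 tolerance and the envelope handling are assumptions to adjust for your shop:

   /* SRVWRAP EXEC - sketch of a mini-S/P wrapper around the   */
   /* standard VMSES/E steps.  rc 4 is usually 'warnings' in   */
   /* VMSES/E, so we tolerate it; your shop may be stricter.   */
   arg ppf comp envid .
   if comp = '' then do
      say 'Usage: SRVWRAP ppfname compname <envid>'
      exit 24
   end

   'VMFSETUP' ppf comp                /* establish search order */
   if rc > 4 then call fail 'VMFSETUP'

   if envid <> '' then                /* receive the service    */
      'VMFREC PPF' ppf comp '(ENV' envid
   else
      'VMFREC PPF' ppf comp
   if rc > 4 then call fail 'VMFREC'

   'VMFAPPLY PPF' ppf comp            /* apply the PTFs         */
   if rc > 4 then call fail 'VMFAPPLY'

   'VMFBLD PPF' ppf comp '(STATUS'    /* flag what needs a build */
   if rc > 4 then call fail 'VMFBLD (STATUS'
   'VMFBLD PPF' ppf comp '(SERVICED'  /* build the flagged parts */
   if rc > 4 then call fail 'VMFBLD (SERVICED'

   say 'Receive/apply/build done for' comp '- check VMFVIEW anyway.'
   exit 0

   fail:
   say arg(1) 'ended with rc' rc '- stopping so you can look (VMFVIEW).'
   exit rc

The real versions here do a lot more (logging, sanity checks on the PPF, and so on), but the point is that everything the Service Guide has you type and inspect can be evaluated in REXX instead.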
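And to show why I call CP the easy one: moving a freshly built CP nucleus to another system is basically one file copy. A sketch, assuming shared DASD so one userid can link the other system's parm disk; the VM2MAINT userid and the virtual addresses are made-up examples, and remember CP cannot serialize minidisk links across systems, so pick a quiet moment:

   /* CPCLONE EXEC - sketch of pushing a new CPLOAD MODULE to   */
   /* another system's parm disk.  VM2MAINT and the addresses   */
   /* are examples only; adjust for your own configuration.     */
   'CP LINK MAINT CF1 1CF1 RR'        /* our parm disk (source)  */
   if rc <> 0 then exit rc
   'ACCESS 1CF1 J'
   'CP LINK VM2MAINT CF1 2CF1 MR'     /* target system parm disk */
   if rc <> 0 then exit rc
   'ACCESS 2CF1 K'

   /* keep the old nucleus as a fallback, then copy the new one */
   /* (first time through there may be nothing to back up)      */
   'COPYFILE CPLOAD MODULE K CPLOLD MODULE K (OLDDATE REPLACE'
   'COPYFILE CPLOAD MODULE J = = K (OLDDATE REPLACE'
   if rc <> 0 then exit rc
   say 'New CPLOAD copied; CPLOLD MODULE is the fallback.'
   say 'It takes effect at the target system''s next IPL.'

   'RELEASE K ( DET'
   'RELEASE J ( DET'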
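As for the VMFMAKE daydream, the before/after snapshot half is almost trivial; the hard part (capturing the deltas and contents and replaying them reliably on the other MAINTs) is why it is still a daydream. Here is a sketch of just the snapshot/compare piece. None of this is a real IBM utility - the name and file names are made up, and LISTFILE date stamps are doing the change detection:

   /* VMFMAKE EXEC (daydream) - only the before/after snapshot  */
   /* piece.  A real tool would also capture contents/deltas    */
   /* and build the package to ship to the other MAINTs.        */
   arg phase .
   if phase <> 'PRE' & phase <> 'POST' then do
      say 'Usage: VMFMAKE PRE | POST'
      exit 24
   end

   /* snapshot: one record per file, date/time included, so a   */
   /* rebuilt or replaced part shows up as a changed record;    */
   /* nfind keeps our own snapshot files out of the listing     */
   'PIPE command LISTFILE * * A (DATE | nfind VMFMAKE| > VMFMAKE' phase 'A'
   if rc <> 0 then exit rc
   if phase = 'PRE' then exit 0

   /* compare POST against PRE; anything new or changed is the  */
   /* raw material for the 'make package'                       */
   'PIPE < VMFMAKE PRE A | stem pre.'
   'PIPE < VMFMAKE POST A | stem post.'
   hits = 0
   do i = 1 to post.0
      seen = 0
      do j = 1 to pre.0
         if post.i == pre.j then do; seen = 1; leave; end
      end
      if \seen then do
         say 'Touched:' post.i
         hits = hits + 1
      end
   end
   say hits 'part(s) would go into the make package.'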
_______________________________________
James Vincent
Systems Engineering Consultant
Nationwide Services Co., Technology Solutions
Mainframe, z/VM and z/Linux Support
One Nationwide Plaza 3-25-02
Columbus OH 43215-2220 U.S.A.
Voice: (614) 249-5547
Fax: (614) 677-7681
mailto:[EMAIL PROTECTED]

The IBM z/VM Operating System <[email protected]> wrote on 06/08/2006 01:12:05 AM:

> [email protected]
>
> On Wednesday, 06/07/2006 at 11:07 EST, Alan Ackerman
> <[EMAIL PROTECTED]> wrote:
> > That's good news. Perhaps it is time to re-examine our staunch
> > opposition to PUT2PROD.
> >
> > Another problem with PUT2PROD that comes to mind is that we don't want
> > to install VM on every system, we want to install it once (actually
> > twice because of VPARS and VTAPE) and then copy it to all the other
> > systems. Can PUT2PROD do that now that it no longer insists on forcing
> > off userids?
>
> Don't put the burden of multi-system maintenance on PUT2PROD's shoulders.
> You might be able to bend PUT2PROD to your will, but the design of the
> entire installation/service process is, as you imply, for a single system.
>
> I think people do things as you describe in order to avoid repetition and
> save time, both excellent goals. We're looking at ways to reach them and
> we're more than happy to hear how (and why!) people maintain their systems
> the way they do.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
