That's a fine overview.  I'm just sorry if I gave the impression that I
haven't been involved with VM over the years.  On the contrary, it's merely
z/OS as a guest that is not recent experience, so I'm more concerned with
any potential z/OS-specific concerns.

Brian Nielsen (a fellow VM bigot)


On Sat, 11 Feb 2006 12:53:30 -0600, Tom Duerbusch 
<[EMAIL PROTECTED]> wrote:

>I'm not an MVS type, as I left the MVS world at the MVS/SP time frame.  But
>I'm a VM bigot, so beware<G>.
>
>Given your VM knowledge in the 4300 days, z/VM is a lot more than you
>know.  It is also a lot less than you remember.  Times have changed.  It
>used to be that VM was required to share hardware.  Nowadays, ESCON and
>above attached hardware can usually be shared across LPARs, which
>eliminated one of the historic VM advantages.
>
>On whether to run the production MVS machine under z/VM....
>
>If you currently have sufficient resources for your z/OS machine, then
>it may be a candidate to run under z/VM.
>If you have some resource limitation, then further study is needed to
>determine whether z/VM will help eliminate that constraint or make it
>worse.
>
>Also, if there are currently dedicated resources that would be of more
>benefit if they were shared, then running under z/VM would be a win.
>
>Of course, I'm not talking about dedicated non-SNA 3174s.  If you are
>still running that way, it is by your choice, not a requirement
>anymore.  I bring up the LPAR using the PCOMM on the z/890
>console, and after VTAM is up, I switch the consoles over to an SNA
>tube.  If VTAM ever goes down, I can always go back to the PCOMM
>session.
>
>Of course, most test systems are good candidates for running under z/VM.
>When their resources are not being used, the resources can be used for
>other guests.  Hence you can support more test machines with the same
>amount of resources.
>
>Now that I have FlashCopy support, I flash the production machines and
>do a trial upgrade (new software, maintenance, whatever) to see what
>problems I hit before I apply the changes to the production machines.
>Just a small change that helps keep production machines from taking a
>hit, and it also minimizes my time in on nights and weekends.
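>
>For what it's worth, the flash itself is just a couple of CP commands
>against the volumes involved (a sketch from memory; the device numbers
>are made up, and the *-prefixed lines are annotations, not part of the
>commands):
>
>```
>* Attach the assumed source and target volumes to the copying user
>ATTACH 1234 TO MAINT AS 1234
>ATTACH 5678 TO MAINT AS 5678
>* Copy every cylinder from source to target in the hardware
>FLASHCOPY 1234 0 END TO 5678 0 END
>```
>
>Then bring up a test guest against the flashed volumes and try the
>upgrade there.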
>
>(Yeah, I know: what are you doing in on this weekend?  DS6800 dasd
>subsystem software upgrade.)
>
>The most similar thing to HiperSockets between LPARs is Guest LANs
>between machines.  It performs much better than HiperSockets.  Virtual
>Switch is a special case of a Guest LAN where the Guest LAN is connected
>to the OSA.  There are performance reasons to use Guest LANs over the
>Virtual Switch.
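>
>As a sketch, the two flavors are each defined with one CP command (the
>names and the real device number here are made up; full operands are in
>the CP command reference):
>
>```
>DEFINE LAN TESTLAN OWNERID SYSTEM TYPE QDIO
>DEFINE VSWITCH VSW1 RDEV 0500
>```
>
>The first is a guest-to-guest-only LAN; the second is backed by the OSA
>at real device 0500.  Each guest then couples a simulated NIC to one of
>them, typically via a NICDEF statement in its directory entry.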
>
>One of the reasons to leave a large system, such as z/OS, in its own
>LPAR was eliminated with z/VM 5.2.  The reason had to do with z/VM 5.1
>and under having a limitation of 2 GB of central storage; the rest was
>expanded storage.  That is old hat now.  If you hear others bashing z/VM
>over this issue, as long as your processor can run z/VM 5.2....old news.
>
>For the most part, z/VM doesn't need a full time VM Systems Programmer
>anymore.  You can hire a consultant or a Business Partner to help with
>the install.  After that, very little training is needed to run a VM
>system.  Most of my VM clients only use between 150 and 300 man-hours a
>year of my time for VM.  It is higher now, due to bringing up more
>zLinux machines.  Last year, one of my clients only needed 45 hours of
>my time for VM system programming related stuff.  (No new hardware, no
>VM upgrades, just some application related stuff.)
>
>Your IOCP can be handled by z/OS or by z/VM, but not both.  Both
>systems will avail themselves, dynamically, of any new hardware added.
>
>As you have mentioned, virtual channel-to-channel connections between
>guests running under z/VM can replace the real CTCAs that you may have
>between LPARs now.  But you would still need the real CTCA(s) that go
>from z/VM to your production machines, if they are running in another
>LPAR.  It would be "test z/OS to virtual CTCA to z/VM to real CTCA
>to LPAR", i.e. no machines running under z/VM would need their own real
>CTCA hardware.
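>
>The virtual end of that is a couple of CP commands per guest (a sketch;
>the user ID and device numbers are made up):
>
>```
>DEFINE CTCA 0600
>COUPLE 0600 TO ZOSTEST 0600
>```
>
>The real CTCA that crosses to the other LPAR just gets attached to
>whichever guest owns that link.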
>
>As you remember, dedicated dasd for your z/OS system is most efficient.
>Minidisks are slightly less efficient, but allow for easy sharing
>between guests of the same z/VM system.
>
>z/VM doesn't have a native tape manager.  Many shops back up their VM
>systems with z/OS utilities.
>
>Well, time to go home <G>.  IBM is done with the upgrades.
>I'm sure others will chime in when they get in on Monday.
>
>Tom Duerbusch
>THD Consulting
>
>St. Louis Missouri
>Host City
>1904 Summer Olympics
>
>
>>>> [EMAIL PROTECTED] 2/10/2006 5:03 PM >>>
>We currently run z/OS in multiple LPARs, one for production, two for
>test, with more LPARs for both production and test on the way.  We also
>run a z/VM LPAR on IFLs for Linux guests.
>
>I'm putting together a pros & cons list for running z/VM on the CP side
>with some or all of the z/OS systems as guests.  I'd like to make sure I
>don't leave out anything important.  There would be no other work in
>that z/VM image other than the z/OS guests, so the main thrust is the
>improved management of the z/OS images and devices.  If the production
>LPARs are under z/VM they would almost certainly have RESERVED pages to
>minimize/eliminate them being paged by z/VM.
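>
>(As I understand it, that would be a CP command along these lines, with
>a made-up user ID and page count, 524288 pages being 2 GB:
>
>```
>SET RESERVED ZOSPROD 524288
>```
>
>so that z/VM keeps that many of the guest's pages resident.)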
>
>
>My high level outline has:
>
>Cons:
> - Additional license charges for z/VM on CP engines
> - z/VM will use some CPU, memory, & DASD
> - Some operating procedures will change
> - z/OS Systems Programmers & Operators will need some z/VM skills
>
>Pros:
> - Virtualization allows better workload isolation and resource sharing
> - Fewer PORs to make LPAR changes, and new guests on demand
> - Some real CTCs & devices can be replaced by virtual counterparts
> - VM minidisk support can be used to improve DASD management
> - Can simulate the disaster recovery site
>
>
>There are some items I don't know enough about yet to gauge the impact:
>
> - Workload Manager is used to throttle back the z/OS LPARs below a
>specified 4-hour rolling average of CPU usage (for cost reasons).  I've
>never used Workload Manager, but wonder: (a) will it work if z/OS is a
>guest of z/VM, and (b) if not, what would accomplish the same thing?
>Setting SHAREs is obviously not up to the task because we're talking
>about the whole CP side.
>
> - How do I properly evaluate whether the production LPARs should be
>left alone and only the test LPARs consolidated under z/VM?
>
> - How will SMF records from z/OS, which are used for billing, be
>impacted?
>
> - What will the impact of the additional level of SIE on z/OS be?
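>
>(For context, the SHARE setting I dismissed above would be something
>like the following, with a made-up user ID and percentage; it caps one
>guest, not the whole CP side:
>
>```
>SET SHARE ZOSPROD ABSOLUTE 40% LIMITHARD
>```
>
>which is why it doesn't answer the 4-hour rolling average question.)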
>
>
>Have I overlooked anything major? (Especially z/OS-specific issues.)
>I'm trying to anticipate questions so that I have answers and to avoid
>surprises later.
>
>When other people have made this type of change, what problems popped
>up?  What problems disappeared?
>
>Many (many) years ago I used to run MVS under VM/SP on a 4381, so that
>environment isn't new to me, it's just not recent vintage.
>
>Thanks.
>
>Brian Nielsen