On 2/11/18, 7:24 AM, "IBM Mainframe Discussion List on behalf of Munif Sadek" 
<IBM-MAIN@LISTSERV.UA.EDU on behalf of munif.sa...@gmail.com> wrote:
    
>    Finally we are getting an IFL from IBM on trial  to install Linux on our 
> z13. No KVM or zVM on the Mainframe.
    
Seriously reconsider this. Without zVM or KVM you have no virtualization 
capability beyond LPARs, and you will want one. The cost case does not work at 
all for a single Linux machine; Z hardware is far too expensive compared with 
the alternatives. Being able to spin up virtual systems on demand via something 
like OpenStack is one of the major wins for Linux on Z, and both zVM and KVM 
have OpenStack plugins.
    
z/VM (my preference and recommendation) will pay for itself just through the 
simplicity of isolating guests from hardware details and its ability to move 
resources around dynamically without downtime; zVM has also been around long 
enough to know how to play nice with other Z operating systems. I'm less 
familiar with KVM, but you'll be able to leverage your distributed-systems 
people if they already use KVM on Intel. IBM can usually lend you a zVM license 
for evaluation purposes, and I believe KVM is no-charge, but it lacks a lot of 
the management and performance capabilities of zVM. Dealing with bare metal is 
still a major pain in Linux; the tools either aren't there yet or are arcane, 
obscure, or badly documented. Skip the hassle and use zVM; you'll wonder why 
you ever tolerated the limitations of LPARs. 
    
 > I am looking for any redbooks or IBM installation document for Linux on 
 > zSeries. I am inclined towards  Red Hat Enterprise Linux AS 7
 >  as I expect it to be better integrated with APACHE SPARK.
    
    There are three major choices:
    
    - RHEL and RHEL-derived systems like ClefOS (a Z version of CentOS, which 
is a RHEL clone)
    - SLES
    - Debian/Ubuntu
    
Pick the one you already use on your Intel systems; you'll be a lot happier and 
won't have to invent as many processes to manage it, or argue about it with the 
Intel folks. You don't need to reinvent the wheel here: use what you already 
know, plus whatever it takes to get a virtualization environment running. 
    
>    Hopefully IBM zLinux document can give me comparative studies of different 
> Linux distribution available on zSeries.
    
    There is an IBM manual called "Getting started with Linux on System z", 
included with the zVM documentation, that is a pretty good overview. There are 
also redbooks covering zVM, RHEL, and SLES:
    
    The Virtualization Cookbook for IBM z Systems Volume 1: IBM z/VM 6.3, 
SG24-8147-01
    The Virtualization Cookbook for RHEL and RHEL-derived systems: 
http://www.redbooks.ibm.com/abstracts/sg248303.html
    The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux 
Enterprise Server 12, SG24-8890
    
    The Canonical people haven't done a similar redbook for Debian/Ubuntu on Z 
yet, but the concepts are very similar to RHEL. 
        
>    seeking expert advise , experience,   gotchas, ROTs.
    
    Some things we learned:
    
    1. Get a virtualization tool like zVM or KVM.
    
    2. Make sure you have at least two physical engines in the LPAR you intend 
to run Linux in, and at least two virtual CPUs in your VMs. Linux can and does 
exploit multiple CPUs, and will be sluggish in some situations without them. If 
you run under zVM and have only one physical CPU in your LPAR for testing, zVM 
will simulate the virtual CPUs at a slight performance penalty: with one 
physical CPU you can still define as many virtual machines, with as many 
virtual CPUs, as you like (I've run virtual 64-ways many times). 
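A quick sanity check from inside a guest, to confirm the virtual CPU count you defined is actually online (standard Linux commands, nothing zVM-specific assumed):

```shell
# Count the CPUs Linux actually sees online in this guest.
cpus=$(nproc)
echo "online CPUs: $cpus"
if [ "$cpus" -lt 2 ]; then
    echo "warning: only one CPU online; some workloads will feel sluggish"
fi
```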
    
    3. Conform as much as possible to the choices of distribution, etc. that 
you use elsewhere in your business. You don't need that argument with the 
distributed folks.
    
    4. Make sure you have a good performance monitor both on Z *and* on your 
current Intel systems if you're moving workload. Very few people know their 
workloads well enough to make valid comparisons, and hard numbers don't lie. 
    
    5. Do not use rules of thumb for memory sizing; smaller is often better, 
because most Unix systems use memory as cache to compensate for poor I/O 
hardware. You will probably need only about half the memory you use on 
distributed systems; that's a good starting point, and you can grow the virtual 
machine non-disruptively if it turns out to be too small. You really don't need 
a multi-gigabyte SGA for Oracle, for example, no matter what your DBAs think. 
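Hard numbers beat rules of thumb here too. A sketch of what to look at on the existing system before sizing the guest (reads /proc/meminfo, which is always present on Linux; vmstat is used only if installed):

```shell
# MemAvailable is closer to the real working set than "used" memory:
# Linux counts file cache as used, and the cache shrinks on demand.
grep -E 'MemTotal|MemAvailable|SwapTotal' /proc/meminfo
# If procps is installed, vmstat's si/so columns show whether the box
# is genuinely swapping or just caching aggressively.
command -v vmstat >/dev/null 2>&1 && vmstat 1 2 || true
```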
    
    6. Put the Linux OS on 3390 disks, and data on FCP disk. 3390 disk can be 
quickly recovered; FCP is awkward on Z, but necessary for more than a few 
gigabytes of storage, since assembling 100G from mod-3s just isn't practical. 
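As a sketch, the resulting filesystem layout might look like this in /etc/fstab (device paths and volume names are illustrative assumptions, not from any particular install):

```
# OS filesystems on ECKD/3390 DASD -- easy to back up and recover with Z tooling
/dev/disk/by-path/ccw-0.0.0100-part1   /       ext4   defaults   0 1
# Bulk data on FCP/SCSI storage, behind multipath, in an LVM volume group
/dev/mapper/datavg-datalv              /data   xfs    defaults   0 2
```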
    7. Make sure your backup solution can handle FCP disks. Most Z backup 
solutions can't, or can only do image backups. You may need to employ whatever 
system your distributed system backups use; check to see if a Z client exists. 
Most non-Z vendors can't spell Z, with a few exceptions.
    
    8. Use two zVM VDISK swap areas and one real-disk swap area for Linux. 
VDISKs live in RAM, are very fast, and are allocated only as needed; real disk 
tends to block on the one-I/O-per-device limitation of Z disks. Swap areas 
should be about 50%, 20%, and 10% of the size of the virtual machine (some 
people use 100%, 50%, and 25%), listed in order of decreasing size, with the 
real disk always listed last. Optimally, swapping should reach only a little 
way into the first area; since VDISKs are allocated as they are actually used, 
defining one larger than you need doesn't hurt you. VDISKs come out of system 
paging space, so make sure you have enough paging space defined (the VM starter 
system does not have enough to run a useful test). zVM gets very cranky if it 
runs short of page space, though it recovers fairly well.
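A sketch of the matching swap setup in /etc/fstab (device names are assumptions; Linux fills higher-priority swap first, so the in-memory VDISKs absorb swapping before the real disk sees any I/O):

```
/dev/dasdb1   none   swap   sw,pri=10   0 0   # zVM VDISK, ~50% of guest size
/dev/dasdc1   none   swap   sw,pri=5    0 0   # zVM VDISK, ~20%
/dev/dasdd1   none   swap   sw,pri=1    0 0   # real 3390 disk, ~10%, used last
```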
    
    9. You can share an existing OSA for testing, but for production, 
overprovision network capacity and don't share network hardware with other Z 
systems. Linux relies on networks a lot more than zOS does, and a 10G adapter 
won't be hard to fill if you have to do backups over the network. You also 
should strongly consider using layer 2 VSWITCHes in zVM; not having to do 
routing on Z is a major win, the guests work like any other machine in DHCP, 
etc, and VSWITCH also handles hardware failover and aggregation easily and 
transparently. 
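A minimal sketch of the zVM side of a layer 2 VSWITCH (device numbers and the guest name are illustrative; check the CP Planning and Administration book for your zVM level before using):

```
/* Layer 2 VSWITCH with two OSA uplinks for failover (illustrative rdevs) */
DEFINE VSWITCH VSW1 RDEV 1000 2000 ETHERNET
/* Authorize a guest (here LINUX01) to couple its NIC to the switch */
SET VSWITCH VSW1 GRANT LINUX01
```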
    
    10. Strongly consider a devops-like model of deployment, with scripts 
provisioning the applications. If you do this, upgrading is much easier: you 
build a VM with the new Linux level, run your script, and test. Upgrading in 
place is still a bit iffy, and most people don't do it. That's one place where 
the Debian/Ubuntu folks really have it together; a new version is as easy as 
'apt-get dist-upgrade', and it Just Works.
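A minimal sketch of what such a provisioning script can look like (the group, user, and paths are invented for illustration; it defaults to dry-run here so it is safe to read and try):

```shell
#!/bin/sh
# Idempotent provisioning sketch: rebuilding on a new Linux level is just
# "install the new distro, run this script, test".
set -eu
DRY_RUN=${DRY_RUN:-1}   # dry-run by default in this sketch; DRY_RUN=0 applies

run() {
    # Print the step in dry-run mode; otherwise execute it.
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run groupadd -f appgrp
run useradd -g appgrp -m appuser
run mkdir -p /opt/app
run rsync -a /mnt/repo/app/ /opt/app/
```

Because every step goes through one function, the same script documents the build, performs it, and previews it.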
    
    11. Buy software maintenance for whatever distribution you use, and make 
sure at least one system can reach the public Internet, preferably one that 
can be set up and used as a local upgrade repository. It doesn't have to be on 
Z, but Linux maintenance is a lot more volatile than you may be used to, and 
you don't want to rely on external downloads when a couple thousand guests 
need patching ASAP.
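A sketch of the local-repository idea for RHEL-style systems (the repo id and paths are assumptions; `reposync` and `createrepo_c` come from the yum/dnf tooling, and Debian/Ubuntu shops would use apt-mirror instead):

```shell
#!/bin/sh
# Pull a distribution repo to local disk, then serve that directory to guests.
REPO_ID=${REPO_ID:-rhel-7-for-system-z-rpms}   # illustrative repo id
MIRROR_DIR=${MIRROR_DIR:-./mirror}             # in practice e.g. /srv/mirror

mkdir -p "$MIRROR_DIR"
if command -v reposync >/dev/null 2>&1; then
    reposync --repoid="$REPO_ID" --download-path="$MIRROR_DIR"
    createrepo_c "$MIRROR_DIR/$REPO_ID"   # build repo metadata for clients
else
    echo "reposync not installed here; this host can't act as the mirror"
fi
```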
    
    12. Get your distributed folks to make 'putty' and associated tools (a good 
VT100/ANSI terminal emulator and SSH client) available on your desktop, and 
make sure there is a NFS or HTTP server accessible to you to support the 
install. You can do it with FTP, but NFS or HTTP are LOTS easier.
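For the HTTP route, anything that can serve a directory works; a sketch (paths are assumptions, and python3's built-in server is plenty for an install):

```shell
#!/bin/sh
# Stage the distribution install tree where a web server can see it.
INSTALL_TREE=${INSTALL_TREE:-./install-tree}   # in practice e.g. /srv/install/rhel7
mkdir -p "$INSTALL_TREE"
echo "point the installer at http://$(hostname):8080/"
# To serve it, run this on the install server (blocks until interrupted):
#   cd "$INSTALL_TREE" && python3 -m http.server 8080
```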
        
    If you want to tell us more about what you intend to test, we can make more 
detailed suggestions.    
    
    
