In my "best practices" presentation, I recommend against the use of PAV
because of some incidents where it degraded performance (bad disk
performance was inexplainable other than PAV was enabled). There have
been no presentations or measurements validating PAV improved
performance in real life - the IBM presentations confirm that PAV should
be used "when a device is read/only or mostly read, and there is a
queue". In reality with all the caching mechanisms, this rarely happens.
Would highly recommend against using PAV for a Linux workload until you
can show a need and then only use it for volumes that have a
requirement. PAV is a good technology and was required to support large
disks (3390-27) in z/OS - and a tool to use WHEN NEEDED.
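If you want to check whether a guest's volumes actually see queuing
before turning PAV on, the DASD driver's statistics interface is one
place to look. A minimal sketch, assuming a reasonably current s390
kernel (the exact histogram headings vary by kernel level):

   # turn on global DASD statistics collection inside the guest
   echo set on > /proc/dasd/statistics

   # ... run the workload for a while ...

   # dump the counters; a well-populated "# of req in chanq" histogram
   # means requests are stacking up behind the device, which is the
   # queuing case where PAV could actually help
   cat /proc/dasd/statistics

   # turn collection back off when done
   echo set off > /proc/dasd/statistics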
Krueger Richard wrote:
We are recent installers of z/VM 5.4 and z/Linux (RHEL 5.3), and we are
testing applications on z/Linux to decide whether we want to move
workload to it from our current non-mainframe environments. At this
time we only have access to ECKD DASD for testing. We recently
installed an EMC DMX4 storage array with the HyperPAV feature, and our
z/VM and z/Linux volumes are all allocated on the DMX4. If I run CP Q
PAV, it correctly shows the base and alias volumes we have configured
on the DMX4.
Using the manual Linux on System z: How to Improve Performance with PAV
(May 2008, SC33-8414-00), I attempted to follow the steps to test
HyperPAV on z/Linux. One of the steps says to CP ATTACH a volume to the
z/Linux guest. We cannot do that because our volumes are attached to
SYSTEM and allocated as non-full-pack MDISKs for assigning space to
z/Linux guests. We discussed this with IBM, and the initial thought was
to try COMMAND DEFINE HYPERPAVALIAS in the user directory statements.
But that does not work, because it requires a full-pack MDISK, which
they tell me we can create by coding cylinder 0 - END on the MDISK
statement and then letting the z/Linux guest format the volume, with
the requirement that z/Linux assign the same volume label as the
original on cylinder 0. We also discussed that we can still use the
more traditional PAV with our current MDISK definitions, using the user
directory DASDOPT / MINIOPT statements, and I still need to try that.
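For reference, here is roughly what the two approaches would look like
in the directory. This is only a sketch - the device numbers, volser,
and alias address are made up, and the operand syntax (especially
PAVALIAS) is from memory, so verify it against the PAV manual and CP
Planning and Administration before using it:

   * HyperPAV: full-pack minidisk (cylinder 0 through END) with a
     HyperPAV alias defined at logon time:

     USER LNXTST1 ...
       COMMAND DEFINE HYPERPAVALIAS 0299 FOR BASE 0200
       MDISK 0200 3390 0000 END LXV001 MR

     The guest then low-level formats the pack itself, keeping the
     original label, e.g. something like "dasdfmt -d cdl -l LXV001"
     so the volser on cylinder 0 matches what z/VM expects.

   * Traditional PAV: keep the existing non-full-pack minidisks and
     follow each MDISK with MINIOPT (DASDOPT is the equivalent for
     full-pack or dedicated volumes); PAVALIAS below is my best guess
     at the operand:

     MDISK 0101 3390 0143 9874 LXV001 MR
       MINIOPT PAVALIAS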
At the moment, we are creating z/Linux test guests by cloning two
3390-9 volumes (our gold copy) using FDR on z/OS. These contain the OS
and related files, with space reserved the way our z/Linux support team
lays it out on the non-mainframe platforms. They normally use about 10
GB on the non-mainframe platforms; with two 3390-9s they get about 14
GB. We allocate minidisks 100 (CYL 1 142) and 101 (CYL 143 9874) on the
first 3390-9, and 102 (CYL 1 10016) on the second 3390-9, and they use
LVM to manage the space as desired. Any extra space that a particular
guest might need for whatever application is being tested is allocated
as MDISKs carved from portions of additional 3390-9 volumes, or in some
cases entire 3390-9 volumes.
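In directory terms, that layout comes out roughly as follows (the
volsers and the MR mode are made up; the third operand is the starting
cylinder and the fourth is the number of cylinders):

   MDISK 0100 3390 0001 0142 LXGLD1 MR
   MDISK 0101 3390 0143 9874 LXGLD1 MR
   MDISK 0102 3390 0001 10016 LXGLD2 MR

Inside the guest, the two big minidisks would typically end up as LVM
physical volumes in one volume group - something like the following,
with illustrative Linux device names:

   # 0101 and 0102 surface as DASD devices, e.g. dasdb and dasdc
   pvcreate /dev/dasdb1 /dev/dasdc1
   vgcreate vg_linux /dev/dasdb1 /dev/dasdc1
   lvcreate -l 100%FREE -n lv_root vg_linux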
1. Should we be content with using traditional PAV and not worry about
trying to make HyperPAV work? Would the z/Linux OS kernel on the two
cloned 3390-9 volumes get any benefit from HyperPAV, or would we be
more likely to see a benefit on the volumes allocated to the
application databases run by the guest?
2. Should we change the way we have allocated our gold-copy volumes so
that they are full-pack MDISKs and we can take advantage of HyperPAV?
3. When we installed the DMX4, we considered allocating many of the
larger volume sizes (27, 54, 220) but did not, because we thought we
might be able to take advantage of them for z/VM MDISK allocation using
the allocate-from-a-pool concept, where larger volumes would be better.
Knowing now that HyperPAV only works with full-pack MDISK allocation,
it seems it was a good idea that we did not.
4. Does anyone have any other thoughts on this? Are there other ways to
take advantage of HyperPAV? What z/Linux applications could really
benefit from it? How do you allocate your ECKD DASD space to z/Linux
guests - as full-pack or non-full-pack MDISKs?