> >I wonder what if any studies on this have been done in the lab.
> >It would be nice if an IBM performance expert  like Kathy Walsh
> >could weigh in.

  The last performance studies I remember for paging were 
around the time of MVS XA/SP2.2.0.  Very little has been done
in the area of paging performance since then except for the 
PAV stuff to allow two concurrent operations to a page data set. 

> 
> I had the 'honour' of deleting and adding several local page data 
> sets on several lpars. They were a mixture of 1.10 and 1.12, I 
> think. What I did observe (and that clashed with what I thought I 
> knew about ASM) is the following:
> 
> 1) Adding one or more locals, I expected them to first fill up to 
> about the same percentage as the ones that were already in use (same
> size page ds, much faster -new- controller). No such luck. It looked
> to me like *all* of them were filling up (percentage wise) in about 
> the same manner. Meaning that the 'old' locals had about 27%, the 
> new ones right after add 0%. A day later the old ones had 35%, the 
> new ones 8%. About the same behaviour when adding locals of the same
> size on the same controller - we only have one DASD controller per 
> sysplex, and having two was the time when we migrated from one to the 
> other.

  The page data set selection algorithm considers service time for
the devices, but not the full percentage.  One could argue that
the full percentage should be considered, since it affects the
likelihood of finding contiguous slots and the CPU time to find
available slots, but that is not how it currently works.
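As a toy illustration of that selection behavior (the names, numbers, and
the rule itself are hypothetical assumptions for illustration; ASM's real
algorithm is internal to z/OS), a chooser that ranks candidates purely by
service time would treat a nearly empty data set and a fuller one on the
same controller identically:

```python
# Toy model of a selection rule that considers only device service
# time, not how full each page data set is.  Everything here is an
# illustrative assumption, not ASM internals.

def pick_page_dataset(candidates):
    """Return the candidate with the lowest recent service time.

    Each candidate is (name, service_time_ms, percent_full).
    percent_full is deliberately ignored, mirroring the behavior
    described above.  Ties go to the first entry listed.
    """
    return min(candidates, key=lambda c: c[1])

locals_ = [
    ("PAGE.OLD1", 1.8, 35),  # older controller, fuller
    ("PAGE.NEW1", 0.4, 8),   # new controller, nearly empty
    ("PAGE.NEW2", 0.4, 8),
]
print(pick_page_dataset(locals_)[0])  # -> PAGE.NEW1
```

In practice one would expect work to spread across devices with comparable
service times; the point of the sketch is only that fullness does not enter
the choice, which is consistent with old and new locals filling at roughly
the same rate.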
 
> 2) A pagedel finishes MUCH faster than it ever did. It looked like 
> ASM is actively shifting slots away from the to-be-deleted page data
> set. A pagedel finishes in under a minute. This used to be a really 
> slow process because nothing was actively done.

  Pagedel has always done active removal.  There were some 
problems with doing active removal of VIO in the original SP3.1.0
implementation, but that was fixed in SP3.1.3.
 
> 6) I bemoan IBMs failure to give us a good means of figuring out who
> is using HVSHARE or HVCOMMON storage and how much storage-above-the-
> bar is actually *used*, i.e. backed. As far as I know, there still 
> isn't any tracking done for HVCOMMON storage, no means of reporting 
> about it. No way to know who is excessively using common storage 
> above the bar. Same for HVSHARE. Unless you're named Jim Mulder and 
> know where to look in a dump, us lowly customers cannot even check 
> that in a dump. Am I mistaken in the reporting capabilities? Has 
> that been fixed by now? Or is it another means of IBM trying to sell
> $$$$ software service contracts to get that done only by IBM? Not to
> mention the frustration until you find someone who can actually *do* it.

  IPCS has RSMDATA HVCOMMON.  That at least tells you the owner of
the memory objects.

  For HVCOMMON obtained via the IARST64 macro, the macro
documentation says:

There is diagnostic support for 64-bit cell pools, created
by IARST64, in IPCS via the CBFORMAT command.  In order to
locate the cell pool of interest, you need to follow the
pointers from the HP1, to the HP2, to the CPHD.  For common
storage, the HP1 is located in the ECVT.  CBF ECVT will
format the ECVT, then do a FIND on HP1.  Extract the address
of the HP1 from the ECVT and

  CBF addrhp1 STR(HP1)

will format the HP1.  Each entry in the HP1 represents an
attribute set (storage key, storage type (pageable, DREF,
FIXED), and fetch protection (ON or OFF)).  The output from
this command will contain CBF commands for any connected
HP2s.  Select the CBF command of interest and run it to
format the HP2.  The HP2 consists of pointers to cell pool
headers for different sizes.  Choose the size of interest
and select the command, which will look like this:

  CBF addrcphd STR(IAXCPHD)

This will format the cell pool header.  To see details about
all of the cells in the pool, use the EXIT option as follows:

  CBF addrcphd STR(IAXCPHD) EXIT

For private storage, the HP1 is anchored in the STCB.  The
quickest way to locate the HP1 is to run the SUMMARY FORMAT
command for the address space of interest.  Locate the TCB
that owns the storage of interest and then scroll down to
the formatted STCB.  The HP1 field contains the address of
the HP1.  From here, the processing is the same as described
for common storage above.

You can also use the EXIT option as follows:

  CBF addrhp1 STR(HP1) EXIT

to produce a report that summarizes the storage usage under
that HP1.
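Pulling those steps together, the common-storage chain looks like this in
an IPCS session (the `addr` values are placeholders you read from each
preceding command's output, per the documentation above):

```
CBF ECVT                         format the ECVT, then FIND HP1 for its address
CBF addrhp1 STR(HP1)             format the HP1; output lists CBF commands for HP2s
(run the HP2 command from that output and pick the cell size of interest)
CBF addrcphd STR(IAXCPHD)        format the cell pool header
CBF addrcphd STR(IAXCPHD) EXIT   detail every cell in the pool
```

For private storage, substitute SUMMARY FORMAT to locate the HP1 address in
the STCB, then proceed identically from the CBF addrhp1 step.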

 
Jim Mulder   z/OS System Test   IBM Corp.  Poughkeepsie,  NY

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
