Hi Dave & co-listers,

1°
Is OMEGAMON capable of drilling down into "MVS Overhead"?
If not, how can further PD (Problem Determination) be done?
2°
Is the CPU-MF (HIS) approach based on SMF 113 not too long-winded, and
overkill?

CPU-MF (HIS) is NOT a substitute for traditional performance or capacity
metrics, and it does NOT indicate the capacity being achieved by either the
LPAR or the processor.

3° Where are the "phantom load" (or the "phantom" logical partition) and the
"cap(ping) pattern" reported, IF they are reported at all? Where can we
find them?

SG24-6472-03 System Programmer's Guide to Workload Manager (WLM) (last
update 20 April 2010) (applies to z/OS V1R8 - March 2008)
http://www.redbooks.ibm.com/abstracts/sg246472.html
3.4 Soft capping (page 101 ... etc.)
3.4.1 Defined capacity
When WLM caps the logical partition, there are three possible scenarios:
1° The percentage share of the logical partition processing weight is
equal to the percentage share of the defined capacity. (relative weight
= defined capacity)
In this case, the capping is done at the logical partition processing
weight. This is the best and recommended situation.
2° The percentage share of the logical partition processing weight is
greater than the percentage share of the defined capacity. (relative
weight > defined capacity)
In this case, capping at the weight has no effect, because the logical
partition has more MSU than allowed. To enforce the percentage share of
WLC, PR/SM subtracts a certain number from the logical partition processing
weight to match the desired percentage share and calculates a “phantom”
logical partition that receives the remaining unused weight. This
guarantees that other logical partitions are not affected by the weight
management, because the total logical partition processing weight stays the
same.
3° The percentage share of the logical partition processing weight is
lower than the percentage share of the defined capacity. (relative
weight < defined capacity)
WLM defines a cap pattern that repeatedly applies and removes the cap at
the logical partition processing weight. Over time, this looks as though
the partition is constantly capped at its defined capacity limit.
The cap pattern depends on the difference between the capacity based on
the weight and the defined capacity limit. If the weight is small compared
to the defined capacity, the capacity of the partition can be reduced
drastically for short periods of time. This can cause performance to
suffer. Therefore, we recommend keeping both definitions as close as
possible.
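
To make the three scenarios above concrete: a minimal sketch in Python (all
weights and MSU figures below are made up, and the real PR/SM weight
management arithmetic is of course more involved than this):

# Rough sketch of the three soft-capping cases in SG24-6472 3.4.1 (illustrative only).

def capping_scenario(lpar_weight, total_weight, defined_capacity_msu, cec_capacity_msu):
    """Compare the LPAR's weight share of the CEC with its defined-capacity share."""
    weight_share = lpar_weight / total_weight                  # share by processing weight
    capacity_share = defined_capacity_msu / cec_capacity_msu   # share by defined capacity

    if abs(weight_share - capacity_share) < 1e-9:
        return "1° cap at the LPAR weight (weight share = capacity share)"

    if weight_share > capacity_share:
        # PR/SM lowers the effective weight to match the capacity share and gives
        # the remainder to a "phantom" logical partition, so the total stays the same.
        effective_weight = capacity_share * total_weight
        phantom_weight = lpar_weight - effective_weight
        return f"2° phantom partition absorbs ~{phantom_weight:.0f} of {lpar_weight} weight units"

    # weight_share < capacity_share: capping at the weight alone would give the LPAR
    # less than its defined capacity, so WLM alternates the cap on and off over time
    # (the "cap pattern"); the wider the gap, the choppier the pattern.
    gap_points = 100 * (capacity_share - weight_share)
    return f"3° cap pattern (capacity share exceeds weight share by {gap_points:.0f} points of the CEC)"

# Example: weight 300 of 1000, defined capacity 120 MSU on a 600 MSU CEC -> case 2°.
print(capping_scenario(300, 1000, 120, 600))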

Where is "phantom load" or the "phantom logical partition" reported?   (IF
reported !?)    Where is "cap(ping) pattern" reported? (IF reported !?)
RMF ???

Is "cap pattern"  part of the "MVS Overhead" ?  Because WLM must be working
like hell.

Please, enlighten us.
Jan


On Fri, Feb 15, 2013 at 12:48 AM, Dave Barry <dba...@ups.com> wrote:

> Jan,
>
> The description of MVS overhead in the User Guide is technically correct.
>  You might think of it as uncaptured time--basically hardware overhead not
> attributable to an address space or service class measurement taken by RMF.
>  That includes interrupt handling due to I/O, page faults, etc., performed
> in supervisor state.  I think some listers may dicker over the fine points,
> but the traditional way of apportioning this overhead is by attribution of
> I/O activity.
>
> The MVS overhead reading will fluctuate.  If you have more than one CP, it
> is not uncommon to see TCB and enclave percentages greater than 100.  The
> unnormalized, sliding scale runs from times one to times the number of CPs.
>  That is, an LPAR with two CPs running at a (normalized) level of 75
> percent overall can display as "150 percent of 200 percent."  The
> unnormalized value makes it easier to compare machines with different
> numbers of CPs, but if you want Omegamon to normalize the numbers to 100
> percent, you have the option.
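>
> For what it is worth, the arithmetic behind that "150 percent of 200
> percent" example is simply a division by the number of CPs; a tiny sketch
> in Python, using the figures from the paragraph above:
>
> # Un-normalized: each CP contributes up to 100%, so a 2-CP LPAR tops out at 200%.
> n_cps = 2
> unnormalized_busy = 150.0                       # shown as "150 percent of 200 percent"
> normalized_busy = unnormalized_busy / n_cps     # scaled back to a 0-100% view
> print(normalized_busy)                          # 75.0, i.e. 75 percent overall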
>
> In my observation, as the underlying hardware architecture becomes more
> complex, MVS overhead has become less well correlated to interrupt rates.
>  You may not be able to do much about it, but you might try comparing your
> systems on the basis of SMF type 113 data from Hardware Instrumentation
> Services (HIS).  A higher or lower Relative Nest Intensity (RNI),
> indicating a less processor-cache-friendly workload mix, may help explain
> the differences in uncaptured time.  A good explanation can be found in
> "LSPR Workload Categories" on IBMs Web site at
> https://www-304.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprwork?OpenDocument
> .
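>
> To give the flavor of the idea only, a sketch in Python: the sourcing
> percentages and weighting coefficients below are placeholders I made up,
> not the published ones; the real per-machine RNI coefficients are in that
> LSPR document, and the underlying counters come from the SMF type 113
> records that HIS cuts.
>
> # RNI is a weighted mix of where L1 cache misses are sourced from (chip-level
> # cache, local book, remote book, memory).  Placeholder weights only -- look up
> # the real per-machine-generation values in the LSPR workload-category paper.
> def relative_nest_intensity(l3_pct, l4_local_pct, l4_remote_pct, mem_pct,
>                             weights=(0.5, 1.0, 2.0, 8.0), scale=1.0):
>     return scale * (weights[0] * l3_pct + weights[1] * l4_local_pct +
>                     weights[2] * l4_remote_pct + weights[3] * mem_pct) / 100
>
> print(relative_nest_intensity(60, 25, 10, 5))   # higher value -> less cache-friendly mix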
>
> Sorry if this is more than you wanted to know, but IBM-MAIN is definitely
> a good place to ask.
>
> Dave Barry
> Doctor of Omegamology
> UPS
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Jan Vanbrabant
> Sent: Thursday, February 14, 2013 4:28 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: ''MVS overhead'' (as indicated by OMEGAMON)
>
> Hi,
>
> A customer of mine is running HiperDispatch with a "defined capacity" (hence
> possible soft capping based on the 4-hour rolling average).
>
>
>
> In the 'System CPU Utilization' table view of Omegamon XE on z/OS,
> and more specifically in the 'Workload CPU Usage' bar chart, we
> regularly see (very) high values for the 'MVS OVERHEAD' attribute.
>
> Some examples:
>
> 1°
>
> Average CPU Percent:  19.5  (TCB % 19;  SRB % 2; partition overhead
> negligible)
>
> MVS OVERHEAD :  13
>
> 2°
>
> Average CPU Percent:  82  (TCB % 50;  SRB % 2; partition overhead
> negligible)
>
> MVS OVERHEAD :  31
>
> 3°
>
> On a small system, on the contrary, the MVS OVERHEAD is small.
>
> Average CPU Percent:  27  (TCB % 22;  SRB % 6; partition overhead
> negligible)
>
> MVS OVERHEAD :  2
>
> This system has bad PIs; for example, one Service Class has a PI of 18.5
> while there are no resource bottlenecks at all and the system is
> 96.9% idle.
>
>
>
>
>
> We would like to know how we have to interpret <MVS OVERHEAD>.
>
> The user's guide explains it as:
>
>                 CPU utilization percentage that is not attributable to any
> user or address space. It is calculated as the difference between the total
> software utilization times and the total hardware time ((TCB + SRB)-CPU)
> over the last reporting interval. Valid value is a 4-byte integer.
>
>                 In a complex with more than one CPU, z/OS overhead can be
> computed based on the number of processors, or normalized to a maximum of
> 100%.
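>
> My own attempt to put that definition into numbers (made-up figures, and I
> may well be reading the formula the wrong way around), sketched in Python:
>
> # Uncaptured ("MVS overhead") time as I read the User Guide: the gap between
> # the physical CPU time charged to the LPAR and the captured TCB + SRB time.
> interval_seconds = 60.0
> n_cps = 2
> cpu_seconds = 50.0        # total hardware CPU time consumed in the interval
> tcb_seconds = 34.0        # captured task (TCB) time
> srb_seconds = 6.0         # captured SRB time
>
> uncaptured = cpu_seconds - (tcb_seconds + srb_seconds)          # 10.0 seconds
> overhead_unnormalized = 100.0 * uncaptured / interval_seconds   # scale runs to n_cps * 100%
> overhead_normalized = overhead_unnormalized / n_cps             # normalized to a 100% scale
> print(overhead_unnormalized, overhead_normalized)               # ~16.7 and ~8.3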
>
>
>
> ???
>
> Are any of you able to explain this in a better way? (So that I am able to
> understand what it means.)
>
>
>
> And where should we start looking to find out what's happening underneath?
>
> Poor WLM definitions? (The performance isn't bad, after all.)
>
> But if the overhead can go down, MSU-based CEC invoices get lower; so
> the point of view is rather cost-oriented.
>
>
>
>
>
> In this sysplex with 6 systems, 34 Service Classes are actually
> defined....
>
> In the system of the second example, 16 SCs of the 34 are actually used.
>
> (I don't know whether it might be important, but on top of the 34 there are
> another 8 system SCs with
> a goal of SYSTEM; they are SYSTEM, SYSSTC, SYSSTC1, SYSSTC2, SYSSTC3,
> SYSSTC4, SYSSTC5, SYSOTHER).
>
>
>
>
>
> IBM-MAIN is probably not the most appropriate list server or forum to post
> this OMEGAMON-based question on in the first place? Is there a better place,
> to your knowledge?
>
>
>
> Jan
>
>
>
> PS
>
> The customer has already reduced the number of logical processors by running
> Alain Maneville's EXCELLENT (!) LPARDesign-HD-V3 spreadsheet.
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
