Re: RSH Consulting - RACF Survey - June 2019 - Performance - ERV

2019-07-22 Thread Horst Sinram
Prior to z/OS 1.3 (i.e., some 15 years ago) there were reasons for 
increasing the value of the ERV parameter, and from time to time one can still 
see values of up to 50,000.
Since the ERV promotion is being restarted, there are no good reasons for such 
exaggerated values, and I'd consider the default of 500 to be good.

And just to extend the topic a bit - the IEAOPT parameters that one would 
frequently want to specify non-default values for are:

-CPENABLE=(10,30)
-ManageNonEnclaveWork=Yes (when using WAS/Liberty with enclave management)
-BLWLIntHD=5 for more aggressive blocked workload support, notably for DB2

And for both RCCFXET and RCCFXTT, AUTO should be in place.

Then there are a few that deserve consideration, such as INITIMP=E, 
MT_ZIIP_MODE=2.
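Pulled together, a minimal IEAOPTxx sketch with just the values mentioned above 
might look like this (illustration only - whether these values fit must be 
evaluated per installation, and defaults apply for anything not coded):

   CPENABLE=(10,30)
   MANAGENONENCLAVEWORK=YES
   BLWLINTHD=5
   RCCFXET=AUTO
   RCCFXTT=AUTO
   INITIMP=E
   MT_ZIIP_MODE=2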

It is possible that the defaults will eventually change:-)

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: honorpriority=no in WLM

2019-06-19 Thread Horst Sinram
The OP's question was about DB2 workloads. Resource group capping for DB2 
workloads would be pretty risky unless you could really guarantee that you do 
not share resources with your production work.

An RFE for period-level resource groups has been rejected in the past, for both 
technical reasons and the (significant) effort. And, to add some larger 
context: for those installations that decide for themselves that the 
consumption-based Tailored Fit Pricing model is the right way to go, the whole 
capping discussion will become a moot point.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: honorpriority=no in WLM

2019-06-17 Thread Horst Sinram
Duncan,

why do you believe that HonorPriority=No could help? You will only be confining 
work to the already potentially overloaded zIIPs by disallowing help from the 
CPs. So, HonorPriority=No can only make things worse.
I assume that you don't want to go so far as to configure all zIIPs offline to 
that partition; the way to go is really to have sufficient zIIP capacity for 
your workload peaks.

Horst Sinram - IBM z/OS Workload and Capacity Management



Re: SAP Processor Utilization

2019-06-04 Thread Horst Sinram
On z/OS, you can use SMF78.3 and the RMF IOQ report. Plus, there are some 
related overview conditions.

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: CBU expiration and activate dynamic

2019-05-13 Thread Horst Sinram
Multiple CBU records may be active at the same time. CBU resources can be 
activated, deactivated, or the activation level changed concurrently (from an 
LPAR perspective).
I assume you're using shared processors only, and z/OS. Usually, from z/OS you 
may need to configure additional logical processors online to exploit more 
capacity in a given LPAR, or (recommended) configure logicals offline before 
physically deactivating resources if the new number of physical processors in 
the shared processor pool would be less than the number of logicals online to 
any single partition.
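For reference, the on/offline changes are driven with the CONFIG (CF) operator 
command; the CPU address used here is just a placeholder:

   CF CPU(3),ONLINE       bring logical CPU 3 online
   CF CPU(3),OFFLINE      take logical CPU 3 offline
   D M=CPU                display the current processor configuration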
You cannot extend the CBU resources *from a given record* beyond the grace 
period. The Capacity on Demand User's Guide for your hardware model is a good 
resource, e.g., regarding the grace period handling.
In general, there should be no reason to IPL z/OS unless you had disabled 
DYNCPADD in your LOADxx, or you run up against its limit and you need 
to utilize more logical processors. z/OS subsystems are usually well prepared 
to deal with model changes.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: "Workload" field in SDSF

2019-03-21 Thread Horst Sinram
Hi Andy,

there are many different ways to obtain the workload name. Probably the 
simplest is to issue a Sysevent REQ(F)ASD against the address space and find the 
workload name in the IRARASD as RASDWKLD.
Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Novice query on WLM velocity

2019-01-21 Thread Horst Sinram
Peter,
classification is under control of the classification rules in your WLM service 
definition.
For started tasks, you need to check the classification rules in the STC 
subsystem. For STC, a few particularities apply:
(1) The default service class, if not assigned differently, is SYSSTC
(2) System-defined properties (aka "SPM rules"), such as from SCHEDxx, are 
automatically merged into the classification rules. If you do not explicitly 
include SPM SYSTEM and SPM SYSSTC, they will be pulled in logically behind your 
own classification rules. The SPM rules can classify into the SYSTEM or SYSSTC 
service classes (see the sketch after this list).
(3) Some system address spaces will always be classified into SYSTEM, or at 
least SYSSTC, for reliability reasons. 
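Purely as a hypothetical sketch of what the STC rules then look like (the TN 
entries and service class names below are made up; only the SPM behavior is as 
described above):

   Subsystem Type STC   (default service class: SYSSTC)
     Qualifier type   Qualifier name   Service class
     TN               DB2*             STCHI      <- hypothetical rule
     TN               CICS*            STCMED     <- hypothetical rule
     SPM              SYSTEM           SYSTEM     <- pulled in behind your rules
     SPM              SYSSTC           SYSSTC     <- pulled in behind your rules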

https://www.redbooks.ibm.com/abstracts/sg246472.html?Open is a bit older but 
probably still a good place to get started.
All the gory details are in 
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.3.0/com.ibm.zos.v2r3.ieaw100/toc.htm

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Service class changes

2018-11-12 Thread Horst Sinram
See SMF Type 90, subtype 30: 
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieag200/iea3g2_RESET_command_complete.htm
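For completeness, the subtype 30 record is written when a RESET (E) operator 
command completes; the jobname and service class below are just placeholders:

   E MYJOB,SRVCLASS=BATLOW    reassign MYJOB to service class BATLOW
   E MYJOB,QUIESCE            quiesce the address space
   E MYJOB,RESUME             resume management per its classification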

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Measuring Usage of zIIP per address space

2018-09-05 Thread Horst Sinram
The SMF30 Processor Accounting Section is a good place for the total and zIIP 
processor consumption but if you are also interested in the delay aspects you 
won't find anything in SMF30. For delay information you can turn to RMF Mon 
III, or just use the postprocessor reports with report classes defined over 
those address spaces that you're interested in.
As to what fields you should look at, be sure to read the introduction to 
SMF30 in the SMF manual.
If enclaves are involved then 
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.ieaw200/ieaw20046.htm
 describes the accounting scheme. 
Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: Where is the WLM website???

2018-06-07 Thread Horst Sinram
Hi Kees,

glad that someone notices :-)
But seriously - while some restructuring of the web site is ongoing, we're 
working on cutting over to mostly GitHub-based repositories.
If you open the presentation at 
https://github.com/IBM/IBM-Z-zOS/blob/master/zOS-WLM/WLM%20Tools.pdf you'll see 
a summary of the WLM related tools. Alain's LPAR design tool can be downloaded 
from 
ftp://public.dhe.ibm.com/eserver/zseries/zos/wlm/LPARDesign-HD-zPCR-V9-T03_IBM.zip
 

We expect a restructured "web page" to become available shortly at 
https://github.com/IBM/IBM-Z-zOS/tree/master/zOS-WLM/WLM%20Documents.md 
(It is NOT up and running as of today).

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: Can a job determine its own WLM priority?

2018-05-25 Thread Horst Sinram
Phil,

there is no "WLM priority". There is a dispatch priority, and there is a WLM 
importance (and goal).
- Which one are you interested in? 
- Only in simple cases would the entire "job" (address space) be managed to 
the same service class period, and therefore the same dispatch priority. Are 
you definitely not interested in, e.g., enclave management?
- Is the "job" authorized?

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: WLM?

2018-05-04 Thread Horst Sinram
Anne,
a daemon is a long-running task, like an STC. It makes very little sense to use 
multi-period goals (with short durations) because the daemon will eventually 
fall through to the later periods. A single-period velocity goal is the 
preferred goal type for a daemon.
Only if the daemon were at risk of looping would you consider a multi-period 
goal with a *really huge* duration value. Even in that latter case, a resource 
group would be a better safety net.
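As a sketch only (the numbers are invented, not a recommendation), such a 
daemon service class would simply look like:

   Service class DAEMON   (hypothetical)
     Period 1:  Execution velocity 30, Importance 3   (single period, no duration)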
Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: Advice for WLM tuning of OMVS forked address spaces

2018-02-04 Thread Horst Sinram
Hi Kirk,

my initial assumption would be that the "spike" effect they're seeing is not 
specific to the fact that OMVS initiators are being used.

As you mentioned, it is important that work newly arriving in the system is 
properly classified and does not have overly aggressive goals. However, it is 
equally important that the other "important" work is correctly classified, with 
a goal that is aggressive enough to protect it against other work. This last 
item is "usually" what contributes most to such unwanted effects.

Then, depending on the JCL being used, jobs may consume some (or more) CPU 
resources during initiation before classification is in effect. For such cases 
there is the IEAOPTxx INITIMP parameter. Specifying INITIMP(E), or one of the 
numeric values depending on the workload to be protected, can be useful to 
minimize the spike effect due to incoming work.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Hiding cross LPAR capacity information

2017-07-17 Thread Horst Sinram
In addition, by removing Global Performance Data Control authority one would 
also disable some CPC-wide optimizations in z/OS HiperDispatch.

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: DB2 Locks and WLM Blocked Workload Support?

2017-02-06 Thread Horst Sinram
Peter,

DB2 has the most comprehensive support to interact with WLM to address 
contention. 
DB2 will try to notify WLM about resource contention to the extent it knows 
about dependencies. Based on such notifications, DB2 and WLM support:
- Regular "enqueue promotion" and short-term promotion. Both promote the holder 
of the lock to an elevated dispatch priority, increasing its chances of getting 
dispatched. (Search for "Sysevent ENQHOLD".)
- Chronic contention relief. This elevates the resource holder to the highest 
dispatch priority of any waiter. (Search for IWMCNTN.)

Still, there can be situations where not even DB2 (and certainly no one else) 
knows the dependencies. For such cases the blocked workload support provides 
the capability that *any* address space that has been blocked - not dispatched - 
for a given time will be granted one time slice. That's a very small amount of 
processor time, and it is handed out *independently* of any contention. Yet it 
has proven to be a very effective way to address DB2 latch contention. (Latches 
typically gate such a short path that the contention can be resolved through 
one or a few of such "trickles".)
See the ERV and BLWLINTHD IEAOPTxx parameters (BLWLTRPCT is almost never the 
limiting factor).
Searching for OA44526 will give you a good list of recommendations.
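For reference, those knobs live in IEAOPTxx. A minimal sketch using the values 
discussed here and earlier in this digest (site-dependent, not a blanket 
recommendation; BLWLTRPCT is simply left at its default):

   ERV=500          enqueue promotion: CPU service units at the promoted priority
   BLWLINTHD=5      consider work blocked after 5 seconds (shorter than the default)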

DB2 and GRS locks are unrelated (except for allocation related ones).

Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: LPAR is 100% capped but MVS is busy only 60%. Not too bad, right?

2016-10-28 Thread Horst Sinram
Peter,
> When an LPAR is capped due to the R4HA being exceeded, RMF 3 CPC report shows 
> this as "WLM Capping %:" being greater that zero. But this alone is 
> not an indication if the capping really hurts. If MVS busy (RMF 3 SI report) 
> is nowhere near 100%, then the situation is not really bad. 
As you already mentioned, the %WLM Capping metric just indicates to what 
extent a cap had been in place during the interval. 100% just means that phantom 
weight capping was used. The more relevant metric is %WLM Actual (or %ACT), 
which identifies to what extent the consumption of the LPAR was close to the 
effective limit.

>Say WLM capping is 100% for a 300 second period, and MVS busy is 63% for the 
>same period, then I would say everything that wanted access 
>to processors should have gotten access. 
Over the interval(!), MVS still decided to go into a wait 37% of the time. 
It may still be worthwhile to check %Actual capping as well as the In-Ready work 
unit queue distribution (or processor delays in the Workload Activity report).

>However, those task working on the processors will work slower due to PR/SM 
>taking away the physical CPs from the logicals due to capping being active.
Less dispatch time on the physical processors will be available.  
 >This is true even for vertial high (pseudo dedicated) CPs.
More precisely: if the LPAR is capped to an MSU limit below its weight 
equivalent (positive phantom weight capping), its PR/SM entitlement (weight) 
gets reduced. That means that vertical highs could become mediums or even lows. 
But those logicals that remain VHs will continue to be dispatched as VH 
("pseudo dedicated").
In 
ftp://public.dhe.ibm.com/eserver/zseries/zos/wlm/Capping_Technologies_and_4HRA_Optimization_2016.pdf
you'll find some more details.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Would HiperDispatch likely delay heavy multitasking job?

2016-10-04 Thread Horst Sinram
Peter,

your second bullet isn't really correct. First, as described in my previous 
post, nodes will be able to help each other. Then, when deemed required, TCBs 
may also be broken out of their mother address space and assigned a different 
affinity node. Lastly, the process is independent of goal achievement.

Therefore, it is correct to say that, for increased efficiency, HiperDispatch 
strives to dispatch related work close together (on a node). However, as you 
could quickly prove, a heavily multitasking address space isn't necessarily 
confined to a single node.

Kind regards / Mit freundlichen Gruessen
Horst Sinram - STSM, IBM z/OS Workload and Capacity Management



Re: AW: Re: Would HiperDispatch likely delay heavy multitasking job?

2016-09-30 Thread Horst Sinram
Peter,
also within the same processor class (here, CP) affinity nodes may help each 
other - see the IEAOPTxx CCCAWMT parameter. Therefore, the number of (unparked) 
logical processors on a given affinity node does not necessarily limit the 
number of concurrently dispatched work units of an address space. Giving the 
work units a high priority can help increase the parallelism, because the 
helper node would select by priority.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: WLM Question

2016-08-30 Thread Horst Sinram
> Is there a way in batch to extract a WLM service definition and perform a 
> save/save as to a dataset that I can use for DR preprocessing purposes?

Bob, depending on your further requirements there are two RFEs (requirements) 
that you could consider voting for.
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=57448 
is pretty much along the lines of what you're asking for.
Unfortunately it wasn't created as public, so you would probably need to create 
another one (making it 'public' is always a good idea).
Then http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=73305 
includes the capability of writing to a CDS that is not active.

Of course you could begin with a programming exercise (IWMDEXTR was already 
mentioned), but if you're looking for what could be done with existing means I'd 
recommend that you
- converge (as everyone should...) to an XML-format (sequential) service 
definition (i.e., either use z/OSMF or 'save as' XML in the ISPF application)
- use naming conventions that allow you to determine the latest level of the 
service definition (yes, enforcing conventions isn't bulletproof, but probably 
good enough)
- include the latest service definition(s) in your DR backup sets

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: can a program determine the capacity setting of a z-box?

2016-04-27 Thread Horst Sinram
Sysevent REQLPDAT is z/OS only.



Re: can a program determine the capacity setting of a z-box?

2016-04-27 Thread Horst Sinram
Sysevent REQLPDAT offers quite a bit more information than Sysevent QVS 
(https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.iead200/iead200821.htm)
and could also be called unauthorized when that matters.

Horst Sinram - STSM, z/OS Workload and Capacity Management



Re: Silly problem with WLM Classification Rules

2014-12-11 Thread Horst Sinram
Hi Kees,
sorry, there is no way to classify based on the existence/specification of a 
work qualifier.
On z/OS V2.1 you could simplify your rules by creating a scheduling environment 
group but it would still require that the group specification includes all your 
scheduling environments.
Horst Sinram - z/OS Workload and Capacity Management



Re: ENF 55(SRM Shortage) event

2014-02-03 Thread Horst Sinram
Miklos,
it depends on the qualifier which variables are filled in. We intend to add 
commentary to provide that information.
In the meantime, SYS1.SAMPLIB(IRAEN55S) has a sample exit that you may find 
useful. 

Kind regards / Mit freundlichen Gruessen
Horst Sinram - IBM z/OS Workload Management



Re: Is it possible to open PCOMM session up to 50?

2013-07-18 Thread Horst Sinram
Personal Communications Version 6.0 supports up to 52 sessions. See
http://pic.dhe.ibm.com/infocenter/pcomhelp/v6r0/index.jsp?topic=%2Fcom.ibm.pcomm.doc%2Freadme%2FreadV60.html
I think you need to use sessions A-Z and a-z -- but I never tried myself :-)

Horst Sinram - IBM z/OS Workload Management




Re: why does WLM Server status change from YES to NO

2013-05-23 Thread Horst Sinram
 Any region being managed to transaction goals has *always* been managed to 
 both - startup and shutdown use the region goal.
 I don't understand why this option was introduced - and why it is only 
 recommended for TORs. Guess I'd better go find some doco (pointers 
 gratefully accepted).

Shane, 

you could refer to 
ftp://public.dhe.ibm.com/eserver/zseries/zos/wlm/WLM2012Share.pdf 
(or navigate there via http://www.ibm.com/systems/z/os/zos/features/wlm/). 
As described there, Manage to Both is intended to combine the region 
management of the TORs (allowing you to favor the TORs over the AORs) with the 
full transaction reporting that you would get from Manage to Goal of Transactions. 
It's primarily useful for systems where CICS is the predominant workload. (And, 
yes, I have seen data from quite a few large sites using it very successfully.)
Horst Sinram - IBM z/OS Workload Management



Re: why does WLM Server status change from YES to NO

2013-05-23 Thread Horst Sinram
we have a single OLTP region, which is TOR and AOR together.
Would you suggest managing the region using goal of transaction or both ?

Walter, in that scenario you would probably stick to TRANSACTION if you're 
satisfied with the way that region is currently managed by the transactions 
running in it.
Horst Sinram - IBM z/OS Workload Management



Re: Defined capacity

2013-02-28 Thread Horst Sinram
 It is probably true, that during the uncapped parts of the pattern, IRD could 
 have a chance to adjust weights, if this is what you mean by a 'better 
 chance'. 
Yes, exactly. (BTW, I don't want to promote pattern capping in general.)
 
 AFAIK, I must use the HMC to set the values again to their initial values (or 
 do a POR ;-). Altogether, I did not dare to activate IRD after activating 
 GCLs. 
At the HMC, you could set MIN=MAX=INITIAL weight to enforce the weight setting 
that you want to implement, and then transition.
Also, there are APIs that allow driving such changes automatically. These APIs 
can be used through some products (e.g., Tivoli System Automation ProcOps), or 
you could come up with homegrown automation (probably - I did not verify that 
all operations are supported, e.g., in z/OS BCPii).
Horst Sinram - IBM z/OS Workload Management



Re: WLM Intelligent Resource Director (was Defined capacity)

2013-02-27 Thread Horst Sinram
  Now we know, that Hiperdispatch also disables fundamental IRD functionality. 

That statement is not correct:
- HiperDispatch *replaces* Vary CPU Management (and only when 
HiperDispatch=YES).
- HiperDispatch provides *more* functionality and is much more efficient and 
faster than the old Vary CPU Management, such that there is no reason why one 
would want to continue with the old management in an HD=YES environment.

Horst Sinram - z/OS Workload Management



Re: Defined capacity

2013-02-27 Thread Horst Sinram
 This at least suggests that IRD Weight Management is also disabled when an 
 Lpar is softcapped by DC limits and not only by GC limits. 
Kees:
Correct - when an LPAR is being capped, regardless of whether that's due to an 
LPAR-level defined capacity or due to group capacity, IRD won't help the 
partition.

In the case of an LPAR-level DC that's likely no problem: the LPAR's 
4-hour rolling average has exceeded the installation-defined limit despite the 
fact that the LPAR already has a low weight. Granting it even more weight 
doesn't make much sense.
With group capping the situation is a bit different: the LPAR weight also 
determines the LPAR's entitlement to the group capacity. Since "weight" here is 
the current (vs. initial) weight, IRD may have contributed to managing the 
LPAR to a low entitlement. That can sometimes be problematic, e.g., depending on 
whether the importance distribution of your workload within the LPAR cluster 
has changed or not, and on how long the capping situation persists (with pattern 
capping there is obviously a much better chance).

Horst Sinram - IBM z/OS Workload Management



Re: Low priority workload

2013-02-26 Thread Horst Sinram
 Anyway, I decided to define a resource group with just 100 SUs to force 
batch down. Surprisingly batch still used up to 2000 SUs, because WLM 
promoted the batch workload due to any blockings, enqueues or locks batch 
held. So promotion by WLM might be another reason at your site that batch 
runs better than expected.
You can verify this with the Workload Activity report, it includes a 
column Service and Promoted.

Hmm, did you verify that 2000 SU/sec were in fact used at a promoted dispatch 
priority?
A z196 model 7xx (just as a typical example) delivers between 33,000 and 61,000 
SU/sec per processor.
Resource groups work by marking the work dispatchable/non-dispatchable for 
multiples of 1/64th  of the time.
Therefore, a single logical CP could deliver between roughly 500 and 950 
SU/sec. Depending on the type of resource group, the number of logical 
processors within the scope of your resource group (i.e. system or Sysplex), 
and the amount of work running at a higher priority you may well end up at that 
order of magnitude for the achievable granularity for *that* resource group in 
your environment.
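Spelled out, the granularity arithmetic per logical CP is simply:

   33,000 SU/sec / 64 ≈ 515 SU/sec
   61,000 SU/sec / 64 ≈ 953 SU/sec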

Horst Sinram - IBM z/OS Workload Management



Re: ''MVS overhead'' (as indicated by OMEGAMON)

2013-02-21 Thread Horst Sinram
Jan,

it appears the monitor is trying to tell you that you have a low capture 
ratio. Searching for that term should give you a good understanding of what 
the message really means, and turn up a lot of good advice, e.g. in the 
Effective zSeries Performance Monitoring Using Resource Measurement Facility 
publication. (IMHO the term 'overhead' is misleading; 'management time' would 
be more accurate.)
If you believe there might be a problem, you may want to start by verifying the 
LPAR configuration and IEAOPTxx settings (any obscure timing parameters? 
Anything violating recommendations?). You could also verify that the capture 
ratio is indeed low using SMF70/72 data.
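As a rough rule of thumb (the exact formulation varies with the data source), 
the capture ratio compares what was accounted to your workloads with what the 
partition actually consumed:

   capture ratio ~= CPU time captured in service/report classes (SMF 72.3)
                    divided by total CPU time consumed by the LPAR (SMF 70.1)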

Phantom weight is irrelevant in this context. It is only a vehicle to tell 
PR/SM how to cap the partition. The fact that the LPAR *is* capped at that time 
(to what extent?) may very well be relevant, though: More work, longer work 
queues...
CPUMF counters won't help you diagnose anything. CPUMF sampling could help 
- theoretically - but the data may be very hard to evaluate.

Horst Sinram - IBM z/OS Workload Management



Re: SMF Type 99 quandry

2012-08-23 Thread Horst Sinram
 According to the MVS Programming Workload Management Services manual, *a*
 Type SMF 99 record is written every policy interval (approximately 
 10seconds). 

Mark, I don't think the book is saying that. It says SRM writes type 99 
record*s* for each policy interval, or approximately once every 10 seconds. 
In the MVS System Management Facilities (SMF) book 
(http://publibz.boulder.ibm.com/epubs/pdf/iea2g2c1.pdf) you can find a summary 
of what subtypes are written. And subtypes 12-14 are written every 2 seconds, 
but they are very small as well.
Kind regards - Horst Sinram
IBM z/OS Workload Management & Capacity Management
